# SciPy Reference Guide

Release 0.10.0rc1

Written by the SciPy community

November 03, 2011

## Contents

1. SciPy Tutorial
   - 1.1 Introduction
   - 1.2 Basic functions in Numpy (and top-level scipy)
   - 1.3 Special functions (scipy.special)
   - 1.4 Integration (scipy.integrate)
   - 1.5 Optimization (scipy.optimize)
   - 1.6 Interpolation (scipy.interpolate)
   - 1.7 Fourier Transforms (scipy.fftpack)
   - 1.8 Signal Processing (scipy.signal)
   - 1.9 Linear Algebra (scipy.linalg)
   - 1.10 Sparse Eigenvalue Problems with ARPACK
   - 1.11 Statistics (scipy.stats)
   - 1.12 Multi-dimensional image processing (scipy.ndimage)
   - 1.13 File IO (scipy.io)
   - 1.14 Weave (scipy.weave)

2. API - importing from Scipy
   - 2.1 Guidelines for importing functions from Scipy
   - 2.2 API definition

3. Release Notes
   - 3.1 SciPy 0.10.0 Release Notes
   - 3.2 SciPy 0.9.0 Release Notes
   - 3.3 SciPy 0.8.0 Release Notes
   - 3.4 SciPy 0.7.2 Release Notes
   - 3.5 SciPy 0.7.1 Release Notes
   - 3.6 SciPy 0.7.0 Release Notes

4. Reference
   - 4.1 Clustering package (scipy.cluster)
   - 4.2 K-means clustering and vector quantization (scipy.cluster.vq)
   - 4.3 Hierarchical clustering (scipy.cluster.hierarchy)
   - 4.4 Constants (scipy.constants)
   - 4.5 Discrete Fourier transforms (scipy.fftpack)
   - 4.6 Integration and ODEs (scipy.integrate)
   - 4.7 Interpolation (scipy.interpolate)
   - 4.8 Input and output (scipy.io)
   - 4.9 Linear algebra (scipy.linalg)
   - 4.10 Maximum entropy models (scipy.maxentropy)
   - 4.11 Miscellaneous routines (scipy.misc)
   - 4.12 Multi-dimensional image processing (scipy.ndimage)
   - 4.13 Orthogonal distance regression (scipy.odr)
   - 4.14 Optimization and root finding (scipy.optimize)
   - 4.15 Nonlinear solvers
   - 4.16 Signal processing (scipy.signal)
   - 4.17 Sparse matrices (scipy.sparse)
   - 4.18 Sparse linear algebra (scipy.sparse.linalg)
   - 4.19 Spatial algorithms and data structures (scipy.spatial)
   - 4.20 Distance computations (scipy.spatial.distance)
   - 4.21 Special functions (scipy.special)
   - 4.22 Statistical functions (scipy.stats)
   - 4.23 Statistical functions for masked arrays (scipy.stats.mstats)
   - 4.24 C/C++ integration (scipy.weave)

Bibliography

Python Module Index

Index

SciPy (pronounced "Sigh Pie") is open-source software for mathematics, science, and engineering.

# Chapter 1: SciPy Tutorial

## 1.1 Introduction

Contents:

- Introduction
  - SciPy Organization
  - Finding Documentation

SciPy is a collection of mathematical algorithms and convenience functions built on the Numpy extension for Python. It adds significant power to the interactive Python session by exposing the user to high-level commands and classes for the manipulation and visualization of data. With SciPy, an interactive Python session becomes a data-processing and system-prototyping environment rivaling systems such as MATLAB, IDL, Octave, R-Lab, and SciLab.

The additional power of using SciPy within Python, however, is that a powerful programming language is also available for use in developing sophisticated programs and specialized applications. Scientific applications written in SciPy benefit from the development of additional modules in numerous niches of the software landscape by developers across the world. Everything from parallel programming to web and database subroutines and classes has been made available to the Python programmer. All of this power is available in addition to the mathematical libraries in SciPy.

This document provides a tutorial for the first-time user of SciPy to help get started with some of the features available in this powerful package. It is assumed that the user has already installed the package. Some general Python facility is also assumed, such as could be acquired by working through the Tutorial in the Python distribution. For further introductory help the user is directed to the Numpy documentation. For brevity and convenience, we will often assume that the main packages (numpy, scipy, and matplotlib) have been imported as:

```python
>>> import numpy as np
>>> import scipy as sp
>>> import matplotlib as mpl
>>> import matplotlib.pyplot as plt
```

These are the import conventions that our community has adopted after discussion on public mailing lists. You will see these conventions used throughout NumPy and SciPy source code and documentation. While we obviously don’t require you to follow these conventions in your own code, it is highly recommended.


### 1.1.1 SciPy Organization

SciPy is organized into subpackages covering different scientific computing domains. These are summarized in the following table:

| Subpackage  | Description                                            |
|-------------|--------------------------------------------------------|
| cluster     | Clustering algorithms                                  |
| constants   | Physical and mathematical constants                    |
| fftpack     | Fast Fourier Transform routines                        |
| integrate   | Integration and ordinary differential equation solvers |
| interpolate | Interpolation and smoothing splines                    |
| io          | Input and Output                                       |
| linalg      | Linear algebra                                         |
| maxentropy  | Maximum entropy methods                                |
| ndimage     | N-dimensional image processing                         |
| odr         | Orthogonal distance regression                         |
| optimize    | Optimization and root-finding routines                 |
| signal      | Signal processing                                      |
| sparse      | Sparse matrices and associated routines                |
| spatial     | Spatial data structures and algorithms                 |
| special     | Special functions                                      |
| stats       | Statistical distributions and functions                |
| weave       | C/C++ integration                                      |

Scipy sub-packages need to be imported separately, for example:

```python
>>> from scipy import linalg, optimize
```
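Once imported, a subpackage's routines are called directly. A minimal sketch (the determinant routine is chosen arbitrarily here, purely for illustration):

```python
from scipy import linalg

# Determinant of a small matrix via the linalg subpackage: 1*4 - 2*3 = -2.
det = linalg.det([[1, 2], [3, 4]])
print(det)
```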

Because of their ubiquity, some of the functions in these subpackages are also made available in the scipy namespace to ease their use in interactive sessions and programs. In addition, many basic array functions from numpy are also available at the top level of the scipy package. Before looking at the sub-packages individually, we will first look at some of these common functions.

### 1.1.2 Finding Documentation

Scipy and Numpy have HTML and PDF versions of their documentation available at http://docs.scipy.org/, which currently details nearly all available functionality. However, this documentation is still a work in progress, and some parts may be incomplete or sparse. As we are a volunteer organization and depend on the community for growth, your participation - everything from providing feedback to improving the documentation and code - is welcome and actively encouraged.

Python also provides the facility of documentation strings. The functions and classes available in SciPy use this method for on-line documentation. There are two methods for reading these messages and getting help. Python provides the command help in the pydoc module. Entering this command with no arguments (i.e. >>> help) launches an interactive help session that allows searching through the keywords and modules available to all of Python. Running the command help with an object as the argument displays the calling signature and the documentation string of the object.

The pydoc method of help is sophisticated but uses a pager to display the text. Sometimes this can interfere with the terminal you are running the interactive session within. A scipy-specific help system is also available under the command sp.info. The signature and documentation string for the object passed to the help command are printed to standard output (or to a writeable object passed as the third argument). The second keyword argument of sp.info defines the maximum width of the line for printing. If a module is passed as the argument to help, then a list of the functions and classes defined in that module is printed. For example:
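The pydoc machinery behind help can also be driven programmatically. A small sketch using only the standard library (this is plain Python, not a SciPy facility):

```python
import pydoc

# render_doc produces the same text that help(abs) would normally page.
text = pydoc.render_doc(abs)
print(text.splitlines()[0])
```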


```python
>>> sp.info(optimize.fmin)
 fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None,
      full_output=0, disp=1, retall=0, callback=None)

Minimize a function using the downhill simplex algorithm.

Parameters
----------
func : callable func(x,*args)
    The objective function to be minimized.
x0 : ndarray
    Initial guess.
args : tuple
    Extra arguments passed to func, i.e. ``f(x,*args)``.
callback : callable
    Called after each iteration, as callback(xk), where xk is the
    current parameter vector.

Returns
-------
xopt : ndarray
    Parameter that minimizes function.
fopt : float
    Value of function at minimum: ``fopt = func(xopt)``.
iter : int
    Number of iterations performed.
funcalls : int
    Number of function calls made.
warnflag : int
    1 : Maximum number of function evaluations made.
    2 : Maximum number of iterations reached.
allvecs : list
    Solution at each iteration.

Other parameters
----------------
xtol : float
    Relative error in xopt acceptable for convergence.
ftol : number
    Relative error in func(xopt) acceptable for convergence.
maxiter : int
    Maximum number of iterations to perform.
maxfun : number
    Maximum number of function evaluations to make.
full_output : bool
    Set to True if fopt and warnflag outputs are desired.
disp : bool
    Set to True to print convergence messages.
retall : bool
    Set to True to return list of solutions at each iteration.

Notes
-----
Uses a Nelder-Mead simplex algorithm to find the minimum of function
of one or more variables.
```

Another useful command is source. When given a function written in Python as an argument, it prints out a listing of the source code for that function. This can be helpful in learning about an algorithm or understanding exactly what a function is doing with its arguments. Also don't forget about the Python command dir, which can be used to look at the namespace of a module or package.
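The same effect as source can also be had with the standard-library inspect module, shown here as an alternative sketch (sp.source simply prints this text for you):

```python
import inspect
from scipy import optimize

# Retrieve the source listing of a SciPy function, as sp.source would print it.
src = inspect.getsource(optimize.fmin)
print(src.splitlines()[0])   # the 'def fmin(...)' line
```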

## 1.2 Basic functions in Numpy (and top-level scipy)

Contents:

- Interaction with Numpy
- Top-level scipy routines
  - Type handling
  - Index Tricks
  - Shape manipulation
  - Polynomials
  - Vectorizing functions (vectorize)
  - Other useful functions
- Common functions

### 1.2.1 Interaction with Numpy

To begin with, all of the Numpy functions have been subsumed into the scipy namespace so that all of those functions are available without additionally importing Numpy. In addition, the universal functions (addition, subtraction, division) have been altered to not raise exceptions if floating-point errors are encountered; instead, NaN's and Inf's are returned in the arrays. To assist in detection of these events, several functions (sp.isnan, sp.isfinite, sp.isinf) are available. Finally, some of the basic functions like log, sqrt, and inverse trig functions have been modified to return complex numbers instead of NaN's where appropriate (i.e. sp.sqrt(-1) returns 1j).
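In current NumPy, the complex-returning variants described above live in the numpy.emath (scimath) module; a minimal illustration of the difference (module names here reflect modern NumPy, not the sp namespace of this release):

```python
import numpy as np

# IEEE behaviour: the real-valued sqrt of a negative number is NaN.
with np.errstate(invalid='ignore'):
    plain = np.sqrt(-1.0)

# The scimath variant upcasts to complex instead of returning NaN.
smart = np.emath.sqrt(-1.0)

print(plain)   # nan
print(smart)   # 1j
```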

### 1.2.2 Top-level scipy routines

The purpose of the top level of scipy is to collect general-purpose routines that the other sub-packages can use and to provide a simple replacement for Numpy. Anytime you might think to import Numpy, you can import scipy instead and remove yourself from direct dependence on Numpy. These routines are divided into several files for organizational purposes, but they are all available under the numpy namespace (and the scipy namespace). There are routines for type handling and type checking, shape and matrix manipulation, polynomial processing, and other useful functions. Rather than giving a detailed description of each of these functions (which is available in the Numpy Reference Guide or by using the help, info and source commands), this tutorial will discuss some of the more useful commands, which require a little introduction to use to their full potential.

#### Type handling

Note the difference between sp.iscomplex/sp.isreal and sp.iscomplexobj/sp.isrealobj. The former commands are array-based and return byte arrays of ones and zeros providing the result of the element-wise test. The latter commands are object-based and return a scalar describing the result of the test on the entire object.

Often it is required to get just the real and/or imaginary part of a complex number. While complex numbers and arrays have attributes that return those values, if one is not sure whether or not the object will be complex-valued, it is better to use the functional forms sp.real and sp.imag. These functions succeed for anything that can be turned into a Numpy array. Consider also the function sp.real_if_close, which transforms a complex-valued number with a tiny imaginary part into a real number.

Occasionally the need to check whether or not a number is a scalar (Python (long)int, Python float, Python complex, or rank-0 array) occurs in coding. This functionality is provided in the convenient function sp.isscalar, which returns a 1 or a 0.

Finally, ensuring that objects are a certain Numpy type occurs often enough that it has been given a convenient interface in SciPy through the use of the sp.cast dictionary. The dictionary is keyed by the type it is desired to cast to, and the dictionary stores functions to perform the casting. Thus, sp.cast['f'](d) returns an array of sp.float32 from d. This function is also useful as an easy way to get a scalar of a certain type:

```python
>>> sp.cast['f'](sp.pi)
array(3.1415927410125732, dtype=float32)
```

#### Index Tricks

There are some class instances that make special use of the slicing functionality to provide efficient means for array construction. This part will discuss the operation of sp.mgrid, sp.ogrid, sp.r_, and sp.c_ for quickly constructing arrays.

One familiar with MATLAB may complain that it is difficult to construct arrays from the interactive session with Python. Suppose, for example, that one wants to construct an array that begins with 3 followed by 5 zeros and then contains 10 numbers spanning the range -1 to 1 (inclusive on both ends). Before SciPy, you would need to enter something like the following:

```python
>>> concatenate(([3],[0]*5,arange(-1,1.002,2/9.0)))
```

With the r_ command one can enter this as

```python
>>> r_[3,[0]*5,-1:1:10j]
```

which can ease typing and make for more readable code. Notice how objects are concatenated, and the slicing syntax is (ab)used to construct ranges. The other term that deserves a little explanation is the use of the complex number 10j as the step size in the slicing syntax. This non-standard use allows the number to be interpreted as the number of points to produce in the range rather than as a step size (note we would have used the long integer notation, 10L, but this notation may go away in Python as the integers become unified). This non-standard usage may be unsightly to some, but it gives the user the ability to quickly construct complicated vectors in a very readable fashion. When the number of points is specified in this way, the end-point is inclusive.

The "r" stands for row concatenation, because if the objects between commas are 2-dimensional arrays, they are stacked by rows (and thus must have commensurate columns). There is an equivalent command c_ that stacks 2-d arrays by columns but works identically to r_ for 1-d arrays.

Another very useful class instance which makes use of extended slicing notation is the function mgrid. In the simplest case, this function can be used to construct 1-d ranges as a convenient substitute for arange. It also allows the use of complex numbers in the step size to indicate the number of points to place between the (inclusive) end-points. The real purpose of this function, however, is to produce N, N-d arrays, which provide coordinate arrays for an N-dimensional volume. The easiest way to understand this is with an example of its usage:

```python
>>> mgrid[0:5,0:5]
array([[[0, 0, 0, 0, 0],
        [1, 1, 1, 1, 1],
        [2, 2, 2, 2, 2],
        [3, 3, 3, 3, 3],
        [4, 4, 4, 4, 4]],
       [[0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4],
        [0, 1, 2, 3, 4]]])
>>> mgrid[0:5:4j,0:5:4j]
array([[[ 0.    ,  0.    ,  0.    ,  0.    ],
        [ 1.6667,  1.6667,  1.6667,  1.6667],
        [ 3.3333,  3.3333,  3.3333,  3.3333],
        [ 5.    ,  5.    ,  5.    ,  5.    ]],
       [[ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ],
        [ 0.    ,  1.6667,  3.3333,  5.    ]]])
```

Having meshed arrays like this is sometimes very useful. However, it is not always needed just to evaluate some N-dimensional function over a grid due to the array-broadcasting rules of Numpy and SciPy. If this is the only purpose for generating a meshgrid, you should instead use the function ogrid, which generates an "open" grid using NewAxis judiciously to create N, N-d arrays where only one dimension in each array has length greater than 1. This will save memory and create the same result if the only purpose for the meshgrid is to generate sample points for evaluation of an N-d function.

#### Shape manipulation

In this category of functions are routines for squeezing out length-one dimensions from N-dimensional arrays, ensuring that an array is at least 1-, 2-, or 3-dimensional, and stacking (concatenating) arrays by rows, columns, and "pages" (in the third dimension). Routines for splitting arrays (roughly the opposite of stacking arrays) are also available.

#### Polynomials

There are two (interchangeable) ways to deal with 1-d polynomials in SciPy. The first is to use the poly1d class from Numpy. This class accepts coefficients or polynomial roots to initialize a polynomial. The polynomial object can then be manipulated in algebraic expressions, integrated, differentiated, and evaluated. It even prints like a polynomial:

```python
>>> p = poly1d([3,4,5])
>>> print p
   2
3 x + 4 x + 5
>>> print p*p
   4      3      2
9 x + 24 x + 46 x + 40 x + 25
>>> print p.integ(k=6)
   3     2
x + 2 x + 5 x + 6
>>> print p.deriv()
6 x + 4
>>> p([4, 5])
array([ 69, 100])
```

The other way to handle polynomials is as an array of coefficients, with the first element of the array giving the coefficient of the highest power. There are explicit functions to add, subtract, multiply, divide, integrate, differentiate, and evaluate polynomials represented as sequences of coefficients.

#### Vectorizing functions (vectorize)

One of the features that NumPy provides is a class vectorize to convert an ordinary Python function which accepts scalars and returns scalars into a "vectorized function" with the same broadcasting rules as other Numpy functions (i.e. the universal functions, or ufuncs). For example, suppose you have a Python function named addsubtract defined as:

```python
>>> def addsubtract(a,b):
...     if a > b:
...         return a - b
...     else:
...         return a + b
```

which defines a function of two scalar variables and returns a scalar result. The class vectorize can be used to "vectorize" this function so that

```python
>>> vec_addsubtract = vectorize(addsubtract)
```

returns a function which takes array arguments and returns an array result:

```python
>>> vec_addsubtract([0,3,6,9],[1,3,5,7])
array([1, 6, 1, 2])
```

This particular function could have been written in vector form without the use of vectorize. But what if the function you have written is the result of some optimization or integration routine? Such functions can likely only be vectorized using vectorize.

#### Other useful functions

There are several other functions in the scipy_base package, including most of the other functions that are also in the Numpy package. The reason for duplicating these functions is to allow SciPy to potentially alter their original interface and make it easier for users to know how to get access to functions:

```python
>>> from scipy import *
```

Functions which should be mentioned are mod(x,y), which can replace x % y when it is desired that the result take the sign of y instead of x, and fix, which always rounds towards zero. For doing phase processing, the functions angle and unwrap are also useful. Also, the linspace and logspace functions return equally spaced samples in a linear or log scale. Finally, it's useful to be aware of the indexing capabilities of Numpy.

Mention should be made of the function select, which extends the functionality of where to include multiple conditions and multiple choices. The calling convention is select(condlist,choicelist,default=0). select is a vectorized form of the multiple if-statement. It allows rapid construction of a function which returns an array of results based on a list of conditions. Each element of the return array is taken from the array in choicelist corresponding to the first condition in condlist that is true. For example:

```python
>>> x = r_[-2:3]
>>> x
array([-2, -1,  0,  1,  2])
>>> select([x > 3, x >= 0],[0,x+2])
array([0, 0, 2, 3, 4])
```

### 1.2.3 Common functions

Some functions depend on sub-packages of SciPy but should be available from the top level of SciPy due to their common use. These are functions that might have been placed in scipy_base except for their dependence on other sub-packages of SciPy. For example, the factorial and comb functions compute n! and n!/(k!(n − k)!) using either exact integer arithmetic (thanks to Python's Long integer object), or by using floating-point precision and the gamma function. The functions rand and randn are used so often that they warranted a place at the top level. There are convenience functions for interactive use: disp (similar to print), and who (returns a list of defined variables and memory consumption, upper-bounded). Another function returns a common image used in image processing: lena.

Finally, two functions are provided that are useful for approximating derivatives of functions using discrete differences. The function central_diff_weights returns weighting coefficients for an equally-spaced N-point approximation to the derivative of order o. These weights must be multiplied by the function corresponding to these points and the results added to obtain the derivative approximation. This function is intended for use when only samples of the function are available. When the function is an object that can be handed to a routine and evaluated, the function derivative can be used to automatically evaluate the object at the correct points to obtain an N-point approximation to the o-th derivative at a given point.
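The discrete-difference idea behind central_diff_weights and derivative can be sketched in a few lines of plain NumPy. The helper below is ours, not the SciPy function; it hard-codes the 3-point first-derivative weights [-1/2, 0, 1/2]:

```python
import numpy as np

def central_derivative(f, x0, dx=1e-3):
    # 3-point central difference: f'(x0) ~ (f(x0+dx) - f(x0-dx)) / (2*dx)
    return (f(x0 + dx) - f(x0 - dx)) / (2.0 * dx)

print(central_derivative(np.sin, 0.0))   # close to cos(0) = 1
```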

## 1.3 Special functions (scipy.special)

The main feature of the scipy.special package is the definition of numerous special functions of mathematical physics. Available functions include airy, elliptic, bessel, gamma, beta, hypergeometric, parabolic cylinder, mathieu, spheroidal wave, struve, and kelvin. There are also some low-level stats functions that are not intended for general use, as an easier interface to these functions is provided by the stats module. Most of these functions can take array arguments and return array results following the same broadcasting rules as other math functions in Numerical Python. Many of these functions also accept complex numbers as input. For a complete list of the available functions with a one-line description type >>> help(special). Each function also has its own documentation accessible using help.

If you don't see a function you need, consider writing it and contributing it to the library. You can write the function in either C, Fortran, or Python. Look in the source code of the library for examples of each of these kinds of functions.
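A quick taste of the package, using two of the functions named above (values chosen so the results are easy to check by hand):

```python
from scipy import special

# gamma(n) equals (n-1)! at positive integers; beta is built from gamma.
g = special.gamma(5)      # 4! = 24
b = special.beta(2, 3)    # gamma(2)*gamma(3)/gamma(5) = 1/12
print(g, b)
```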

### 1.3.1 Bessel functions of real order (jn, jn_zeros)

Bessel functions are a family of solutions to Bessel's differential equation with real or complex order alpha:

$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)\,y = 0$$

Among other uses, these functions arise in wave propagation problems such as the vibrational modes of a thin drum head. Here is an example of a circular drum head anchored at the edge:

```python
>>> from scipy import *
>>> from scipy.special import jn, jn_zeros
>>> def drumhead_height(n, k, distance, angle, t):
...     nth_zero = jn_zeros(n, k)
...     return cos(t)*cos(n*angle)*jn(n, distance*nth_zero)
>>> theta = r_[0:2*pi:50j]
>>> radius = r_[0:1:50j]
>>> x = array([r*cos(theta) for r in radius])
>>> y = array([r*sin(theta) for r in radius])
>>> z = array([drumhead_height(1, 1, r, theta, 0.5) for r in radius])
>>> import pylab
>>> from mpl_toolkits.mplot3d import Axes3D
>>> from matplotlib import cm
>>> fig = pylab.figure()
>>> ax = Axes3D(fig)
>>> ax.plot_surface(x, y, z, rstride=1, cstride=1, cmap=cm.jet)
>>> ax.set_xlabel('X')
>>> ax.set_ylabel('Y')
>>> ax.set_zlabel('Z')
>>> pylab.show()
```

[Figure: 3-D surface plot of the drum head height Z over the X-Y plane.]
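As a quick sanity check of the two routines used above, the value of J_0 at its first computed zero should vanish to machine precision (this check is our addition, not part of the drum-head example):

```python
from scipy.special import jn_zeros, jv

# The first zero of J_0 is about 2.4048; jv evaluates J_0 there to ~0.
z0 = jn_zeros(0, 1)[0]
print(z0, jv(0, z0))
```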

## 1.4 Integration (scipy.integrate)

The scipy.integrate sub-package provides several integration techniques including an ordinary differential equation integrator. An overview of the module is provided by the help command:

```python
>>> help(integrate)
Methods for Integrating Functions given function object.

   quad          -- General purpose integration.
   dblquad       -- General purpose double integration.
   tplquad       -- General purpose triple integration.
   fixed_quad    -- Integrate func(x) using Gaussian quadrature of order n.
   quadrature    -- Integrate with given tolerance using Gaussian quadrature.
   romberg       -- Integrate func using Romberg integration.

Methods for Integrating Functions given fixed samples.

   trapz         -- Use trapezoidal rule to compute integral from samples.
   cumtrapz      -- Use trapezoidal rule to cumulatively compute integral.
   simps         -- Use Simpson's rule to compute integral from samples.
   romb          -- Use Romberg Integration to compute integral from
                    (2**k + 1) evenly-spaced samples.

   See the special module's orthogonal polynomials (special) for Gaussian
   quadrature roots and weights for other weighting factors and regions.

Interface to numerical integrators of ODE systems.

   odeint        -- General integration of ordinary differential equations.
   ode           -- Integrate ODE using VODE and ZVODE routines.
```


### 1.4.1 General integration (quad)

The function quad is provided to integrate a function of one variable between two points. The points can be ±∞ (± inf) to indicate infinite limits. For example, suppose you wish to integrate a Bessel function jv(2.5, x) along the interval [0, 4.5]:

$$I = \int_0^{4.5} J_{2.5}(x)\,dx.$$

This could be computed using quad:

```python
>>> result = integrate.quad(lambda x: special.jv(2.5,x), 0, 4.5)
>>> print result
(1.1178179380783249, 7.8663172481899801e-09)
>>> I = sqrt(2/pi)*(18.0/27*sqrt(2)*cos(4.5)-4.0/27*sqrt(2)*sin(4.5)+
...     sqrt(2*pi)*special.fresnel(3/sqrt(pi))[0])
>>> print I
1.117817938088701
>>> print abs(result[0]-I)
1.03761443881e-11
```

The first argument to quad is a "callable" Python object (i.e. a function, method, or class instance). Notice the use of a lambda function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. Notice that in this case, the true value of this integral is

$$I = \sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos(4.5) - \frac{4}{27}\sqrt{2}\sin(4.5) + \sqrt{2\pi}\,\operatorname{Si}\!\left(\frac{3}{\sqrt{\pi}}\right)\right),$$

where

$$\operatorname{Si}(x) = \int_0^x \sin\!\left(\frac{\pi}{2}t^2\right)dt$$

is the Fresnel sine integral. Note that the numerically-computed integral is within $1.04 \times 10^{-11}$ of the exact result, well below the reported error bound.

Infinite inputs are also allowed in quad by using ± inf as one of the arguments. For example, suppose that a numerical value for the exponential integral

$$E_n(x) = \int_1^\infty \frac{e^{-xt}}{t^n}\,dt$$

is desired (and the fact that this integral can be computed as special.expn(n,x) is forgotten). The functionality of the function special.expn can be replicated by defining a new function vec_expint based on the routine quad:

```python
>>> from scipy.integrate import quad
>>> def integrand(t,n,x):
...     return exp(-x*t) / t**n
>>> def expint(n,x):
...     return quad(integrand, 1, Inf, args=(n, x))[0]
>>> vec_expint = vectorize(expint)
>>> vec_expint(3,arange(1.0,4.0,0.5))
array([ 0.1097,  0.0567,  0.0301,  0.0163,  0.0089,  0.0049])
>>> special.expn(3,arange(1.0,4.0,0.5))
array([ 0.1097,  0.0567,  0.0301,  0.0163,  0.0089,  0.0049])
```


The function which is integrated can even use the quad argument (though the error bound may underestimate the error due to possible numerical error in the integrand from the use of quad). The integral in this case is

$$I_n = \int_0^\infty \int_1^\infty \frac{e^{-xt}}{t^n}\,dt\,dx = \frac{1}{n}.$$

```python
>>> result = quad(lambda x: expint(3, x), 0, inf)
>>> print result
(0.33333333324560266, 2.8548934485373678e-09)
>>> I3 = 1.0/3.0
>>> print I3
0.333333333333
>>> print I3 - result[0]
8.77306560731e-11
```

This last example shows that multiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad and tplquad. The function dblquad performs double integration. Use the help function to be sure that the arguments are defined in the correct order. In addition, the limits on all inner integrals are actually functions (which can be constant functions). An example of using double integration to compute several values of $I_n$ is shown below:

>>> from scipy.integrate import quad, dblquad
>>> def I(n):
...     return dblquad(lambda t, x: exp(-x*t)/t**n, 0, Inf, lambda x: 1, lambda x: Inf)

>>> print I(4)
(0.25000000000435768, 1.0518245707751597e-09)
>>> print I(3)
(0.33333333325010883, 2.8604069919261191e-09)
>>> print I(2)
(0.49999999999857514, 1.8855523253868967e-09)
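The triple-integration wrapper tplquad mentioned above follows the same pattern, with the two inner limits given as functions. A minimal sketch (the integrand here is illustrative, not taken from the text): integrating x*y*z over the unit cube, whose exact value is 1/8.

```python
from scipy.integrate import tplquad

# tplquad integrates func(z, y, x), with x the outermost variable;
# the y- and z-limits are supplied as functions, even when constant.
result, abserr = tplquad(lambda z, y, x: x * y * z,
                         0, 1,                            # x-limits
                         lambda x: 0, lambda x: 1,        # y-limits
                         lambda x, y: 0, lambda x, y: 1)  # z-limits
```

As with quad and dblquad, the return value is a pair of the integral estimate and an absolute-error estimate.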

**1.4.2 Gaussian quadrature (integrate.gauss_quadtol)**

A few functions are also provided in order to perform simple Gaussian quadrature over a fixed interval. The first is fixed_quad, which performs fixed-order Gaussian quadrature. The second function is quadrature, which performs Gaussian quadrature of multiple orders until the difference in the integral estimate is beneath some tolerance supplied by the user. These functions both use the module special.orthogonal, which can calculate the roots and quadrature weights of a large variety of orthogonal polynomials (the polynomials themselves are available as special functions returning instances of the polynomial class — e.g. special.legendre).
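A quick sketch of the fixed-order routine (the integrand is illustrative, not from the text): integrating sin over [0, π] with 5-point Gaussian quadrature, whose exact value is 2.

```python
import numpy as np
from scipy.integrate import fixed_quad

# 5-point Gauss-Legendre quadrature of sin(x) on [0, pi]; exact result is 2
val, _ = fixed_quad(np.sin, 0, np.pi, n=5)
```

fixed_quad returns the estimate together with a placeholder second value; only the order n controls the accuracy.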

**1.4.3 Integrating using samples**

There are three functions for computing integrals given only samples: trapz, simps, and romb. The first two functions use Newton-Cotes formulas of order 1 and 2 respectively to perform integration, and can handle non-equally-spaced samples. The trapezoidal rule approximates the function as a straight line between adjacent points, while Simpson's rule approximates the function between three adjacent points as a parabola. If the samples are equally spaced and the number of samples available is 2^k + 1 for some integer k, then Romberg integration can be used to obtain high-precision estimates of the integral using the available samples. Romberg integration uses the trapezoid rule at step-sizes related by a power of two and then performs Richardson extrapolation on these estimates to approximate the integral with a higher degree of accuracy. (A different interface to Romberg integration, useful when the function can be provided, is also available as romberg.)
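A minimal sketch of what the trapezoidal rule computes, hand-rolled here rather than calling trapz so that the sample-based formula is explicit (the test integrand x**2 is illustrative):

```python
import numpy as np

def trapezoid_rule(y, x):
    """Composite trapezoidal rule for samples y taken at points x
    (the points need not be equally spaced)."""
    dx = np.diff(x)
    return np.sum(dx * (y[:-1] + y[1:]) / 2.0)

# 9 = 2**3 + 1 equally spaced samples of x**2 on [0, 1]; exact integral is 1/3
x = np.linspace(0, 1, 9)
approx = trapezoid_rule(x**2, x)
```

With 2^k + 1 equally spaced samples like these, romb could refine the same data by Richardson extrapolation.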


**1.4.4 Ordinary differential equations (odeint)**

Integrating a set of ordinary differential equations (ODEs) given initial conditions is another useful example. The function odeint is available in SciPy for integrating a first-order vector differential equation:

    \frac{dy}{dt} = f(y, t),

given initial conditions y(0) = y_0, where y is a length-N vector and f is a mapping from R^N to R^N. A higher-order ordinary differential equation can always be reduced to a differential equation of this type by introducing intermediate derivatives into the y vector.

For example, suppose it is desired to find the solution to the following second-order differential equation:

    \frac{d^2 w}{dz^2} - z w(z) = 0

with initial conditions w(0) = \frac{1}{3^{2/3} \Gamma(2/3)} and \left. \frac{dw}{dz} \right|_{z=0} = -\frac{1}{3^{1/3} \Gamma(1/3)}. It is known that the solution to this differential equation with these boundary conditions is the Airy function

    w = \mathrm{Ai}(z),

which gives a means to check the integrator using special.airy.

First, convert this ODE into standard form by setting y = \left[ \frac{dw}{dz}, w \right] and t = z. Thus, the differential equation becomes

    \frac{dy}{dt} = \begin{bmatrix} t y_1 \\ y_0 \end{bmatrix}
                  = \begin{bmatrix} 0 & t \\ 1 & 0 \end{bmatrix} \begin{bmatrix} y_0 \\ y_1 \end{bmatrix}
                  = \begin{bmatrix} 0 & t \\ 1 & 0 \end{bmatrix} y.

In other words, f(y, t) = A(t) y.

As an interesting reminder, if A(t) commutes with \int_0^t A(\tau) \, d\tau under matrix multiplication, then this linear differential equation has an exact solution using the matrix exponential:

    y(t) = \exp\left( \int_0^t A(\tau) \, d\tau \right) y(0).

However, in this case, A(t) and its integral do not commute.

There are many optional inputs and outputs available when using odeint which can help tune the solver. These additional inputs and outputs are not needed much of the time, however, and the three required input arguments and the output solution suffice. The required inputs are the function defining the derivative, fprime, the initial conditions vector, y0, and the time points to obtain a solution, t (with the initial value point as the first element of this sequence). The output of odeint is a matrix where each row contains the solution vector at each requested time point (thus, the initial conditions are given in the first output row).

The following example illustrates the use of odeint, including the usage of the Dfun option which allows the user to specify a gradient (with respect to y) of the function, f(y, t).

>>> from scipy.integrate import odeint
>>> from scipy.special import gamma, airy
>>> y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
>>> y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
>>> y0 = [y0_0, y1_0]
>>> def func(y, t):
...     return [t*y[1], y[0]]


>>> def gradient(y, t):
...     return [[0, t], [1, 0]]

>>> x = arange(0, 4.0, 0.01)
>>> t = x
>>> ychk = airy(x)[0]
>>> y = odeint(func, y0, t)
>>> y2 = odeint(func, y0, t, Dfun=gradient)

>>> print ychk[:36:6]
[ 0.355028  0.339511  0.324068  0.308763  0.293658  0.278806]

>>> print y[:36:6, 1]
[ 0.355028  0.339511  0.324067  0.308763  0.293658  0.278806]

>>> print y2[:36:6, 1]
[ 0.355028  0.339511  0.324067  0.308763  0.293658  0.278806]

**1.5 Optimization (scipy.optimize)**

The scipy.optimize package provides several commonly used optimization algorithms. A detailed listing is available: scipy.optimize (can also be found by help(scipy.optimize)).

The module contains:

1. Unconstrained and constrained minimization and least-squares algorithms (e.g., fmin: Nelder-Mead simplex, fmin_bfgs: BFGS, fmin_ncg: Newton Conjugate Gradient, leastsq: Levenberg-Marquardt, fmin_cobyla: COBYLA)
2. Global (brute-force) optimization routines (e.g., anneal)
3. Curve fitting (curve_fit)
4. Scalar function minimizers and root finders (e.g., Brent's method fminbound, and newton)
5. Multivariate equation system solvers (fsolve)
6. Large-scale multivariate equation system solvers (e.g. newton_krylov)

Below, several examples demonstrate their basic usage.

**1.5.1 Nelder-Mead Simplex algorithm (fmin)**

The simplex algorithm is probably the simplest way to minimize a fairly well-behaved function. It requires only function evaluations and is a good choice for simple minimization problems. However, because it does not use any gradient evaluations, it may take longer to find the minimum.

To demonstrate the minimization function, consider the problem of minimizing the Rosenbrock function of N variables:

    f(x) = \sum_{i=1}^{N-1} 100 \left( x_i - x_{i-1}^2 \right)^2 + \left( 1 - x_{i-1} \right)^2.

The minimum value of this function is 0, which is achieved when x_i = 1. This minimum can be found using the fmin routine as shown in the example below:

>>> from scipy.optimize import fmin
>>> def rosen(x):
...     """The Rosenbrock function"""
...     return sum(100.0*(x[1:]-x[:-1]**2.0)**2.0 + (1-x[:-1])**2.0)

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin(rosen, x0, xtol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 339
         Function evaluations: 571
>>> print xopt
[ 1.  1.  1.  1.  1.]

Another optimization algorithm that needs only function calls to find the minimum is Powell's method, available as fmin_powell.

**1.5.2 Broyden-Fletcher-Goldfarb-Shanno algorithm (fmin_bfgs)**

In order to converge more quickly to the solution, this routine uses the gradient of the objective function. If the gradient is not given by the user, then it is estimated using first-differences. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method typically requires fewer function calls than the simplex algorithm even when the gradient must be estimated.

To demonstrate this algorithm, the Rosenbrock function is again used. The gradient of the Rosenbrock function is the vector:

    \frac{\partial f}{\partial x_j} = \sum_{i=1}^{N-1} 200 \left( x_i - x_{i-1}^2 \right) \left( \delta_{i,j} - 2 x_{i-1} \delta_{i-1,j} \right) - 2 \left( 1 - x_{i-1} \right) \delta_{i-1,j}
                                    = 200 \left( x_j - x_{j-1}^2 \right) - 400 x_j \left( x_{j+1} - x_j^2 \right) - 2 \left( 1 - x_j \right).

This expression is valid for the interior derivatives. Special cases are

    \frac{\partial f}{\partial x_0} = -400 x_0 \left( x_1 - x_0^2 \right) - 2 \left( 1 - x_0 \right),
    \frac{\partial f}{\partial x_{N-1}} = 200 \left( x_{N-1} - x_{N-2}^2 \right).

A Python function which computes this gradient is constructed by the code-segment:

>>> def rosen_der(x):
...     xm = x[1:-1]
...     xm_m1 = x[:-2]
...     xm_p1 = x[2:]
...     der = zeros_like(x)
...     der[1:-1] = 200*(xm-xm_m1**2) - 400*(xm_p1 - xm**2)*xm - 2*(1-xm)
...     der[0] = -400*x[0]*(x[1]-x[0]**2) - 2*(1-x[0])
...     der[-1] = 200*(x[-1]-x[-2]**2)
...     return der

The calling signature for the BFGS minimization algorithm is similar to fmin with the addition of the fprime argument. An example usage of fmin_bfgs is shown in the following example which minimizes the Rosenbrock function:

>>> from scipy.optimize import fmin_bfgs

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_bfgs(rosen, x0, fprime=rosen_der)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 53
         Function evaluations: 65
         Gradient evaluations: 65
>>> print xopt
[ 1.  1.  1.  1.  1.]

**1.5.3 Newton-Conjugate-Gradient (fmin_ncg)**

The method which requires the fewest function calls and is therefore often the fastest method to minimize functions of many variables is fmin_ncg. This method is a modified Newton's method and uses a conjugate-gradient algorithm to (approximately) invert the local Hessian. Newton's method is based on fitting the function locally to a quadratic form:

    f(x) \approx f(x_0) + \nabla f(x_0) \cdot (x - x_0) + \frac{1}{2} (x - x_0)^T H(x_0) (x - x_0),

where H(x_0) is a matrix of second-derivatives (the Hessian). If the Hessian is positive definite, then the local minimum of this function can be found by setting the gradient of the quadratic form to zero, resulting in

    x_{\mathrm{opt}} = x_0 - H^{-1} \nabla f.

The inverse of the Hessian is evaluated using the conjugate-gradient method. An example of employing this method to minimize the Rosenbrock function is given below. To take full advantage of the Newton-CG method, a function which computes the Hessian must be provided. The Hessian matrix itself does not need to be constructed; only a vector which is the product of the Hessian with an arbitrary vector needs to be available to the minimization routine. As a result, the user can provide either a function to compute the Hessian matrix, or a function to compute the product of the Hessian with an arbitrary vector.

Full Hessian example:

The Hessian of the Rosenbrock function is

    H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}
           = 200 \left( \delta_{i,j} - 2 x_{i-1} \delta_{i-1,j} \right) - 400 x_i \left( \delta_{i+1,j} - 2 x_i \delta_{i,j} \right) - 400 \delta_{i,j} \left( x_{i+1} - x_i^2 \right) + 2 \delta_{i,j}
           = \left( 202 + 1200 x_i^2 - 400 x_{i+1} \right) \delta_{i,j} - 400 x_i \delta_{i+1,j} - 400 x_{i-1} \delta_{i-1,j},

if i, j ∈ [1, N − 2], with i, j ∈ [0, N − 1] defining the N × N matrix. Other non-zero entries of the matrix are

    \frac{\partial^2 f}{\partial x_0^2} = 1200 x_0^2 - 400 x_1 + 2,
    \frac{\partial^2 f}{\partial x_0 \partial x_1} = \frac{\partial^2 f}{\partial x_1 \partial x_0} = -400 x_0,
    \frac{\partial^2 f}{\partial x_{N-1} \partial x_{N-2}} = \frac{\partial^2 f}{\partial x_{N-2} \partial x_{N-1}} = -400 x_{N-2},
    \frac{\partial^2 f}{\partial x_{N-1}^2} = 200.

For example, the Hessian when N = 5 is

    H = \begin{bmatrix}
        1200 x_0^2 - 400 x_1 + 2 & -400 x_0 & 0 & 0 & 0 \\
        -400 x_0 & 202 + 1200 x_1^2 - 400 x_2 & -400 x_1 & 0 & 0 \\
        0 & -400 x_1 & 202 + 1200 x_2^2 - 400 x_3 & -400 x_2 & 0 \\
        0 & 0 & -400 x_2 & 202 + 1200 x_3^2 - 400 x_4 & -400 x_3 \\
        0 & 0 & 0 & -400 x_3 & 200
    \end{bmatrix}.

The code which computes this Hessian along with the code to minimize the function using fmin_ncg is shown in the following example:

>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess(x):
...     x = asarray(x)
...     H = diag(-400*x[:-1], 1) - diag(400*x[:-1], -1)
...     diagonal = zeros_like(x)
...     diagonal[0] = 1200*x[0]-400*x[1]+2
...     diagonal[-1] = 200
...     diagonal[1:-1] = 202 + 1200*x[1:-1]**2 - 400*x[2:]
...     H = H + diag(diagonal)
...     return H

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess=rosen_hess, avextol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 23
         Function evaluations: 26
         Gradient evaluations: 23
         Hessian evaluations: 23
>>> print xopt
[ 1.  1.  1.  1.  1.]

Hessian product example:

For larger minimization problems, storing the entire Hessian matrix can consume considerable time and memory. The Newton-CG algorithm only needs the product of the Hessian times an arbitrary vector. As a result, the user can supply code to compute this product rather than the full Hessian by setting the fhess_p keyword to the desired function. The fhess_p function should take the minimization vector as the first argument and the arbitrary vector as the second argument. Any extra arguments passed to the function to be minimized will also be passed to this function.

In this case, the product of the Rosenbrock Hessian with an arbitrary vector is not difficult to compute. If p is the arbitrary vector, then H(x) p has elements:

    H(x) p = \begin{bmatrix}
        \left( 1200 x_0^2 - 400 x_1 + 2 \right) p_0 - 400 x_0 p_1 \\
        \vdots \\
        -400 x_{i-1} p_{i-1} + \left( 202 + 1200 x_i^2 - 400 x_{i+1} \right) p_i - 400 x_i p_{i+1} \\
        \vdots \\
        -400 x_{N-2} p_{N-2} + 200 p_{N-1}
    \end{bmatrix}.

Code which makes use of the fhess_p keyword to minimize the Rosenbrock function using fmin_ncg follows:

>>> from scipy.optimize import fmin_ncg
>>> def rosen_hess_p(x, p):
...     x = asarray(x)
...     Hp = zeros_like(x)
...     Hp[0] = (1200*x[0]**2 - 400*x[1] + 2)*p[0] - 400*x[0]*p[1]
...     Hp[1:-1] = -400*x[:-2]*p[:-2]+(202+1200*x[1:-1]**2-400*x[2:])*p[1:-1] \
...                -400*x[1:-1]*p[2:]
...     Hp[-1] = -400*x[-2]*p[-2] + 200*p[-1]
...     return Hp

>>> x0 = [1.3, 0.7, 0.8, 1.9, 1.2]
>>> xopt = fmin_ncg(rosen, x0, rosen_der, fhess_p=rosen_hess_p, avextol=1e-8)
Optimization terminated successfully.
         Current function value: 0.000000
         Iterations: 22
         Function evaluations: 25
         Gradient evaluations: 22
         Hessian evaluations: 54
>>> print xopt
[ 1.  1.  1.  1.  1.]

**1.5.4 Least-squares fitting (leastsq)**

All of the previously-explained minimization procedures can be used to solve a least-squares problem provided the appropriate objective function is constructed. For example, suppose it is desired to fit a set of data {x_i, y_i} to a known model, y = f(x, p), where p is a vector of parameters for the model that need to be found. A common method for determining which parameter vector gives the best fit to the data is to minimize the sum of squares of the residuals. The residual is usually defined for each observed data-point as

    e_i(p, y_i, x_i) = \left| y_i - f(x_i, p) \right|.

An objective function to pass to any of the previous minimization algorithms to obtain a least-squares fit is

    J(p) = \sum_{i=0}^{N-1} e_i^2(p).

The leastsq algorithm performs this squaring and summing of the residuals automatically. It takes as an input argument the vector function e(p) and returns the value of p which minimizes J(p) = e^T e directly. The user is also encouraged to provide the Jacobian matrix of the function (with derivatives down the columns or across the rows). If the Jacobian is not provided, it is estimated.

An example should clarify the usage. Suppose it is believed some measured data follow a sinusoidal pattern

    y_i = A \sin\left( 2 \pi k x_i + \theta \right),

where the parameters A, k, and θ are unknown. The residual vector is

    e_i = \left| y_i - A \sin\left( 2 \pi k x_i + \theta \right) \right|.

By defining a function to compute the residuals and selecting an appropriate starting position, the least-squares fit routine can be used to find the best-fit parameters \hat{A}, \hat{k}, \hat{\theta}. This is shown in the following example:

>>> from numpy import *
>>> x = arange(0, 6e-2, 6e-2/30)
>>> A, k, theta = 10, 1.0/3e-2, pi/6
>>> y_true = A*sin(2*pi*k*x+theta)
>>> y_meas = y_true + 2*random.randn(len(x))

>>> def residuals(p, y, x):
...     A, k, theta = p
...     err = y-A*sin(2*pi*k*x+theta)
...     return err

>>> def peval(x, p):
...     return p[0]*sin(2*pi*p[1]*x+p[2])

>>> p0 = [8, 1/2.3e-2, pi/3]
>>> print array(p0)
[  8.      43.4783   1.0472]

>>> from scipy.optimize import leastsq
>>> plsq = leastsq(residuals, p0, args=(y_meas, x))
>>> print plsq[0]
[ 10.9437  33.3605   0.5834]

>>> print array([A, k, theta])
[ 10.      33.3333   0.5236]

>>> import matplotlib.pyplot as plt
>>> plt.plot(x, peval(x, plsq[0]), x, y_meas, 'o', x, y_true)
>>> plt.title('Least-squares fit to noisy data')
>>> plt.legend(['Fit', 'Noisy', 'True'])
>>> plt.show()

**1.5.5 Sequential Least-square fitting with constraints (fmin_slsqp)**

This module implements the Sequential Least SQuares Programming optimization algorithm (SLSQP):

    \min F(x)
    subject to:  C_j(x) = 0,        j = 1, ..., \mathrm{MEQ}
                 C_j(x) \ge 0,      j = \mathrm{MEQ}+1, ..., M
                 XL \le x \le XU,   I = 1, ..., N.

The following script shows examples for how constraints can be specified.

"""
This script tests fmin_slsqp using Example 14.4 from Numerical Methods for
Engineers by Steven Chapra and Raymond Canale.  This example maximizes the
function f(x) = 2*x0*x1 + 2*x0 - x0**2 - 2*x1**2, which has a maximum
at x0=2, x1=1.
"""

from scipy.optimize import fmin_slsqp
from numpy import array

def testfunc(x, *args):
    """
    Parameters
    ----------
    d : list
        A list of two elements, where d[0] represents x and d[1] represents y
        in the following equation.
    args : tuple
        First element of args is a multiplier for f.

    Returns
    -------
    res : float
        The result, equal to ``2*x*y + 2*x - x**2 - 2*y**2``.

    Since the objective function should be maximized, and the scipy
    optimizers can only minimize functions, it is necessary to multiply
    the objective function by -1 to achieve the desired solution.
    """
    try:
        sign = args[0]
    except:
        sign = 1.0
    return sign*(2*x[0]*x[1] + 2*x[0] - x[0]**2 - 2*x[1]**2)

def testfunc_deriv(x, *args):
    """ This is the derivative of testfunc, returning a numpy array
    representing df/dx and df/dy """
    try:
        sign = args[0]
    except:
        sign = 1.0
    dfdx0 = sign*(-2*x[0] + 2*x[1] + 2)
    dfdx1 = sign*(2*x[0] - 4*x[1])
    return array([ dfdx0, dfdx1 ])

def test_eqcons(x, *args):
    """ Lefthandside of the equality constraint """
    return array([ x[0]**3-x[1] ])

def test_ieqcons(x, *args):
    """ Lefthandside of inequality constraint """
    return array([ x[1]-1 ])

def test_fprime_eqcons(x, *args):
    """ First derivative of equality constraint """
    return array([ 3.0*(x[0]**2.0), -1.0 ])

def test_fprime_ieqcons(x, *args):
    """ First derivative of inequality constraint """
    return array([ 0.0, 1.0 ])

from time import time

print "Unbounded optimization."
print "Derivatives of objective function approximated."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], args=(-1.0,), iprint=2,
                    full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Unbounded optimization."
print "Derivatives of objective function provided."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], fprime=testfunc_deriv,
                    args=(-1.0,), iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Bound optimization (equality constraints)."
print "Constraints implemented via lambda function."
print "Derivatives of objective function approximated."
print "Derivatives of constraints approximated."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], args=(-1.0,),
                    eqcons=[lambda x, args: x[0]-x[1] ],
                    iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Bound optimization (equality constraints)."
print "Constraints implemented via lambda."
print "Derivatives of objective function provided."
print "Derivatives of constraints approximated."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], fprime=testfunc_deriv,
                    args=(-1.0,), eqcons=[lambda x, args: x[0]-x[1] ],
                    iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Bound optimization (equality and inequality constraints)."
print "Constraints implemented via lambda."
print "Derivatives of objective function provided."
print "Derivatives of constraints approximated."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], fprime=testfunc_deriv,
                    args=(-1.0,), eqcons=[lambda x, args: x[0]-x[1] ],
                    ieqcons=[lambda x, args: x[0]-.5 ],
                    iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Bound optimization (equality and inequality constraints)."
print "Constraints implemented via function."
print "Derivatives of objective function provided."
print "Derivatives of constraints approximated."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], fprime=testfunc_deriv,
                    args=(-1.0,), f_eqcons=test_eqcons,
                    f_ieqcons=test_ieqcons,
                    iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"

print "Bound optimization (equality and inequality constraints)."
print "Constraints implemented via function."
print "All derivatives provided."
t0 = time()
result = fmin_slsqp(testfunc, [-1.0, 1.0], fprime=testfunc_deriv,
                    args=(-1.0,), f_eqcons=test_eqcons,
                    f_ieqcons=test_ieqcons,
                    fprime_eqcons=test_fprime_eqcons,
                    fprime_ieqcons=test_fprime_ieqcons,
                    iprint=2, full_output=1)
print "Elapsed time:", 1000*(time()-t0), "ms"
print "Results", result, "\n\n"
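A much smaller sketch of the same interface (the toy problem here is illustrative, not Chapra and Canale's example): minimize x^2 + y^2 subject to x + y = 1, whose solution is x = y = 1/2.

```python
from scipy.optimize import fmin_slsqp

# equality constraint supplied via eqcons, as in the script above;
# iprint=0 suppresses the iteration report
xopt = fmin_slsqp(lambda x: x[0]**2 + x[1]**2,   # objective
                  [1.0, 0.0],                    # starting guess
                  eqcons=[lambda x: x[0] + x[1] - 1.0],
                  iprint=0)
```

The constraint functions must each return a quantity that the solver drives to zero (equalities) or keeps non-negative (inequalities).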

**1.5.6 Scalar function minimizers**

Often only the minimum of a scalar function is needed (a scalar function is one that takes a scalar as input and returns a scalar output). In these circumstances, other optimization techniques have been developed that can work faster.

Unconstrained minimization (brent)

There are actually two methods that can be used to minimize a scalar function (brent and golden), but golden is included only for academic purposes and should rarely be used. The brent method uses Brent's algorithm for locating a minimum. Optimally, a bracket should be given which contains the minimum desired. A bracket is a triple (a, b, c) such that f(a) > f(b) < f(c) and a < b < c. If this is not given, then alternatively two starting points can be chosen, and a bracket will be found from these points using a simple marching algorithm. If these two starting points are not provided, 0 and 1 will be used (this may not be the right choice for your function and result in an unexpected minimum being returned).

Bounded minimization (fminbound)

Thus far all of the minimization routines described have been unconstrained minimization routines. Very often, however, there are constraints that can be placed on the solution space before minimization occurs. The fminbound function is an example of a constrained minimization procedure that provides a rudimentary interval constraint for scalar functions. The interval constraint allows the minimization to occur only between two fixed endpoints.

For example, to find the minimum of J_1(x) near x = 5, fminbound can be called using the interval [4, 7] as a constraint. The result is x_min = 5.3314:

>>> from scipy.special import j1
>>> from scipy.optimize import fminbound
>>> xmin = fminbound(j1, 4, 7)
>>> print xmin
5.33144184241

**1.5.7 Root finding**

Sets of equations

To find the roots of a polynomial, the command roots is useful. To find a root of a set of non-linear equations, the command fsolve is needed. For example, the following example finds the roots of the single-variable transcendental equation

    x + 2 \cos(x) = 0,

and the set of non-linear equations

    x_0 \cos(x_1) = 4,
    x_0 x_1 - x_1 = 5.

>>> def func(x):
...     return x + 2*cos(x)

>>> def func2(x):
...     out = [x[0]*cos(x[1]) - 4]
...     out.append(x[1]*x[0] - x[1] - 5)
...     return out

>>> from scipy.optimize import fsolve
>>> x0 = fsolve(func, 0.3)
>>> print x0
-1.02986652932

>>> x02 = fsolve(func2, [1, 1])
>>> print x02
[ 6.50409711  0.90841421]

The results are x = -1.0299 and (x_0, x_1) = (6.5041, 0.9084).

Scalar function root finding

If one has a single-variable equation, there are four different root finder algorithms that can be tried. Each of these root finding algorithms requires the endpoints of an interval where a root is suspected (because the function changes signs). In general, brentq is the best choice, but the other methods may be useful in certain circumstances or for academic purposes.

Fixed-point solving

A problem closely related to finding the zeros of a function is the problem of finding a fixed-point of a function. A fixed point of a function is the point at which evaluation of the function returns the point: g(x) = x. Clearly the fixed point of g is the root of f(x) = g(x) - x. Equivalently, the root of f is the fixed point of g(x) = f(x) + x. The routine fixed_point provides a simple iterative method using Aitken's sequence acceleration to estimate the fixed point of g, given a starting point.
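The transcendental equation above can equally be solved with the bracketing root finder, and fixed_point can be demonstrated on g(x) = cos(x) (the cosine fixed point is an illustrative choice, not from the text):

```python
import numpy as np
from scipy.optimize import brentq, fixed_point

# brentq needs an interval on which the function changes sign
root = brentq(lambda x: x + 2*np.cos(x), -2, 2)

# fixed point of g(x) = cos(x), i.e. the x satisfying cos(x) == x
xfix = fixed_point(np.cos, 0.5)
```

brentq returns the same root that fsolve found from the starting guess 0.3, but with a guarantee that the root lies in the supplied bracket.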
**1.5.8 Root finding: Large problems**

The fsolve function cannot deal with a very large number of variables (N), as it needs to calculate and invert a dense N × N Jacobian matrix on every Newton step. This becomes rather inefficient when N grows.

Consider for instance the following problem: we need to solve the following integro-differential equation on the square [0,1] × [0,1]:

    \left( \partial_x^2 + \partial_y^2 \right) P + 5 \left( \int_0^1 \int_0^1 \cosh(P) \, dx \, dy \right)^2 = 0

with the boundary condition P(x, 1) = 1 on the upper edge and P = 0 elsewhere on the boundary of the square. This can be done by approximating the continuous function P by its values on a grid, P_{n,m} ≈ P(nh, mh), with a small grid spacing h. The derivatives and integrals can then be approximated; for instance

    \partial_x^2 P(x, y) \approx \frac{P(x+h, y) - 2 P(x, y) + P(x-h, y)}{h^2}.

The problem is then equivalent to finding the root of some function residual(P), where P is a vector of length N_x N_y.

Now, because N_x N_y can be large, fsolve will take a long time to solve this problem. The solution can however be found using one of the large-scale solvers in scipy.optimize, for example newton_krylov, broyden2, or anderson. These use what is known as the inexact Newton method, which, instead of computing the Jacobian matrix exactly, forms an approximation for it.

The problem we have can now be solved as follows:

import numpy as np
from scipy.optimize import newton_krylov
from numpy import cosh, zeros_like, mgrid, zeros

# parameters
nx, ny = 75, 75
hx, hy = 1./(nx-1), 1./(ny-1)

P_left, P_right = 0, 0
P_top, P_bottom = 1, 0

def residual(P):
    d2x = zeros_like(P)
    d2y = zeros_like(P)

    d2x[1:-1] = (P[2:] - 2*P[1:-1] + P[:-2]) / hx/hx
    d2x[0] = (P[1] - 2*P[0] + P_left)/hx/hx
    d2x[-1] = (P_right - 2*P[-1] + P[-2])/hx/hx

    d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
    d2y[:,0] = (P[:,1] - 2*P[:,0] + P_bottom)/hy/hy
    d2y[:,-1] = (P_top - 2*P[:,-1] + P[:,-2])/hy/hy

    return d2x + d2y + 5*cosh(P).mean()**2

# solve
guess = zeros((nx, ny), float)
sol = newton_krylov(residual, guess, verbose=1)
#sol = broyden2(residual, guess, max_rank=50, verbose=1)
#sol = anderson(residual, guess, M=10, verbose=1)
print 'Residual', abs(residual(sol)).max()

# visualize
import matplotlib.pyplot as plt
x, y = mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
plt.pcolor(x, y, sol)
plt.colorbar()
plt.show()

Still too slow? Preconditioning

When looking for the zero of the functions f_i(x) = 0, i = 1, 2, ..., N, the newton_krylov solver spends most of its time inverting the Jacobian matrix,

    J_{ij} = \frac{\partial f_i}{\partial x_j}.

If you have an approximation for the inverse matrix M ≈ J^{-1}, you can use it for preconditioning the linear inversion problem. The idea is that instead of solving J s = y one solves M J s = M y: since matrix M J is "closer" to the identity matrix than J is, the equation should be easier for the Krylov method to deal with.

The matrix M can be passed to newton_krylov as the inner_M parameter. It can be a (sparse) matrix or a scipy.sparse.linalg.LinearOperator instance.

For the problem in the previous section, we note that the function to solve consists of two parts: the first one is application of the Laplace operator, \left[ \partial_x^2 + \partial_y^2 \right] P, and the second is the integral. We can actually easily compute the Jacobian corresponding to the Laplace operator part: we know that in one dimension

    \partial_x^2 \approx \frac{1}{h_x^2} \begin{bmatrix} -2 & 1 & 0 & \cdots \\ 1 & -2 & 1 & \cdots \\ 0 & 1 & -2 & \cdots \\ \vdots & & & \ddots \end{bmatrix} = h_x^{-2} L,

so that the whole 2-D operator is represented by

    J_1 = \partial_x^2 + \partial_y^2 \simeq h_x^{-2} L \otimes I + h_y^{-2} I \otimes L.

The matrix J_2 of the Jacobian corresponding to the integral is more difficult to calculate, and since all of its entries are nonzero, it will be difficult to invert. J_1 on the other hand is a relatively simple matrix, and can be inverted by scipy.sparse.linalg.splu (or the inverse can be approximated by scipy.sparse.linalg.spilu). So we are content to take M ≈ J_1^{-1} and hope for the best.

In the example below, we use the preconditioner M = J_1^{-1}.

import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse import spdiags, spkron
from scipy.sparse.linalg import spilu, LinearOperator
from numpy import cosh, zeros_like, mgrid, zeros, eye

# parameters
nx, ny = 75, 75
hx, hy = 1./(nx-1), 1./(ny-1)

P_left, P_right = 0, 0
P_top, P_bottom = 1, 0

def get_preconditioner():
    """Compute the preconditioner M"""
    diags_x = zeros((3, nx))
    diags_x[0,:] = 1/hx/hx
    diags_x[1,:] = -2/hx/hx
    diags_x[2,:] = 1/hx/hx
    Lx = spdiags(diags_x, [-1,0,1], nx, nx)

    diags_y = zeros((3, ny))
    diags_y[0,:] = 1/hy/hy
    diags_y[1,:] = -2/hy/hy
    diags_y[2,:] = 1/hy/hy
    Ly = spdiags(diags_y, [-1,0,1], ny, ny)

    J1 = spkron(Lx, eye(ny)) + spkron(eye(nx), Ly)

    # Now we have the matrix `J_1`. We need to find its inverse `M` --
    # however, since an approximate inverse is enough, we can use
    # the *incomplete LU* decomposition
    J1_ilu = spilu(J1)

    # This returns an object with a method .solve() that evaluates
    # the corresponding matrix-vector product. We need to wrap it into
    # a LinearOperator before it can be passed to the Krylov methods:
    M = LinearOperator(shape=(nx*ny, nx*ny), matvec=J1_ilu.solve)
    return M

def solve(preconditioning=True):
    """Compute the solution"""
    count = [0]

    def residual(P):
        count[0] += 1

        d2x = zeros_like(P)
        d2y = zeros_like(P)

        d2x[1:-1] = (P[2:] - 2*P[1:-1] + P[:-2])/hx/hx
        d2x[0] = (P[1] - 2*P[0] + P_left)/hx/hx
        d2x[-1] = (P_right - 2*P[-1] + P[-2])/hx/hx

        d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2])/hy/hy
        d2y[:,0] = (P[:,1] - 2*P[:,0] + P_bottom)/hy/hy
        d2y[:,-1] = (P_top - 2*P[:,-1] + P[:,-2])/hy/hy

        return d2x + d2y + 5*cosh(P).mean()**2

    # preconditioner
    if preconditioning:
        M = get_preconditioner()
    else:
        M = None

    # solve
    guess = zeros((nx, ny), float)
    sol = newton_krylov(residual, guess, verbose=1, inner_M=M)
    print 'Residual', abs(residual(sol)).max()
    print 'Evaluations', count[0]
    return sol

def main():
    sol = solve(preconditioning=True)

    # visualize
    import matplotlib.pyplot as plt
    x, y = mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
    plt.clf()
    plt.pcolor(x, y, sol)
    plt.clim(0, 1)
    plt.colorbar()
    plt.show()

if __name__ == "__main__":
    main()

Resulting run, first without preconditioning:

 0:  |F(x)| = 803.614; step 1; tol 0.000257947
 1:  |F(x)| = 345.912; step 1; tol 0.166755
 2:  |F(x)| = 139.159; step 1; tol 0.145657
 3:  |F(x)| = 27.3682; step 1; tol 0.0348109
 4:  |F(x)| = 1.03303; step 1; tol 0.00128227
 5:  |F(x)| = 0.0406634; step 1; tol 0.00139451
 6:  |F(x)| = 0.00344341; step 1; tol 0.00645373
 7:  |F(x)| = 0.000153671; step 1; tol 0.00179246
 8:  |F(x)| = 7.5424e-06; step 1; tol 0.00110945
Residual 9.57078908664e-07
Evaluations 317

and then with preconditioning:

 0:  |F(x)| = 136.993; step 1; tol 0.00173256
 1:  |F(x)| = 4.80983; step 1; tol 0.00149362
 2:  |F(x)| = 0.195942; step 1; tol 7.49599e-06
 3:  |F(x)| = 0.000563597; step 1; tol 7.44604e-06
 4:  |F(x)| = 1.00698e-09; step 1; tol 2.87308e-12
Residual 3.29603061195e-11
Evaluations 77

Using a preconditioner reduced the number of evaluations of the residual function by a factor of 4.
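The incomplete-LU preconditioner pattern used above can be sketched in isolation (the small tridiagonal matrix here is a stand-in for J_1; for a tridiagonal matrix the incomplete factorization is in fact the exact LU factorization):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spilu, LinearOperator

# a small 1-D Laplacian standing in for J_1 (CSC format, as spilu expects)
n = 50
J1 = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n), format='csc')

J1_ilu = spilu(J1)
M = LinearOperator((n, n), matvec=J1_ilu.solve)

# M approximates J1^{-1}: applying M to J1*v should give back v
v = np.ones(n)
w = M.matvec(J1 @ v)
```

The resulting LinearOperator is exactly the kind of object newton_krylov accepts as inner_M.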

For problems where the residual is expensive to compute, good preconditioning can be crucial — it can even decide whether the problem is solvable in practice or not.

Preconditioning is an art, science, and industry. Here we were lucky in making a simple choice that worked reasonably well, but there is a lot more depth to this topic than is shown here.

1.6 Interpolation (scipy.interpolate)

Contents
• Interpolation (scipy.interpolate)
  – 1-D interpolation (interp1d)
  – Multivariate data interpolation (griddata)
  – Spline interpolation
    * Spline interpolation in 1-d: Procedural (interpolate.splXXX)
    * Spline interpolation in 1-d: Object-oriented (UnivariateSpline)
    * Two-dimensional spline representation: Procedural (bisplrep)
    * Two-dimensional spline representation: Object-oriented (BivariateSpline)
  – Using radial basis functions for smoothing/interpolation
    * 1-d Example
    * 2-d Example

There are several general interpolation facilities available in SciPy, for data in 1, 2, and higher dimensions:

• A class representing an interpolant (interp1d) in 1-D, offering several interpolation methods.
• Convenience function griddata offering a simple interface to interpolation in N dimensions. An object-oriented interface to the underlying routines is also available.
• Functions for 1- and 2-dimensional (smoothed) cubic-spline interpolation, based on the FORTRAN library FITPACK. There are both procedural and object-oriented interfaces for the FITPACK library.
• Interpolation using radial basis functions.

1.6.1 1-D interpolation (interp1d)

The interp1d class in scipy.interpolate is a convenient method to create a function based on fixed data points, which can be evaluated anywhere within the domain defined by the given data using linear interpolation. An instance of this class is created by passing the 1-d vectors comprising the data. The instance defines a __call__ method and can therefore be treated like a function that interpolates between known data values to obtain unknown values (it also has a docstring for help). Behavior at the boundary can be specified at instantiation time. The following example demonstrates its use, for linear and cubic spline interpolation:

>>> from scipy.interpolate import interp1d

>>> x = np.linspace(0, 10, 10)
>>> y = np.exp(-x/3.0)
>>> f = interp1d(x, y)
>>> f2 = interp1d(x, y, kind='cubic')

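As a quick numeric sanity check (a small new sketch, not part of the original guide), both kinds of interpolant reproduce the data exactly at the knots, and the cubic variant tracks a smooth function more closely between them:

```python
import numpy as np
from scipy.interpolate import interp1d

x = np.linspace(0, 10, 11)
y = np.cos(x)
f = interp1d(x, y)                 # linear (the default)
f2 = interp1d(x, y, kind='cubic')  # cubic spline

# both pass through the data points themselves
err_knot = abs(float(f2(3.0)) - np.cos(3.0))

# between knots, cubic is closer to the underlying function than linear
err_lin = abs(float(f(3.5)) - np.cos(3.5))
err_cub = abs(float(f2(3.5)) - np.cos(3.5))
print(err_knot, err_lin, err_cub)
```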
>>> xnew = np.linspace(0, 10, 40)
>>> import matplotlib.pyplot as plt
>>> plt.plot(x, y, 'o', xnew, f(xnew), '-', xnew, f2(xnew), '--')
>>> plt.legend(['data', 'linear', 'cubic'], loc='best')
>>> plt.show()

[Figure: the data points together with the linear and cubic interpolants.]

1.6.2 Multivariate data interpolation (griddata)

Suppose you have multidimensional data, for instance, for an underlying function f(x, y) you only know the values at points (x[i], y[i]) that do not form a regular grid.

Suppose we want to interpolate the 2-D function

>>> def func(x, y):
>>>     return x*(1-x)*np.cos(4*np.pi*x) * np.sin(4*np.pi*y**2)**2

on a grid in [0, 1] x [0, 1]

>>> grid_x, grid_y = np.mgrid[0:1:100j, 0:1:200j]

but we only know its values at 1000 data points:

>>> points = np.random.rand(1000, 2)
>>> values = func(points[:,0], points[:,1])

This can be done with griddata – below we try out all of the interpolation methods:

>>> from scipy.interpolate import griddata
>>> grid_z0 = griddata(points, values, (grid_x, grid_y), method='nearest')
>>> grid_z1 = griddata(points, values, (grid_x, grid_y), method='linear')
>>> grid_z2 = griddata(points, values, (grid_x, grid_y), method='cubic')

One can see that the exact result is reproduced by all of the methods to some degree, but for this smooth function the piecewise cubic interpolant gives the best results:

>>> import matplotlib.pyplot as plt
>>> plt.subplot(221)
>>> plt.imshow(func(grid_x, grid_y).T, extent=(0, 1, 0, 1), origin='lower')
>>> plt.plot(points[:,0], points[:,1], 'k.', ms=1)
>>> plt.title('Original')

>>> plt.subplot(222)
>>> plt.imshow(grid_z0.T, extent=(0, 1, 0, 1), origin='lower')
>>> plt.title('Nearest')
>>> plt.subplot(223)
>>> plt.imshow(grid_z1.T, extent=(0, 1, 0, 1), origin='lower')
>>> plt.title('Linear')
>>> plt.subplot(224)
>>> plt.imshow(grid_z2.T, extent=(0, 1, 0, 1), origin='lower')
>>> plt.title('Cubic')
>>> plt.gcf().set_size_inches(6, 6)
>>> plt.show()

[Figure: four panels showing the original function and its 'nearest', 'linear' and 'cubic' griddata reconstructions.]

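The method options can also be compared without plotting. In this small new sketch (values chosen for illustration only), the underlying function is a plane, which piecewise-linear interpolation reproduces exactly inside the convex hull of the data:

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.RandomState(0)
points = rng.rand(400, 2)                 # scattered points in the unit square
values = points[:, 0] + points[:, 1]      # a plane: z = x + y

xi = np.array([[0.5, 0.5], [0.25, 0.75]])
z_lin = griddata(points, values, xi, method='linear')
print(z_lin)                              # both target points lie on z = x + y
```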
1.6.3 Spline interpolation

Spline interpolation in 1-d: Procedural (interpolate.splXXX)

Spline interpolation requires two essential steps: (1) a spline representation of the curve is computed, and (2) the spline is evaluated at the desired points. In order to find the spline representation, there are two different ways to represent a curve and obtain (smoothing) spline coefficients: directly and parametrically. The direct method finds the spline representation of a curve in a two-dimensional plane using the function splrep. The first two arguments are the only ones required, and these provide the x and y components of the curve. The normal output is a 3-tuple, (t, c, k), containing the knot-points t, the coefficients c and the order k of the spline. The default spline order is cubic, but this can be changed with the input keyword, k.

The keyword argument, s, is used to specify the amount of smoothing to perform during the spline fit. The default value of s is s = m − sqrt(2m) where m is the number of data-points being fit. Therefore, if no smoothing is desired a value of s = 0 should be passed to the routines.

Once the spline representation of the data has been determined, functions are available for evaluating the spline (splev) and its derivatives (splev, spalde) at any point and the integral of the spline between any two points (splint). In addition, for cubic splines (k = 3) with 8 or more knots, the roots of the spline can be estimated (sproot). These functions are demonstrated in the example that follows.

>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> from scipy import interpolate

Cubic-spline

>>> x = np.arange(0, 2*np.pi+np.pi/4, 2*np.pi/8)
>>> y = np.sin(x)
>>> tck = interpolate.splrep(x, y, s=0)
>>> xnew = np.arange(0, 2*np.pi, np.pi/50)
>>> ynew = interpolate.splev(xnew, tck, der=0)

>>> plt.figure()
>>> plt.plot(x, y, 'x', xnew, ynew, xnew, np.sin(xnew), x, y, 'b')
>>> plt.legend(['Linear', 'Cubic Spline', 'True'])
>>> plt.axis([-0.05, 6.33, -1.05, 1.05])
>>> plt.title('Cubic-spline interpolation')
>>> plt.show()

[Figure: cubic-spline interpolation of sin(x).]
For curves in N -dimensional space the function splprep allows deﬁning the curve parametrically. The normal output is a 3-tuple.05. but this can be changed with the input keyword. In addition.plot(x.pyplot as plt >>> from scipy import interpolate Cubic-spline >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> x = np.pi/50) ynew = interpolate. k) .legend([’Linear’.33. is used to specify the amount of smoothing to perform during the spline ﬁt. k. t .ynew.dimensional plane using the function splrep. for cubic splines ( k = 3 ) with 8 or more knots.’Cubic Spline’. The length of each array is the number of curve points.pi.arange(0.y.0rc1 1. functions are available for evaluating the spline (splev) and its derivatives (splev. The default spline order is cubic. Release 0.1. SciPy Tutorial . In order to ﬁnd the spline representation.
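The tck representation returned by splrep feeds all of the evaluation routines mentioned above. A compact plot-free sketch (new code; the grid size and tolerances are illustrative choices):

```python
import numpy as np
from scipy import interpolate

x = np.linspace(0, 2 * np.pi, 15)
y = np.sin(x)
tck = interpolate.splrep(x, y, s=0)            # s=0: interpolating spline

knot_err = np.abs(interpolate.splev(x, tck) - y).max()   # exact at the data
deriv = float(interpolate.splev(np.pi / 2, tck, der=1))  # ~ cos(pi/2) = 0
area = interpolate.splint(0, np.pi, tck)                 # ~ integral of sin = 2
print(knot_err, deriv, area)
```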

’--’) plt.shape. Interpolation (scipy.cos(xnew).0 0 1 Cubic-spline interpolation Linear Cubic Spline True 2 3 4 5 6 Derivative of spline >>> >>> >>> >>> >>> >>> >>> yder = interpolate.1.0 0.der=1) plt.show() 1.5 1.tck.0 0.-1. Release 0.05.dtype) >>> for n in xrange(len(out)): >>> out[n] = interpolate.33.np.05]) plt.0 0.0rc1 1.plot(xnew.10.0 0 1 Derivative estimation from spline Cubic Spline True 2 3 4 5 6 Integral of spline >>> def integ(x.constant=-1): >>> x = np.zeros(x. ’True’]) plt.5 0.figure() plt.0 0.yder.title(’Derivative estimation from spline’) plt.tck.axis([-0.xnew.x[n].6.interpolate) 33 .legend([’Cubic Spline’.5 1.tck) >>> out += constant 1.6.splev(xnew.05.atleast_1d(x) >>> out = np. dtype=x.splint(0.SciPy Reference Guide.5 0.

arange(0.legend([’Cubic Spline’.arange(0.splprep([x..1416] Parametric spline >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> t = np.pi*t) tck.s=0) unew = np.5 0.10.sproot(tck) [ 0.title(’Spline of parametrically-defined curve’) plt.y.tck) plt.axis([-0.yint.1.xnew.05. ’True’]) plt.out[1]. ’True’]) plt.plot(x.’Cubic Spline’.’x’.np. 3.show() 1.0.show() 34 Chapter 1.x.figure() plt. SciPy Tutorial .SciPy Reference Guide.’--’) plt.cos(2*np.plot(xnew.figure() plt.1.1.tck) plt.pi*unew).’b’) plt.0 0.1.cos(2*np.05.-np.1.splev(unew.pi*t) y = np.pi*unew).5 1.05.0rc1 >>> >>> >>> >>> >>> >>> >>> >>> >>> return out yint = integ(xnew.legend([’Linear’.sin(2*np.sin(2*np. Release 0.title(’Integral estimation from spline’) plt.-1.0 0.cos(xnew).01) out = interpolate.05.1) x = np.6.05]) plt.01.-1.1.33.np.05]) plt.y.05.0 0 1 Integral estimation from spline Cubic Spline True 2 3 4 5 6 Roots of spline >>> print interpolate.y].u = interpolate.out[0].axis([-1.

and are created with the x and y components of the curve provided as arguments to the constructor. or change the character of the spline.0 Spline interpolation in 1-d: Object-oriented (UnivariateSpline) The spline-ﬁtting capabilities described above are also available via an objected-oriented interface.1.05]) 1. It is a subclass of UnivariateSpline that always passes through all points (equivalent to forcing the smoothing parameter to 0).0 0. and roots to be computed for the spline.5 1.2*np. returning the interpolated y-values.0 0.-1.5 0.plot(x.arange(0. This class is demonstrated in the example below. to interpolate in some domains and smooth in others. Release 0.pi/8) y = np.2*np.pi+np.’x’.pi/50) ynew = s(xnew) plt. the InterpolatedUnivariateSpline class is available. but rather a smoothing spline.pyplot as plt >>> from scipy import interpolate InterpolatedUnivariateSpline >>> >>> >>> >>> >>> >>> >>> >>> >>> x = np.6.SciPy Reference Guide.5 0. allowing the object to be called with the x-axis values at which the spline should be evaluated. The one dimensional splines are objects of the UnivariateSpline class.ynew.legend([’Linear’.sin(xnew). Interpolation (scipy. derivatives.xnew.sin(x) s = interpolate.05.’InterpolatedUnivariateSpline’.axis([-0. derivatives. with the same meaning as the s keyword of the splrep function described above.0 0.33. This results in a spline that has fewer knots than the number of data points.InterpolatedUnivariateSpline(x.y. The UnivariateSpline class can also be used to smooth data by providing a non-zero value of the smoothing parameter s.xnew.y.interpolate) 35 . This allows creation of customized splines with non-linear spacing.pi.y) xnew = np.figure() plt. This is shown in the example below for the subclass InterpolatedUnivariateSpline. >>> import numpy as np >>> import matplotlib.5 1.2*np.pi/4.arange(0.np.x. It allows the user to specify the number and location of internal knots as explicitly with the parameter t. 
allowing deﬁnite integrals.10. ’True’]) plt. The class deﬁnes __call__. and hence is no longer strictly an interpolating spline.0 Spline of parametrically-defined curve Linear Cubic Spline True 0. and roots methods are also available on UnivariateSpline objects.0rc1 1.6. If this is not desired.0 1.’b’) plt.05.np. The LSQUnivarateSpline is the other subclass of UnivarateSpline. The methods integral.

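The effect of the smoothing parameter can be seen directly in the number of knots the fit keeps. A small new sketch (the value s=2 is an arbitrary illustrative choice, larger than the expected residual of the noisy data):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline, InterpolatedUnivariateSpline

rng = np.random.RandomState(1)
x = np.linspace(0, 2 * np.pi, 50)
y = np.sin(x) + 0.1 * rng.randn(50)

s_interp = InterpolatedUnivariateSpline(x, y)  # passes through every point
s_smooth = UnivariateSpline(x, y, s=2.0)       # smoothing spline, fewer knots

print(len(s_interp.get_knots()), len(s_smooth.get_knots()))
print(s_smooth.get_residual())   # sum of squared residuals, bounded by s
```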
[Figure: InterpolatedUnivariateSpline fit compared with linear interpolation and the true curve.]

LSQUnivariateSpline with non-uniform knots

>>> t = [np.pi/2-.1, np.pi/2+.1, 3*np.pi/2-.1, 3*np.pi/2+.1]
>>> s = interpolate.LSQUnivariateSpline(x, y, t, k=2)
>>> ynew = s(xnew)

>>> plt.figure()
>>> plt.plot(x, y, 'x', xnew, ynew, xnew, np.sin(xnew), x, y, 'b')
>>> plt.legend(['Linear', 'LSQUnivariateSpline', 'True'])
>>> plt.axis([-0.05, 6.33, -1.05, 1.05])
>>> plt.title('Spline with Specified Interior Knots')
>>> plt.show()

[Figure: least-squares spline with the specified interior knots.]

Two-dimensional spline representation: Procedural (bisplrep)

For (smooth) spline-fitting to a two-dimensional surface, the function bisplrep is available. This function takes as required inputs the 1-D arrays x, y, and z, which represent points on the surface z = f(x, y). The default output is a list [tx, ty, c, kx, ky] whose entries represent respectively the components of the knot positions, the coefficients of the spline, and the order of the spline in each coordinate. It is convenient to hold this list in a single object, tck, so that it can be passed easily to the function bisplev. The keyword, s, can be used to change the amount of smoothing performed on the data while determining the appropriate spline. The default value is s = m − sqrt(2m) where m is the number of data points in the x, y, and z vectors. As a result, if no smoothing is desired, then s = 0 should be passed to bisplrep.

To evaluate the two-dimensional spline and its partial derivatives (up to the order of the spline), the function bisplev is required. This function takes as the first two arguments two 1-D arrays whose cross-product specifies the domain over which to evaluate the spline. The third argument is the tck list returned from bisplrep. If desired, the fourth and fifth arguments provide the orders of the partial derivative in the x and y direction, respectively.

It is important to note that two-dimensional interpolation should not be used to find the spline representation of images. The algorithm used is not amenable to large numbers of input points. The signal processing toolbox contains more appropriate algorithms for finding the spline representation of an image. The two-dimensional interpolation commands are intended for use when interpolating a two-dimensional function as shown in the example that follows. This example uses the mgrid command in SciPy which is useful for defining a "mesh-grid" in many dimensions. (See also the ogrid command if the full-mesh is not needed.)

>>> import numpy as np
>>> from scipy import interpolate
>>> import matplotlib.pyplot as plt

Define function over sparse 20x20 grid

>>> x, y = np.mgrid[-1:1:20j, -1:1:20j]
>>> z = (x+y)*np.exp(-6.0*(x*x+y*y))

>>> plt.figure()
>>> plt.pcolor(x, y, z)
>>> plt.colorbar()
>>> plt.title("Sparsely sampled function.")
>>> plt.show()

[Figure: the sparsely sampled function on the 20x20 grid.]
The number of output arguments and the number of dimensions of each argument is determined by the number of indexing objects passed in mgrid. (See also the ogrid command if the full-mesh is not needed). If desired. Interpolation (scipy. and z vectors.0 1.figure() plt. It is important to note that two dimensional interpolation should not be used to ﬁnd the spline representation of images. tck. The third argument is the tck list returned from bisplrep. The algorithm used is not amenable to large numbers of input points.20 0. This example uses the mgrid command in SciPy which is useful for deﬁning a “mesh-grid “in many dimensions. the coefﬁcients of the spline. y) . This function takes as required inputs the 1-D arrays x. The default output is a list [tx.6.5 0.

interpolate import Rbf.ynew = np.figure() plt.0 0. allowing objects to be instantiated that can be called to compute the spline value by passing in the two coordinates as the two arguments.linspace(0.sin(x) xi = np.") plt. SciPy Tutorial .6.15 0. 0.0 Interpolated function. 10.5 1.title("Interpolated function.tck) >>> >>> >>> >>> >>> plt.bisplrep(x.znew) plt.0].:].10 0.colorbar() plt.s=0) >>> znew = interpolate.10 0. >>> import numpy as np >>> from scipy. Release 0.ynew. 9) y = np.z.00 0.mgrid[-1:1:70j.15 0. 10.bisplev(xnew[:. but should be used with caution for extrapolation outside of the observed data range.ynew[0.show() 1.20 0.05 0.0 Two-dimensional spline representation: Object-oriented (BivariateSpline) The BivariateSpline class is the 2-dimensional analog of the UnivariateSpline class. 1.4 Using radial basis functions for smoothing/interpolation Radial basis functions can be used for smoothing/interpolating scattered data in n-dimensions.05 0.5 0. It and its subclasses implement the FITPACK functions described above in an object oriented fashion.10.0 1. InterpolatedUnivariateSpline >>> import matplotlib. 101) 38 Chapter 1.0rc1 Interpolate function over new 70x70 grid >>> xnew.pcolor(xnew.0 0.interpolate module.20 0.y.pyplot as plt >>> >>> >>> >>> # setup data x = np.5 0. 1-d Example This example compares the usage of the Rbf and UnivariateSpline classes from the scipy.-1:1:70j] >>> tck = interpolate.SciPy Reference Guide.linspace(0.0 0.5 1.
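A plot-free sketch of the same workflow (new code; the evaluation points are arbitrary) shows that with s=0 the surface spline reproduces the gridded data, and that bisplev returns a 2-D grid of values for 1-D coordinate inputs:

```python
import numpy as np
from scipy import interpolate

x, y = np.mgrid[-1:1:20j, -1:1:20j]
z = (x + y) * np.exp(-6.0 * (x * x + y * y))

tck = interpolate.bisplrep(x, y, z, s=0)

# evaluating at one of the original grid nodes recovers the data value
val = float(np.asarray(interpolate.bisplev(x[5, 0], y[0, 7], tck)))
print(val, z[5, 7])

# 1-D coordinate arrays give a 2-D array of spline values
znew = np.asarray(interpolate.bisplev(np.linspace(-1, 1, 5),
                                      np.linspace(-1, 1, 5), tck))
print(znew.shape)
```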

’bo’) plt. ’g’) plt.linspace(-2.5 0.6.0 0. ’r’) plt. 2. np.setup scattered data x = np. ’g’) plt. 1) plt.plot(xi.0 z = x*np.0 y = np.plot(xi. Interpolation (scipy.5 1.exp(-x**2-y**2) ti = np.plot(xi. 1. ti) 1. np.0 1.5 0.interpolate) 39 . yi.0-2. >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> import numpy as np from scipy.rand(100)*4.0 0 Interpolation using univariate spline Interpolation4 using RBF .pyplot as plt from matplotlib import cm # 2-d tests .rand(100)*4.10.title(’Interpolation using RBF . y) >>> yi = ius(xi) >>> >>> >>> >>> >>> plt.sin(xi).plot(xi. y) >>> fi = rbf(xi) >>> >>> >>> >>> >>> >>> plt.0rc1 >>> # use fitpack2 method >>> ius = InterpolatedUnivariateSpline(x.0.sin(xi). 1.plot(x.plot(x. YI = np.0. y.meshgrid(ti.5 1. 2) plt.subplot(2. ’bo’) plt. y.show() 1.subplot(2.6 multiquadrics 2 8 10 2 4 6 8 10 2-d Example This example shows how to interpolate scattered 2d data.title(’Interpolation using univariate spline’) >>> # use RBF method >>> rbf = Rbf(x.0 0. ’r’) plt.0-2. fi.multiquadrics’) plt. 100) XI.0 0. Release 0.random.SciPy Reference Guide.0 0 0.interpolate import Rbf import matplotlib.random.

2 0.3 0.5 0.0 0.5 1.3 0.0 1.0 0. 1) plt.0 1. ZI. 2. y.title(’RBF interpolation .pcolor(XI. SciPy Tutorial . 2) plt. Release 0. z.0 0.1 0.5 1.fftpack) Warning: This is currently a stub page Contents • Fourier Transforms (scipy.jet) plt.) plt.colorbar() 2.subplot(1.5 1.0 1. 2) plt. 1.1 0.ylim(-2.multiquadrics 2. YI) >>> >>> >>> >>> >>> >>> >>> >>> >>> # plot the result n = plt.SciPy Reference Guide. cmap=cm. y.0 RBF interpolation .5 2.0 0.5 0. 100.0rc1 >>> # use RBF >>> rbf = Rbf(x.7 Fourier Transforms (scipy.4 1.0 1.normalize(-2.10.5 2.xlim(-2. z.0 0.5 1.4 0.fftpack) – Fast Fourier transforms – One dimensional discrete Fourier transforms – Two and n dimensional discrete Fourier transforms – Discrete Cosine Transforms * type I * type II * type III * References – FFT convolution 40 Chapter 1.multiquadrics’) plt.. cmap=cm. epsilon=2) >>> ZI = rbf(XI. YI.2 0.0 0.jet) plt.scatter(x.
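An Rbf interpolant passes through the scattered data it was built from, which gives a simple plot-free check (new code; the data are generated for illustration only):

```python
import numpy as np
from scipy.interpolate import Rbf

rng = np.random.RandomState(2)
x = rng.rand(50) * 4.0 - 2.0
y = rng.rand(50) * 4.0 - 2.0
z = x * np.exp(-x**2 - y**2)

rbf = Rbf(x, y, z)                  # multiquadric basis by default
max_err = np.abs(rbf(x, y) - z).max()
print(max_err)                      # ~0: exact at the data points
```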

1.fftpack) 41 . we use the following (for norm=None): N −1 yk = 2 n=0 xn cos π(2n + 1)k 2N 0 ≤ k < N.7. Only None is supported as normalization mode for DCT-I. otherwise Which makes the corresponding matrix of coefﬁcients orthonormal (OO’ = Id). Release 0. it is called the discrete Fourier transform (DFT). we use the following (for norm=None): N −2 yk = x0 + (−1)k xN −1 + 2 n=1 xn cos πnk N −1 . Fourier Transforms (scipy. which was known to Gauss (1805) and was brought to light in its current form by Cooley and Tukey [CT]. norm=’ortho’) is equal to MATLAB dct(x). When both the function and its Fourier transform are replaced with discretized counterparts.2 One dimensional discrete Fourier transforms fft.SciPy Reference Guide.0rc1 Fourier analysis is fundamentally a method for expressing a function as a sum of periodic components.3 Two and n dimensional discrete Fourier transforms fft in more than one dimension 1.10. and ‘the’ Inverse DCT generally refers to DCT type 3. 1. For a single dimension array x. called the Fast Fourier Transform (FFT). only the ﬁrst 3 types are implemented in scipy. 0 ≤ k < N. and for recovering the signal from those components. Note also that the DCT-I is only supported for input size > 1 type II There are several deﬁnitions of the DCT-II. [NR] provide an accessible introduction to Fourier analysis and its applications. The DFT has become a mainstay of numerical computing in part because of a very fast algorithm for computing it.7. ‘The’ DCT generally refers to DCT type 2. yk is multiplied by a scaling factor f : f= 1/(4N ). ifft. dct(x. There are theoretically 8 types of the DCT [WP].4 Discrete Cosine Transforms Return the Discrete Cosine Transform [Mak] of arbitrary type sequence x. if k = 0 1/(2N ).7. If norm=’ortho’.7.7.1 Fast Fourier transforms 1. rfft. irfft 1. type I There are several deﬁnitions of the DCT-I. Press et al.

a limited set of ﬁlter design tools.sampling. the second-derivative of a spline is 1 x cj β o −j .fftpack. cj β o ∆x j In two dimensions with knot-spacing ∆x and ∆y .5 FFT convolution scipy. If the knot. and a few B-spline interpolation algorithms for one.8.and two-dimensional data. SciPy Tutorial .10.0rc1 type III There are several deﬁnitions.signal) The signal processing toolbox currently contains some ﬁltering functions. we use the following (for norm=None): N −1 yk = x0 + 2 n=1 xn cos πn(2k + 1) 2N 0 ≤ k < N. Unlike the general spline interpolation algorithms.convolve performs a convolution of two one-dimensional arrays in frequency domain. The orthonormalized DCT-III is exactly the inverse of the orthonormalized DCT-II. y) ≈ j k cjk β o x − j βo ∆x y −k .SciPy Reference Guide. x y (x) ≈ −j . allows the development of fast (inverse-ﬁltering) algorithms for determining the coefﬁcients. yn . References 1.1 B-splines A B-spline is an approximation of a continuous function over a ﬁnite. for norm=’ortho’: x0 1 yk = √ + √ N N N −1 xn cos n=1 πn(2k + 1) 2N 0 ≤ k < N. 1. ∆y In these expressions. up to a factor 2N. from sample-values.) which assume that the data samples are drawn from an underlying continuous function can be computed with relative ease from the spline coefﬁcients. then the B-spline approximation to a 1-dimensional function is the ﬁnite-basis expansion. the function representation is z (x. these algorithms can quickly ﬁnd the spline coefﬁcients for large images. Release 0. 1. While the B-spline algorithms could technically be placed under the interpolation category. β o (·) is the space-limited B-spline basis function of order.7. cj . re. or.points are equally spaced with spacing ∆x .domain in terms of B-spline coefﬁcients and knot points. they are included here because they only work with equally-spaced data and make heavy use of ﬁlter-theory and transfer-function formalism to provide a fast B-spline transform. y (x) = ∆x2 j ∆x 42 Chapter 1. 
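The orthonormal variants can be verified numerically: with norm='ortho' the DCT-II preserves energy, and the orthonormal DCT-III inverts it. A small new sketch:

```python
import numpy as np
from scipy.fftpack import dct

x = np.array([1.0, 2.0, 1.0, -1.0, 1.5])

X = dct(x, type=2, norm='ortho')          # orthonormal DCT-II
x_back = dct(X, type=3, norm='ortho')     # orthonormal DCT-III inverts it

print(np.abs(x - x_back).max())           # round trip recovers x
print(np.sum(x**2) - np.sum(X**2))        # energy preserved (orthonormality)
```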
For example. The advantage of representing a set of samples via B-spline basis functions is that continuous-domain operators (derivatives. The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II. integral. o . To understand this section you will need to understand that a signal in SciPy is an array of real or complex numbers. etc.8 Signal Processing (scipy. The requirement of equallyspaced knot-points and equally-spaced data points.

Using the property of B-splines that

    d²β^o(w)/dw² = β^{o−2}(w+1) − 2β^{o−2}(w) + β^{o−2}(w−1),

it can be seen that

    y''(x) = (1/Δx²) Σ_j c_j [ β^{o−2}(x/Δx − j + 1) − 2β^{o−2}(x/Δx − j) + β^{o−2}(x/Δx − j − 1) ].

If o = 3, then at the sample points,

    Δx² y''(x)|_{x=nΔx} = Σ_j c_j δ_{n−j+1} − 2 c_j δ_{n−j} + c_j δ_{n−j−1}
                        = c_{n+1} − 2 c_n + c_{n−1}.

Thus, the second-derivative signal can be easily calculated from the spline fit. If desired, smoothing splines can be found to make the second derivative less sensitive to random errors.

The savvy reader will have already noticed that the data samples are related to the knot coefficients via a convolution operator, so that simple convolution with the sampled B-spline function recovers the original data from the spline coefficients. The output of convolutions can change depending on how boundaries are handled (this becomes increasingly more important as the number of dimensions in the data set increases). The algorithms relating to B-splines in the signal-processing sub package assume mirror-symmetric boundary conditions. Thus, spline coefficients are computed based on that assumption, and data samples can be recovered exactly from the spline coefficients by assuming them to be mirror-symmetric also.

Currently the package provides functions for determining second- and third-order cubic spline coefficients from equally spaced samples in one and two dimensions (signal.qspline1d, signal.cspline1d, signal.qspline2d, signal.cspline2d). The package also supplies a function (signal.bspline) for evaluating the B-spline basis function, β^o(x), for arbitrary order and x. For large o, the B-spline basis function can be approximated well by a zero-mean Gaussian function with standard deviation σ_o = sqrt((o+1)/12):

    β^o(x) ≈ (1/sqrt(2π σ_o²)) exp(−x²/(2σ_o²)).

A function to compute this Gaussian for arbitrary x and o is also available (signal.gauss_spline). The following code and figure use spline-filtering to compute an edge-image (the second derivative of a smoothed spline) of Lena's face, which is an array returned by the command misc.lena. The command signal.sepfir2d was used to apply a separable two-dimensional FIR filter with mirror-symmetric boundary conditions to the spline coefficients. This function is ideally suited for reconstructing samples from spline coefficients and is faster than signal.convolve2d, which convolves arbitrary two-dimensional filters and allows for choosing mirror-symmetric boundary conditions.

>>> from numpy import *
>>> from scipy import signal, misc
>>> import matplotlib.pyplot as plt

>>> image = misc.lena().astype(float32)
>>> derfilt = array([1.0, -2, 1.0], float32)
>>> ck = signal.cspline2d(image, 8.0)
>>> deriv = signal.sepfir2d(ck, derfilt, [1]) + \
>>>         signal.sepfir2d(ck, [1], derfilt)

Alternatively we could have done:

laplacian = array([[0,1,0], [1,-4,1], [0,1,0]], float32)
deriv2 = signal.convolve2d(ck, laplacian, mode='same', boundary='symm')
smoothing splines can be found to make the second-derivative less sensitive to random-errors.gauss_spline ).lena().qspline1d.float32) deriv2 = signal.cspline2d). For large o .sepfir2d(ck.0].1. the second-derivative signal can be easily calculated from the spline ﬁt. ∆x2 y (x)|x=n∆x = j cj δn−j+1 − 2cj δn−j + cj δn−j−1 . spline coefﬁcients are computed based on that assumption. If o = 3 . signal.symmetric boundary conditions to the spline coefﬁcients. then at the sample points.1.10.laplacian. The package also supplies a function ( signal. Release 0. [1]) + \ signal.8.bspline ) for evaluating the bspline basis function. if desired. so that simple convolution with the sampled B-spline function recovers the original data from the spline coefﬁcients. Currently the package provides functions for determining second. A function to compute this Gaussian for arbitrary x and o is also available ( signal. cn+1 − 2cn + cn−1 .0) deriv = signal. = Thus. signal.float32) ck = signal.
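The coefficient/sample relationship described above can be checked directly in one dimension: cspline1d computes the spline coefficients under the mirror-symmetric assumption, and cspline1d_eval (also in scipy.signal) reconstructs the samples from them. A new illustrative sketch:

```python
import numpy as np
from scipy import signal

x = np.sin(np.linspace(0, 2 * np.pi, 32))

c = signal.cspline1d(x)                             # cubic-spline coefficients
x_back = signal.cspline1d_eval(c, np.arange(32.0))  # evaluate spline at samples

print(np.abs(x - x_back).max())                     # samples are recovered
```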

8.title(’Output of spline edge filter’) plt.figure() plt.gray() plt. SciPy Tutorial .gray() plt. There are two broad kinds 44 Chapter 1.figure() plt.2 Filtering Filtering is a generic name for any system that modiﬁes an input signal in some way.show() 0 100 200 300 400 500 0 Original image 100 200 300 400 500 >>> >>> >>> >>> >>> plt.imshow(image) plt. Release 0.show() 0 100 200 300 400 500 0 Output of spline edge filter 100 200 300 400 500 1.title(’Original image’) plt.SciPy Reference Guide. In SciPy a signal can be thought of as a Numpy array.10.0rc1 >>> >>> >>> >>> >>> plt. There are different kinds of ﬁlters for different kinds of operations.imshow(deriv) plt.

. .10. In most applications most of the elements of this matrix are zero and a different method for computing the output of the ﬁlter is employed.convolve . . let K + 1 be that value for which y [n] = 0 for all n > K + 1 and M + 1 be that value for which x [n] = 0 for all n > M + 1 . In this case. y [M + 1] = . 476. If the ﬂag has a value of ‘valid’ then only the middle K − M + 1 = (K + 1) − (M + 1) + 1 output values are returned where z depends on all of the values of the smallest input from h [0] to h [M ] . . . . . Full convolution of two one-dimensional signals can be expressed as ∞ y [n] = k=−∞ x [k] h [n − k] . x [0] h [M ] + x [1] h [M − 1] + · · · + x [M ] h [0] x [1] h [M ] + x [2] h [M − 1] + · · · + x [M + 1] h [0] . In other words only the values y [M ] to y [K] inclusive are returned. Of course. then the discrete convolution expression is min(n. . x [K] h [M ] . . . . y [M ] = x [0] h [0] = x [0] h [1] + x [1] h [0] = . 736 elements. Then. .signal) 45 .0) x [k] h [n − k] . Let x [n] deﬁne a one-dimensional signal indexed by the integer n. Signal Processing (scipy. The default value of ‘full’ returns the entire signal. Linear ﬁlters can always be reduced to multiplication of the ﬂattened Numpy array by an appropriate matrix resulting in another ﬂattened Numpy array. y [K] = y [K + 1] = . 719. . = x [0] h [2] + x [1] h [1] + x [2] h [0] . . x [K − M ] h [M ] + · · · + x [K] h [0] x [K + 1 − M ] h [M ] + · · · + x [K] h [1] . the full discrete convolution of two ﬁnite sequences of lengths K + 1 and M + 1 respectively results in a ﬁnite sequence of length K + M + 1 = (K + 1) + (M + 1) − 1. . If the ﬂag has a value of −1 ‘same’ then only the middle K values are returned starting at y M2 so that the output has the same length as the largest input. This equation can only be implemented directly if we limit the sequences to ﬁnite support sequences that can be stored in a computer. 
For example, filtering a 512 × 512 image with this method would require multiplication of a 512² × 512² matrix with a 512² vector. Just trying to store the 512² × 512² matrix using a standard Numpy array would require 68,719,476,736 elements. At 4 bytes per element this would require 256GB of memory. In most applications most of the elements of this matrix are zero and a different method for computing the output of the filter is employed.

Convolution/Correlation

Many linear filters also have the property of shift-invariance. This means that the filtering operation is the same at different locations in the signal, and it implies that the filtering matrix can be constructed from knowledge of one row (or column) of the matrix alone. In this case, the matrix multiplication can be accomplished using Fourier transforms.

Let x[n] define a one-dimensional signal indexed by the integer n. Full convolution of two one-dimensional signals can be expressed as

    y[n] = Σ_{k=−∞}^{∞} x[k] h[n−k].

This equation can only be implemented directly if we limit the sequences to finite-support sequences that can be stored in a computer: choose n = 0 to be the starting point of both sequences, let K + 1 be that value for which x[n] = 0 for all n > K + 1, and M + 1 be that value for which h[n] = 0 for all n > M + 1. Then the discrete convolution expression is

    y[n] = Σ_{k=max(n−M,0)}^{min(n,K)} x[k] h[n−k].

For convenience assume K ≥ M. Then, more explicitly, the output of this operation is

    y[0]     = x[0] h[0]
    y[1]     = x[0] h[1] + x[1] h[0]
    y[2]     = x[0] h[2] + x[1] h[1] + x[2] h[0]
    ...
    y[M]     = x[0] h[M] + x[1] h[M−1] + ... + x[M] h[0]
    y[M+1]   = x[1] h[M] + x[2] h[M−1] + ... + x[M+1] h[0]
    ...
    y[K]     = x[K−M] h[M] + ... + x[K] h[0]
    y[K+1]   = x[K+1−M] h[M] + ... + x[K] h[1]
    ...
    y[K+M−1] = x[K−1] h[M] + x[K] h[M−1]
    y[K+M]   = x[K] h[M].

Thus, the full discrete convolution of two finite sequences of lengths K + 1 and M + 1 respectively results in a finite sequence of length K + M + 1 = (K + 1) + (M + 1) − 1.

One-dimensional convolution is implemented in SciPy with the function signal.convolve. This function takes as inputs the signals x, h, and an optional flag, and returns the signal y. The optional flag allows for specification of which part of the output signal to return. The default value of 'full' returns the entire signal. If the flag has a value of 'same' then only the middle K values are returned, starting at y[⌊(M−1)/2⌋], so that the output has the same length as the largest input. If the flag has a value of 'valid' then only the middle K − M + 1 = (K + 1) − (M + 1) + 1 output values are returned: those that depend on all of the values of the smallest input, from h[0] to h[M]. In other words, only the values y[M] to y[K] inclusive are returned.

This same function signal.convolve can actually take N-dimensional arrays as inputs and will return the N-dimensional convolution of the two arrays. The same input flags are available for that case as well.

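The three flag values differ only in which slice of the full result is returned; the lengths follow the formulas above. A short new check:

```python
import numpy as np
from scipy import signal

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # length K + 1 = 5
h = np.array([1.0, 1.0, 1.0])             # length M + 1 = 3

full = signal.convolve(x, h, mode='full')    # length K + M + 1 = 7
same = signal.convolve(x, h, mode='same')    # length of the larger input: 5
valid = signal.convolve(x, h, mode='valid')  # length K - M + 1 = 3

print(len(full), len(same), len(valid))
print(valid)    # each value uses all of h: sliding sums of three samples
```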
. . Correlation is very similar to convolution except for the minus sign becomes a plus sign. Equivalent ﬂags are available for this operation to return the full K +M +1 length sequence (‘full’) or a sequence with the same size as the largest sequence starting at −1 w −K + M2 (‘same’) or a sequence where the values depend on all the values of the smallest sequence (‘valid’). = y [0] x [M − 1] + y [1] x [M ] The SciPy function signal. . Difference-equation ﬁltering A general class of linear one-dimensional ﬁlters (that includes convolution ﬁlters) are ﬁlters described by the difference equation N M M ).−n) y [k] x [n + k] . . . and edge-detection for an image. Release 0. signal. w [−1] w [0] w [1] w [2] .correlate can also take arbitrary N -dimensional arrays as input and return the N dimensional convolution of the two arrays on output. .M −n) w [n] = k=max(0. y [0] x [M ] . K] and x [n] = 0 outside of the range [0. enhancing. . .10. Convolution is mainly used for ﬁltering when one of the signals is much smaller than the other ( K linear ﬁltering is more easily accomplished in the frequency domain (see Fourier Transforms). the summation can simplify to min(K. SciPy Tutorial . .convolve can be used to construct arbitrary image ﬁlters to perform actions such as blurring. This ﬁnal option returns the K − M + 1 values w [M − K] to w [0] inclusive. The same input ﬂags are available for that case as well. . w [M − K] = w [M − K + 1] = . .convolve can actually take N -dimensional arrays as inputs and will return the N -dimensional convolution of the two arrays. The function signal. .correlate implements this operation. Assuming again that K ≥ M this is w [−K] = y [K] x [0] y [K − 1] x [0] + y [K] x [1] . . . . . y [K − M ] x [0] + y [K − M + 1] x [1] + · · · + y [K] x [M ] y [K − M − 1] x [0] + · · · + y [K − 1] x [M ] . w [−K + 1] = . . When N = 2.0rc1 This same function signal. Thus ∞ w [n] = k=−∞ y [k] x [n + k] is the (cross) correlation of the signals y and x. 
w [M − 1] w [M ] = y [1] x [0] + y [2] x [1] + · · · + y [M + 1] x [M ] = y [0] x [0] + y [1] x [1] + · · · + y [M ] x [M ] = y [0] x [1] + y [1] x [2] + · · · + y [M − 1] x [M ] = . . . = y [0] x [2] + y [1] x [3] + · · · + y [M − 2] x [M ] . M ] . For ﬁnite-length signals with y [n] = 0 outside of the range [0. otherwise ak y [n − k] = k=0 k=0 bk x [n − k] 46 Chapter 1.SciPy Reference Guide. .correlate and/or signal.
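The recursion above is exactly what signal.lfilter computes. For a first-order example (new illustrative code), y[n] = x[n] + 0.5 y[n−1] corresponds to b = [1] and a = [1, −0.5], and its impulse response is geometric:

```python
import numpy as np
from scipy import signal

b = [1.0]
a = [1.0, -0.5]              # a0*y[n] = b0*x[n] - a1*y[n-1]

x = np.zeros(8)
x[0] = 1.0                   # unit impulse
y = signal.lfilter(b, a, x)

print(y)                     # 0.5**n, the geometric impulse response
```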

. Sometimes it is more convenient to express the initial conditions in terms of the signals x [n] and y [n] .10. If there are an even number of elements in the neighborhood. If initial conditions are provided.medfilt2d . then the ﬁnal conditions on the intermediate variables are also returned. a signal x and returns the vector y (the same length as x ) computed using the equation given above. 1. In this way. . It is not difﬁcult to show that for 0 ≤ m < K. The difference equation ﬁlter can be thought of as ﬁnding y [n] recursively in terms of it’s previous values a0 y [n] = −a1 y [n − 1] − · · · − aN y [n − N ] + · · · + b0 x [n] + · · · + bM x [n − M ] . Signal Processing (scipy. bK−1 x [n] + zK−1 [n − 1] − aK−1 y [n] bK x [n] − aK y [n] . . z1 [n] = . the convolution ﬁlter sequence h [n] could be inﬁnite if ak = 0 for k ≥ 1. The command signal.SciPy Reference Guide. The difference-equation ﬁlter is called using the command signal. However. . Often a0 = 1 is chosen for normalization. These could be used. desired. The median ﬁlter works by sorting all of the array pixel values in a rectangular region surrounding the point of interest. K−m−1 zm [n] = p=0 (bm+p+1 x [n − p] − am+p+1 y [n − p]) .8. y [n] z0 [n] = = b0 x [n] + z0 [n − 1] b1 x [n] + z1 [n − 1] − a1 y [n] b2 x [n] + z2 [n − 1] − a2 y [n] . Release 0. The sample median is the middle array value in a sorted list of neighborhood values.lfilter in SciPy. to restart the calculation in the same state. A specialized version that works only for two-dimensional arrays is available as signal. zK−2 [n] zK−1 [n] = = where K = max (N.medfilt .0rc1 where x [n] is the input sequence and y [n] is the output sequence. then the ﬁlter is computed along the axis provided. This can always be calculated as long as the K values z0 [n − 1] . . then the average of the middle two values is used as the median. 
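A one-line check of the edge-preserving behavior (new code): a median filter removes an isolated spike entirely, where a moving average would smear it into its neighbors:

```python
import numpy as np
from scipy import signal

x = np.array([1.0, 1.0, 1.0, 10.0, 1.0, 1.0, 1.0])   # isolated spike
y = signal.medfilt(x, kernel_size=3)

print(y)   # the spike is replaced by the local median
```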
this general class of linear ﬁlter allows initial conditions to be placed on y [n] for n < 0 resulting in a ﬁlter that cannot be expressed using convolution. The actual implementation equations are (assuming a0 = 1 ). . In other words. A general purpose median ﬁlter that works on N-dimensional arrays is signal. zK−1 [n − 1] are computed and stored at each time step.lfiltic performs this function. a. In addition. perhaps you have the values of x [−M ] to x [−1] and the values of y [−N ] to y [−1] and would like to determine what values of zm [−1] should be delivered as initial conditions to the difference-equation ﬁlter. the output at time n depends only on the input at time n and the value of z0 at the previous time. initial conditions providing the values of z0 [−1] to zK−1 [−1] can be provided or else it will be assumed that they are all zero.signal) 47 . The implementation in SciPy of this general difference equation ﬁlter is a little more complicated then would be implied by the previous equation. If we assume initial rest so that y [n] = 0 for n < 0 . . Using this formula we can ﬁnd the intial condition vector z0 [−1] to zK−1 [−1] given initial conditions on y (and x ). for example. Other ﬁlters The signal processing package provides many more ﬁlters as well. If x is N -dimensional. . the vector. If. then this kind of ﬁlter can be implemented using convolution. It is implemented so that only one signal needs to be delayed. . Median Filter A median ﬁlter is commonly applied when noise is markedly non-Gaussian or when it is desired to preserve edges. This command takes as inputs the vector b. The sample median of this list of neighborhood pixel values is used as the value for the output array. M ) . Note that bK = 0 if K > M and aK = 0 if K > N.
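A minimal sketch exercising the filtering commands discussed above; the filter coefficients and array contents here are illustrative, not taken from the text:

```python
import numpy as np
from scipy import signal

# First-order IIR low-pass: y[n] = 0.4*x[n] + 0.6*y[n-1]
b = [0.4]        # numerator coefficients b_k
a = [1.0, -0.6]  # denominator coefficients a_k, with a_0 = 1

x = np.ones(20)              # unit-step input
y = signal.lfilter(b, a, x)  # zero initial conditions assumed
# y approaches the DC gain 0.4 / (1 - 0.6) = 1

# Express initial conditions in terms of past outputs via lfiltic,
# then filter with that state (lfilter returns final conditions too)
zi = signal.lfiltic(b, a, y=[1.0])      # pretend y[-1] = 1 (steady state)
y2, zf = signal.lfilter(b, a, x, zi=zi)

# A 3x3 median filter on a 2-D array removes an isolated "hot" pixel
img = np.zeros((5, 5))
img[2, 2] = 1.0
cleaned = signal.medfilt(img, kernel_size=3)
```

Passing `zi` makes `lfilter` return the pair `(y, zf)`, so the final state `zf` can seed a later call to continue filtering where this one left off.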

Order Filter

A median filter is a specific example of a more general class of filters called order filters. To compute the output at a particular pixel, all order filters use the array values in a region surrounding that pixel. These array values are sorted and then one of them is selected as the output value. For the median filter, the sample median of the list of array values is used as the output. A general order filter allows the user to select which of the sorted values will be used as the output. So, for example, one could choose to pick the maximum in the list or the minimum. The order filter takes an additional argument besides the input array and the region mask that specifies which of the elements in the sorted list of neighborhood array values should be used as the output. The command to perform an order filter is signal.order_filter.

Wiener filter

The Wiener filter is a simple deblurring filter for denoising images. This is not the Wiener filter commonly described in image reconstruction problems but instead it is a simple, local-mean filter. Let x be the input signal; then the output is

y = (σ²/σx²) m_x + (1 − σ²/σx²) x    if σx² ≥ σ²,
y = m_x                              if σx² < σ²,

where m_x is the local estimate of the mean and σx² is the local estimate of the variance. The window for these estimates is an optional input parameter (default is 3 × 3). The parameter σ² is a threshold noise parameter. If σ is not given then it is estimated as the average of the local variances.

Hilbert filter

The Hilbert transform constructs the complex-valued analytic signal from a real signal. For example, if x = cos ωn then y = hilbert(x) would return (except near the edges) y = exp(jωn). In the frequency domain, the Hilbert transform performs Y = X · H where H is 2 for positive frequencies, 0 for negative frequencies, and 1 for zero frequencies.
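A short sketch exercising these filters on synthetic data (the array sizes and noise level are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.RandomState(0)

# Wiener (local-mean) denoising; the default window is 3x3
img = np.ones((32, 32)) + 0.1 * rng.randn(32, 32)
denoised = signal.wiener(img)

# Order filter: rank 8 of a 9-element (3x3) neighborhood is a maximum filter
domain = np.ones((3, 3))
local_max = signal.order_filter(img, domain, 8)

# Analytic signal via the Hilbert transform: cos(wn) -> exp(jwn),
# so the magnitude of y is ~1 (exact here because w lies on an FFT bin)
n = np.arange(256)
x = np.cos(0.25 * np.pi * n)
y = signal.hilbert(x)
```

Because the maximum-rank window always contains the pixel itself, `local_max` is everywhere at least as large as the input.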
1.8.3 Least-Squares Spectral Analysis (spectral)

Least-squares spectral analysis (LSSA) is a method of estimating a frequency spectrum, based on a least-squares fit of sinusoids to data samples, similar to Fourier analysis. Fourier analysis, the most used spectral method in science, generally boosts long-periodic noise in long gapped records; LSSA mitigates such problems.

Lomb-Scargle Periodograms (spectral.lombscargle)

The Lomb-Scargle method performs spectral analysis on unevenly sampled data and is known to be a powerful way to find, and test the significance of, weak periodic signals. For a time series comprising N_t measurements X_j ≡ X(t_j), sampled at times t_j where j = 1, ..., N_t, assumed to have been scaled and shifted such that its mean is zero and its variance is unity, the normalized Lomb-Scargle periodogram at frequency f is

P_n(f) = (1/2) { [Σ_j X_j cos ω(t_j − τ)]² / Σ_j cos² ω(t_j − τ) + [Σ_j X_j sin ω(t_j − τ)]² / Σ_j sin² ω(t_j − τ) }.

Here, ω ≡ 2πf is the angular frequency. The frequency-dependent time offset τ is given by

tan 2ωτ = Σ_j sin 2ωt_j / Σ_j cos 2ωt_j.
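The periodogram above can be computed directly with scipy.signal.lombscargle; a brief sketch on an unevenly sampled cosine (the sampling pattern and frequency grid are illustrative):

```python
import numpy as np
from scipy import signal

rng = np.random.RandomState(0)

# Unevenly sampled cosine at angular frequency 1.3 rad/s
t = np.sort(10 * rng.rand(200))
x = np.cos(1.3 * t)
x = (x - x.mean()) / x.std()        # zero mean, unit variance, as assumed above

freqs = np.linspace(0.5, 3.0, 251)  # angular frequencies to evaluate
pgram = signal.lombscargle(t, x, freqs)

peak = freqs[np.argmax(pgram)]      # should sit near 1.3 rad/s
```

Note that `lombscargle` works in angular frequency (ω), matching the equations above, not in ordinary frequency f.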

The lombscargle function calculates the periodogram using a slightly modified algorithm due to Townsend [1] which allows the periodogram to be calculated using only a single pass through the input arrays for each frequency. The equation is refactored as

P_n(f) = (1/2) [ (c_τ XC + s_τ XS)² / (c_τ² CC + 2 c_τ s_τ CS + s_τ² SS) + (c_τ XS − s_τ XC)² / (c_τ² SS − 2 c_τ s_τ CS + s_τ² CC) ]

and

tan 2ωτ = 2 CS / (CC − SS).

Here,

c_τ = cos ωτ,  s_τ = sin ωτ,

while the sums are

XC = Σ_j X_j cos ωt_j
XS = Σ_j X_j sin ωt_j
CC = Σ_j cos² ωt_j
SS = Σ_j sin² ωt_j
CS = Σ_j cos ωt_j sin ωt_j.

This requires N_f (2N_t + 3) trigonometric function evaluations, giving a factor of ∼2 speed increase over the straightforward implementation.

References

Some further reading and related software:

1. R.H.D. Townsend, "Fast calculation of the Lomb-Scargle periodogram using graphics processing units", The Astrophysical Journal Supplement Series, vol. 191, pp. 247-253, 2010.

1.9 Linear Algebra (scipy.linalg)

When SciPy is built using the optimized ATLAS LAPACK and BLAS libraries, it has very fast linear algebra capabilities. If you dig deep enough, all of the raw LAPACK and BLAS libraries are available for your use for even more speed. In this section, some easier-to-use interfaces to these routines are described.

All of these linear algebra routines expect an object that can be converted into a two-dimensional array. The output of these routines is also a two-dimensional array. There is a matrix class defined in Numpy, which you can initialize with an appropriate Numpy array in order to get objects for which multiplication is matrix-multiplication instead of the default, element-by-element multiplication.
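The distinction between element-by-element and matrix multiplication can be sketched as follows (array values are illustrative):

```python
import numpy as np

a = np.array([[1.0, 2.0],
              [3.0, 4.0]])
m = np.matrix(a)              # matrix view of the same data

elementwise = a * a           # ndarray: element-by-element product
matmul = m * m                # matrix class: true matrix multiplication
also_matmul = np.dot(a, a)    # same result using plain ndarrays
```

With plain ndarrays, `np.dot` performs the matrix product; the matrix class simply makes `*` mean that product.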

1.9.1 Matrix Class

The matrix class is initialized with the SciPy command mat, which is just convenient short-hand for matrix. If you are going to be doing a lot of matrix math, it is convenient to convert arrays into matrices using this command. One advantage of using the mat command is that you can enter two-dimensional matrices using MATLAB-like syntax with commas or spaces separating columns and semicolons separating rows, as long as the matrix is placed in a string passed to mat.

1.9.2 Basic routines

Finding Inverse

The inverse of a matrix A is the matrix B such that AB = I, where I is the identity matrix consisting of ones down the main diagonal. Usually B is denoted B = A⁻¹. In SciPy, the matrix inverse of the Numpy array, A, is obtained using linalg.inv(A), or using A.I if A is a Matrix. For example, let

A = [1 3 5; 2 5 1; 2 3 8]

then

A⁻¹ = (1/25) [−37 9 22; 14 2 −9; 4 −3 −1] = [−1.48 0.36 0.88; 0.56 0.08 −0.36; 0.16 −0.12 −0.04].

The following example demonstrates this computation in SciPy

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> A
matrix([[1, 3, 5],
        [2, 5, 1],
        [2, 3, 8]])
>>> A.I
matrix([[-1.48,  0.36,  0.88],
        [ 0.56,  0.08, -0.36],
        [ 0.16, -0.12, -0.04]])
>>> from scipy import linalg
>>> linalg.inv(A)
array([[-1.48,  0.36,  0.88],
       [ 0.56,  0.08, -0.36],
       [ 0.16, -0.12, -0.04]])

Solving linear system

Solving linear systems of equations is straightforward using the scipy command linalg.solve. This command expects an input matrix and a right-hand-side vector. The solution vector is then computed. An option for entering a symmetric matrix is offered, which can speed up the processing when applicable. As an example, suppose it is desired to solve the following simultaneous equations:

x + 3y + 5z = 10
2x + 5y + z = 8
2x + 3y + 8z = 3

We could find the solution vector using a matrix inverse:

[x; y; z] = [1 3 5; 2 5 1; 2 3 8]⁻¹ [10; 8; 3] = (1/25) [−232; 129; 19] = [−9.28; 5.16; 0.76].

However, it is better to use the linalg.solve command, which can be faster and more numerically stable. In this case it however gives the same answer as shown in the following example:

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> b = mat('[10;8;3]')
>>> A.I*b
matrix([[-9.28],
        [ 5.16],
        [ 0.76]])
>>> linalg.solve(A,b)
array([[-9.28],
       [ 5.16],
       [ 0.76]])

Finding Determinant

The determinant of a square matrix A is often denoted |A| and is a quantity often used in linear algebra. Suppose a_ij are the elements of the matrix A and let M_ij = |A_ij| be the determinant of the matrix left by removing the ith row and jth column from A. Then for any row i,

|A| = Σ_j (−1)^{i+j} a_ij M_ij.

This is a recursive way to define the determinant, where the base case is defined by accepting that the determinant of a 1 × 1 matrix is the only matrix element. In SciPy the determinant can be calculated with linalg.det. For example, the determinant of

A = [1 3 5; 2 5 1; 2 3 8]

is

|A| = 1 (5·8 − 3·1) − 3 (2·8 − 2·1) + 5 (2·3 − 2·5) = −25.

In SciPy this is computed as shown in this example:

>>> A = mat('[1 3 5; 2 5 1; 2 3 8]')
>>> linalg.det(A)
-25.000000000000004

Computing norms

Matrix and vector norms can also be computed with SciPy. A wide range of norm definitions are available using different parameters to the order argument of linalg.norm. This function takes a rank-1 (vectors) or a rank-2 (matrices) array and an optional order argument (default is 2). Based on these inputs a vector or matrix norm of the requested order is computed.
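A quick sketch of linalg.norm with a few of the order parameters described next (the arrays are illustrative):

```python
import numpy as np
from scipy import linalg

x = np.array([1.0, -2.0, 3.0])
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

n2 = linalg.norm(x)            # vector 2-norm: sqrt(1 + 4 + 9)
ninf = linalg.norm(x, np.inf)  # max |x_i| = 3
n1 = linalg.norm(A, 1)         # maximum column sum = 2 + 4 = 6
nfro = linalg.norm(A, 'fro')   # Frobenius norm: sqrt(1 + 4 + 9 + 16)
```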

For vector x, the order parameter can be any real number including inf or -inf. The computed norm is

||x|| = max |x_i|                  ord = inf
      = min |x_i|                  ord = -inf
      = ( Σ_i |x_i|^ord )^(1/ord)  |ord| < ∞.

For matrix A the only valid values for the order parameter are ±2, ±1, ±inf, and 'fro' (or 'f'). Thus,

||A|| = max_i Σ_j |a_ij|     ord = inf
      = min_i Σ_j |a_ij|     ord = -inf
      = max_j Σ_i |a_ij|     ord = 1
      = min_j Σ_i |a_ij|     ord = -1
      = max σ_i              ord = 2
      = min σ_i              ord = -2
      = sqrt(trace(A^H A))   ord = 'fro'

where σ_i are the singular values of A.

Solving linear least-squares problems and pseudo-inverses

Linear least-squares problems occur in many branches of applied mathematics. In this problem a set of linear scaling coefficients is sought that allow a model to fit data. In particular it is assumed that data y_i is related to data x_i through a set of coefficients c_j and model functions f_j(x_i) via the model

y_i = Σ_j c_j f_j(x_i) + ε_i

where ε_i represents uncertainty in the data. The strategy of least squares is to pick the coefficients c_j to minimize

J(c) = Σ_i | y_i − Σ_j c_j f_j(x_i) |².

Theoretically, a global minimum will occur when

∂J/∂c_n* = 0 = Σ_i ( y_i − Σ_j c_j f_j(x_i) ) (−f_n*(x_i))

or

Σ_j c_j Σ_i f_j(x_i) f_n*(x_i) = Σ_i y_i f_n*(x_i)
A^H A c = A^H y

where {A}_ij = f_j(x_i). When A^H A is invertible, then

c = (A^H A)⁻¹ A^H y = A† y

where A† is called the pseudo-inverse of A. Notice that using this definition of A the model can be written

y = A c + ε.

The command linalg.lstsq will solve the linear least-squares problem for c given A and y. In addition, linalg.pinv or linalg.pinv2 (which uses a different method based on singular value decomposition) will find A† given A.
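The equivalence of the three routes to the coefficients c — lstsq, the pseudo-inverse, and the normal equations derived above — can be sketched on random data (the shapes are illustrative):

```python
import numpy as np
from scipy import linalg

rng = np.random.RandomState(0)
A = rng.rand(10, 3)     # 10 observations of 3 model functions, {A}_ij = f_j(x_i)
y = rng.rand(10)

c_lstsq = linalg.lstsq(A, y)[0]                    # least-squares solution
c_pinv = linalg.pinv(A).dot(y)                     # c = pinv(A) y
c_normal = linalg.solve(A.T.dot(A), A.T.dot(y))    # (A^H A) c = A^H y

# All three give the same coefficients when A has full column rank
```

In practice `lstsq` (or `pinv`) is preferred over forming the normal equations, which squares the condition number of A.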

The following example and figure demonstrate the use of linalg.lstsq and linalg.pinv for solving a data-fitting problem. The data shown below were generated using the model

y_i = c_1 e^{−x_i} + c_2 x_i

where x_i = 0.1 i for i = 1, ..., 10, c_1 = 5, and c_2 = 4. Noise is added to y_i and the coefficients c_1 and c_2 are estimated using linear least squares.

>>> from numpy import *
>>> from scipy import linalg
>>> import matplotlib.pyplot as plt

>>> c1, c2 = 5.0, 4.0
>>> i = r_[1:11]
>>> xi = 0.1*i
>>> yi = c1*exp(-xi) + c2*xi
>>> zi = yi + 0.05*max(yi)*random.randn(len(yi))

>>> A = c_[exp(-xi)[:, newaxis], xi[:, newaxis]]
>>> c, resid, rank, sigma = linalg.lstsq(A, zi)

>>> xi2 = r_[0.1:1.0:100j]
>>> yi2 = c[0]*exp(-xi2) + c[1]*xi2

>>> plt.plot(xi, zi, 'x', xi2, yi2)
>>> plt.axis([0, 1.1, 3.0, 5.5])
>>> plt.xlabel('$x_i$')
>>> plt.title('Data fitting with linalg.lstsq')
>>> plt.show()

[Figure: Data fitting with linalg.lstsq]

Generalized inverse

The generalized inverse is calculated using the command linalg.pinv or linalg.pinv2. The first uses the linalg.lstsq algorithm, while the second uses singular value decomposition. These two commands differ in how they compute the generalized inverse. Let A be an M × N matrix; then if M > N the generalized inverse is

A† = (A^H A)⁻¹ A^H

while if M < N the generalized inverse is

A# = A^H (A A^H)⁻¹.

In both cases, for M = N, A† = A# = A⁻¹ as long as A is invertible.

1.9.3 Decompositions

In many applications it is useful to decompose a matrix using other representations. There are several decompositions supported by SciPy.

Eigenvalues and eigenvectors

The eigenvalue-eigenvector problem is one of the most commonly employed linear algebra operations. In one popular form, the eigenvalue-eigenvector problem is to find for some square matrix A scalars λ and corresponding vectors v such that

A v = λ v.

For an N × N matrix, there are N (not necessarily distinct) eigenvalues — roots of the (characteristic) polynomial

|A − λI| = 0.

The eigenvectors, v, are also sometimes called right eigenvectors to distinguish them from another set of left eigenvectors that satisfy

v_L^H A = λ v_L^H    or    A^H v_L = λ* v_L.

With its default optional arguments, the command linalg.eig returns λ and v. However, it can also return v_L and just λ by itself (linalg.eigvals returns just λ as well). In addition, linalg.eig can also solve the more general eigenvalue problem

A v = λ B v
A^H v_L = λ* B^H v_L

for square matrices A and B. The standard eigenvalue problem is an example of the general eigenvalue problem for B = I. When a generalized eigenvalue problem can be solved, it provides a decomposition of A as

A = B V Λ V⁻¹

where V is the collection of eigenvectors into columns and Λ is a diagonal matrix of eigenvalues.

By definition, eigenvectors are only defined up to a constant scale factor. In SciPy, the scaling factor for the eigenvectors is chosen so that ||v||² = Σ_i v_i² = 1.

As an example, consider finding the eigenvalues and eigenvectors of the matrix

A = [1 5 2; 2 4 1; 3 6 2].

The characteristic polynomial is

|A − λI| = (1 − λ)[(4 − λ)(2 − λ) − 6] − 5[2(2 − λ) − 3] + 2[12 − 3(4 − λ)]
         = −λ³ + 7λ² + 8λ − 3.

The roots of this polynomial are the eigenvalues of A:

λ_1 = 7.9579, λ_2 = −1.2577, λ_3 = 0.2997.

The eigenvectors corresponding to each eigenvalue can be found using the original equation.

>>> from scipy import linalg
>>> A = mat('[1 5 2; 2 4 1; 3 6 2]')
>>> la, v = linalg.eig(A)
>>> l1, l2, l3 = la
>>> print l1, l2, l3
(7.95791620491+0j) (-1.25766470568+0j) (0.299748500767+0j)
>>> print v[:,0]
[-0.5297175  -0.44941741 -0.71932146]
>>> print v[:,1]
[-0.90730751  0.28662547  0.30763439]
>>> print v[:,2]
[ 0.28380519 -0.39012063  0.87593408]
>>> print sum(abs(v**2), axis=0)
[ 1.  1.  1.]
>>> v1 = mat(v[:,0]).T
>>> print max(ravel(abs(A*v1 - l1*v1)))
8.881784197e-16

Singular value decomposition

Singular Value Decomposition (SVD) can be thought of as an extension of the eigenvalue problem to matrices that are not square. Let A be an M × N matrix with M and N arbitrary. The matrices A^H A and A A^H are square hermitian matrices of size N × N and M × M respectively (a hermitian matrix D satisfies D^H = D). It is known that the eigenvalues of square hermitian matrices are real and non-negative. In addition, there are at most min(M, N) identical non-zero eigenvalues of A^H A and A A^H. Define these positive eigenvalues as σ_i². The square-roots of these are called singular values of A. The eigenvectors of A^H A are collected by columns into an N × N unitary matrix V, while the eigenvectors of A A^H are collected by columns in the unitary matrix U (a unitary matrix D satisfies D^H D = I = D D^H so that D⁻¹ = D^H). The singular values are collected in an M × N zero matrix Σ with main diagonal entries set to the singular values. Then

A = U Σ V^H

is the singular-value decomposition of A. Every matrix has a singular value decomposition. Sometimes, the singular values are called the spectrum of A. The command linalg.svd will return U, V^H, and σ_i as an array of the singular values. To obtain the matrix Σ use linalg.diagsvd. The following example illustrates the use of linalg.svd.

svd(A) >>> Sig = mat(linalg. N ) ) with unit-diagonal. the equation can be solved for Uxi and ﬁnally xi very rapidly using forward. Because L is lower-triangular.SciPy Reference Guide. suppose we are going to solve Axi = bi for many different bi .lu .18652536e-16 -7. If the intent for performing LU decomposition is for solving linear systems then the command linalg. 0. The SciPy command for this decomposition is linalg. 2. ] [ 0. Such a decomposition is often useful for solving many simultaneous equations where the left-hand-side does not change but the right hand side does.70710678 0.]] LU decomposition The LU decompostion ﬁnds a representation for the M × N matrix A as A = PLU where P is an M × M permutation matrix (a permutation of the rows of the identity matrix). For example. 1. SciPy Tutorial . then decompositions of A can be found so that A = A = UH U LLH 56 Chapter 1.70710678] [-0.and back-substitution.10.70710678]] >>> print Sig [[ 5. mat(Vh) >>> print U [[-0. An initial time spent factoring A allows for very rapid solution of similar systems of equations in the future. Vh = mat(U).92450090e-01 1.62250449e-01 1.N = A.92450090e-01]] >>> print A [[1 3 2] [1 2 3]] >>> print U*Sig*Vh [[ 1. L is in M × K lower triangular or trapezoidal matrix ( K = min (M.diagsvd(s.M.72165527e-01 -6. 3.07106781e-01 7.shape >>> U. Cholesky decomposition Cholesky decomposition is a special case of LU decomposition applicable to Hermitian positive deﬁnite matrices. 3.lu_solve to solve the system for each new right-hand-side. 2.70710678 -0. 0.80413817e-01 -6.80413817e-01] [ -6.N)) >>> U.s.0rc1 >>> A = mat(’[1 3 2. and U is an upper triangular or trapezoidal matrix.Vh = linalg. The LU decomposition allows this to be written as PLUxi = bi . Release 0.19615242 0.] [ 1.07106781e-01] [ -9.lu_factor should be used followed by repeated applications of the command linalg. When A = AH and xH Ax ≥ 0 for all x . 1 2 3]’) >>> M. ]] >>> print Vh [[ -2.

cho_solve routines that work similarly to their LU decomposition counterparts.57756680e-16]] >>> print abs(Z1-Z2) # different [[ 0.51260928 0.55463542e+00j -0.Z1 = linalg.37016050e-17j] [ 0.00296604e-16 1. 0.06833781 1.schur(A) >>> T1.78947961 -0.57754789] [ 0.qr . A . Release 0.schur ﬁnds the Schur decomposition while the command linalg.00000000e+00 4.24357637e-14 2. Schur decomposition For a square N × N matrix.rsf2csf converts T and Z from a real Schur form to a complex Schur form. The command linalg.rsf2csf(T. The Schur form is especially useful in calculating functions of matrices. however. Note.99258408e-01j 1. The following example illustrates the schur decomposition: >>> from scipy import linalg >>> A = mat(’[1 3 2.’complex’) >>> T2.00000000 +0.00000000e+00j 0.cho_factor and linalg. The command linagl.54993766 -1. 1 4 5.Z) >>> print T [[ 9.linalg) 57 .00000000e+00 0.Z = linalg.69027615e-01j] [ 0.00000000e+00 4.65498528] [ 0.88619748 +5. 2 3 6]’) >>> T.valued eigenvalues.00000000 +0. For using cholesky factorization to solve systems of equations there are also linalg.99258408e-01j]] >>> print abs(T1-T2) # different [[ 1.06493862 +1. For a real schur form both T and Z are real-valued when A is real-valued. The command for QR decomposition is linalg.54993766]] >>> print T2 [[ 9.Z2 = linalg. 0. Linear Algebra (scipy.10. that in SciPy independent algorithms are used to ﬁnd QR and SVD decompositions.0rc1 where L is lower-triangular and U is upper triangular.90012467 +0.00000000e+00j 0.32436598 +1.09205364e+00 6.00000000 +0.cholesky computes the cholesky factorization.SciPy Reference Guide.23662249] 1. Notice that L = UH .9.83223097e+00] [ 0.00000000e+00j 0.schur(A.54993766 -8. QR decomposition The QR decomposition (sometimes called a polar decomposition) works for any M × N array and ﬁnds an M × M unitary matrix Q and an M × N upper-trapezoidal matrix R such that A = QR. 
Schur decomposition

For a square N × N matrix, A, the Schur decomposition finds (not-necessarily unique) matrices T and Z such that

A = Z T Z^H

where Z is a unitary matrix and T is either upper-triangular or quasi-upper-triangular, depending on whether or not a real Schur form or complex Schur form is requested. For a real Schur form both T and Z are real-valued when A is real-valued. When A is a real-valued matrix, the real Schur form is only quasi-upper-triangular because 2 × 2 blocks extrude from the main diagonal corresponding to any complex-valued eigenvalues. The command linalg.schur finds the Schur decomposition, while the command linalg.rsf2csf converts T and Z from a real Schur form to a complex Schur form. The Schur form is especially useful in calculating functions of matrices. The following example illustrates the Schur decomposition:

>>> from scipy import linalg
>>> A = mat('[1 3 2; 1 4 5; 2 3 6]')
>>> T, Z = linalg.schur(A)
>>> T1, Z1 = linalg.schur(A, 'complex')
>>> T2, Z2 = linalg.rsf2csf(T, Z)
>>> print T
[[ 9.90012467  1.78947961 -0.65498528]
 [ 0.          0.54993766 -1.57754789]
 [ 0.          0.51260928  0.54993766]]
>>> print T2
[[ 9.90012467+0.00000000e+00j -0.32436598+1.55463542e+00j -0.88619748+5.69027615e-01j]
 [ 0.00000000+0.00000000e+00j  0.54993766+8.99258408e-01j  1.06493862+1.37016050e-17j]
 [ 0.00000000+0.00000000e+00j  0.00000000+0.00000000e+00j  0.54993766-8.99258408e-01j]]

>>> print abs(T1-T2)   # different
>>> print abs(Z1-Z2)   # different
>>> T, Z, T1, Z1, T2, Z2 = map(mat, (T, Z, T1, Z1, T2, Z2))
>>> print abs(A-Z*T*Z.H)      # same
>>> print abs(A-Z1*T1*Z1.H)   # same
>>> print abs(A-Z2*T2*Z2.H)   # same

The "different" comparisons print matrices with entries of order one (the real and complex Schur forms are genuinely different), while the "same" comparisons print matrices whose entries are all at round-off level (about 1e-15), confirming that each form reconstructs A.

1.9.4 Matrix Functions

Consider the function f(x) with Taylor series expansion

f(x) = Σ_{k=0}^∞ f^(k)(0)/k! x^k.

A matrix function can be defined using this Taylor series for the square matrix A as

f(A) = Σ_{k=0}^∞ f^(k)(0)/k! A^k.

While this serves as a useful representation of a matrix function, it is rarely the best way to calculate a matrix function.

Exponential and logarithm functions

The matrix exponential is one of the more common matrix functions. It can be defined for square matrices as

e^A = Σ_{k=0}^∞ A^k / k!.

The command linalg.expm3 uses this Taylor series definition to compute the matrix exponential. Due to poor convergence properties it is not often used.

Another method to compute the matrix exponential is to find an eigenvalue decomposition of A,

A = V Λ V⁻¹,

and note that e^A = V e^Λ V⁻¹, where the matrix exponential of the diagonal matrix Λ is just the exponential of its elements. This method is implemented in linalg.expm2.

The preferred method for implementing the matrix exponential is to use scaling and a Padé approximation for e^x. This algorithm is implemented as linalg.expm.

The inverse of the matrix exponential is the matrix logarithm, defined as the inverse of the matrix exponential: A ≡ exp(log(A)). The matrix logarithm can be obtained with linalg.logm.

Trigonometric functions

The trigonometric functions sin, cos, and tan are implemented for matrices in linalg.sinm, linalg.cosm, and linalg.tanm respectively. The matrix sine and cosine can be defined using Euler's identity as

sin(A) = (e^{jA} − e^{−jA}) / (2j)
cos(A) = (e^{jA} + e^{−jA}) / 2.

The tangent is

tan(x) = sin(x)/cos(x) = [cos(x)]⁻¹ sin(x)

and so the matrix tangent is defined as

tan(A) = [cos(A)]⁻¹ sin(A).

Hyperbolic trigonometric functions

The hyperbolic trigonometric functions sinh, cosh, and tanh can also be defined for matrices using the familiar definitions:

sinh(A) = (e^A − e^{−A}) / 2
cosh(A) = (e^A + e^{−A}) / 2
tanh(A) = [cosh(A)]⁻¹ sinh(A).

These matrix functions can be found using linalg.sinhm, linalg.coshm, and linalg.tanhm.

Arbitrary function

Finally, any arbitrary function that takes one complex number and returns a complex number can be called as a matrix function using the command linalg.funm. This command takes the matrix and an arbitrary Python function.
cos .18296399+0.27448286+0.9.x)) >>> print A [[ 0. and tanh can also be deﬁned for matrices using the familiar deﬁnitions: sinh (A) cosh (A) tanh (A) = = = eA − e−A 2 eA + e−A 2 −1 [cosh (A)] sinh (A) .SciPy Reference Guide.541453 ] [ 0.j] 1.77255139 -0.cosm.72599893 -0.sinm.3) >>> B = linalg.j] >>> print special.funm. 2 [cos (A)] Hyperbolic trigonometric functions sin (A) .tanm respectively. linalg.7556849 ]] >>> print linalg.4837898 ]] >>> print B [[ 0.21846476+0. The matrix sin and cosine can be deﬁned using Euler’s identity as sin (A) cos (A) The tangent is tan (x) = and so the matrix tangent is deﬁned as sin (x) −1 = [cos (x)] sin (x) cos (x) −1 = = ejA − e−jA 2j jA e + e−jA . The hyperbolic trigonemetric functions sinh .eigvals(B) [ 0.sinhm.10.78397086 0.34105276 0.j 0.73855618 0. random.79570345] [ 0. and linalg. This command takes the matrix and an arbitrary Python function. and tan are implemented for matrices in linalg.27612103 -0.jv(0.j 0. Arbitrary function Finally.jv(0. and linalg. Note that the function needs to accept complex numbers as input in order to work with this algorithm.tanhm.98810383+0. any arbitrary function that takes one complex number and returns a complex number can be called as a matrix function using the command linalg. cosh . linalg >>> A = random. For example the following code computes the zeroth-order Bessel function applied to a matrix.21754832 0.lambda x: special.20545711 -0.99164854+0.99164854+0.23422637] [-0. Release 0.91262611+0.68043507 0.98810383+0. Linear Algebra (scipy. These matrix functions can be found using linalg.22721101] [-0. >>> from scipy import special.j 0. linalg.coshm . linalg.27448286+0.eigvals(A) [ 1.j 0.rand(3.linalg) 59 .j] >>> print linalg.eigvals(A)) [ 0.65767207 0.j 0.27426769 0.72578091 0.0rc1 Trigonometric functions The trigonometric functions sin .j -0.funm(A. 
It then implements an algorithm from Golub and Van Loan's book "Matrix Computations" to compute the function applied to the matrix using a Schur decomposition. Note that the function needs to accept complex numbers as input in order to work with this algorithm. For example, the following code computes the zeroth-order Bessel function applied to a matrix.

>>> from scipy import special, random, linalg
>>> A = random.rand(3,3)
>>> B = linalg.funm(A, lambda x: special.jv(0,x))
>>> print A
[[ 0.72578091  0.34105276  0.79570345]
 [ 0.65767207  0.73855618  0.541453  ]
 [ 0.78397086  0.68043507  0.4837898 ]]
>>> print B
[[ 0.72599893 -0.27612103 -0.22721101]
 [-0.27426769  0.77255139 -0.23422637]
 [-0.20545711 -0.21754832  0.7556849 ]]
>>> print linalg.eigvals(A)
[ 1.91262611+0.j  0.21846476+0.j -0.18296399+0.j]
>>> print special.jv(0, linalg.eigvals(A))
[ 0.27448286+0.j  0.98810383+0.j  0.99164854+0.j]
>>> print linalg.eigvals(B)
[ 0.27448286+0.j  0.98810383+0.j  0.99164854+0.j]
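A brief self-contained sketch of the matrix-function commands above, using identities that must hold for any well-behaved input (the matrices are illustrative):

```python
import numpy as np
from scipy import linalg

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])      # nilpotent: A*A = 0, so the series gives exp(A) = I + A

E = linalg.expm(A)              # equals [[1, 1], [0, 1]]

B = np.array([[2.0, 0.1],
              [0.1, 3.0]])      # symmetric positive definite
roundtrip = linalg.expm(linalg.logm(B))   # expm is the inverse of logm

S = linalg.sinm(B)
C = linalg.cosm(B)              # sinm(B)^2 + cosm(B)^2 = I, as for scalars
```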

Note how, by virtue of how matrix analytic functions are defined, the Bessel function has acted on the matrix eigenvalues.

1.9.5 Special matrices

SciPy and NumPy provide several functions for creating special matrices that are frequently used in engineering and science.

Type            | Function                 | Description
block diagonal  | scipy.linalg.block_diag  | Create a block diagonal matrix from the provided arrays.
circulant       | scipy.linalg.circulant   | Construct a circulant matrix.
companion       | scipy.linalg.companion   | Create a companion matrix.
Hadamard        | scipy.linalg.hadamard    | Construct a Hadamard matrix.
Hankel          | scipy.linalg.hankel      | Construct a Hankel matrix.
Hilbert         | scipy.linalg.hilbert     | Construct a Hilbert matrix.
Inverse Hilbert | scipy.linalg.invhilbert  | Construct the inverse of a Hilbert matrix.
Leslie          | scipy.linalg.leslie      | Create a Leslie matrix.
Toeplitz        | scipy.linalg.toeplitz    | Construct a Toeplitz matrix.
Van der Monde   | numpy.vander             | Generate a Van der Monde matrix.

For examples of the use of these functions, see their respective docstrings.

1.10 Sparse Eigenvalue Problems with ARPACK

1.10.1 Introduction

ARPACK is a Fortran package which provides routines for quickly finding a few eigenvalues/eigenvectors of large sparse matrices. In order to find these solutions, it requires only left-multiplication by the matrix in question. This operation is performed through a reverse-communication interface. The result of this structure is that ARPACK is able to find eigenvalues and eigenvectors of any linear function mapping a vector to a vector.

All of the functionality provided in ARPACK is contained within the two high-level interfaces scipy.sparse.linalg.eigs and scipy.sparse.linalg.eigsh. eigs provides interfaces to find the eigenvalues/vectors of real or complex nonsymmetric square matrices, while eigsh provides interfaces for real-symmetric or complex-hermitian matrices.

1.10.2 Basic Functionality

ARPACK can solve either standard eigenvalue problems of the form

A x = λ x

or general eigenvalue problems of the form

A x = λ M x.

The power of ARPACK is that it can compute only a specified subset of eigenvalue/eigenvector pairs. This is accomplished through the keyword which. The following values of which are available:

• which = 'LM' : Eigenvalues with largest magnitude (eigs, eigsh)
• which = 'SM' : Eigenvalues with smallest magnitude (eigs, eigsh)
• which = 'LR' : Eigenvalues with largest real part (eigs)
• which = 'SR' : Eigenvalues with smallest real part (eigs)
• which = 'LI' : Eigenvalues with largest imaginary part (eigs)
• which = 'SI' : Eigenvalues with smallest imaginary part (eigs)
• which = 'LA' : Eigenvalues with largest amplitude (eigsh)
• which = 'SA' : Eigenvalues with smallest amplitude (eigsh)
• which = 'BE' : Eigenvalues from both ends of the spectrum (eigsh)

Note that ARPACK is generally better at finding extremal eigenvalues: that is, eigenvalues with large magnitudes. In particular, using which = 'SM' may lead to slow execution time and/or anomalous results. A better approach is to use shift-invert mode.

1.10.3 Shift-Invert Mode

Shift-invert mode relies on the following observation. For the generalized eigenvalue problem

A x = λ M x

it can be shown that

(A − σM)⁻¹ M x = ν x

where

ν = 1 / (λ − σ).

1.10.4 Examples

Imagine you'd like to find the smallest and largest eigenvalues and the corresponding eigenvectors for a large matrix. ARPACK can handle many forms of input: dense matrices such as numpy.ndarray instances, sparse matrices such as scipy.sparse.csr_matrix, or a general linear operator derived from scipy.sparse.linalg.LinearOperator. For this example, for simplicity, we'll construct a symmetric, positive-definite matrix.

>>> import numpy as np
>>> from scipy.linalg import eigh
>>> from scipy.sparse.linalg import eigsh
>>> np.set_printoptions(suppress=True)
>>> np.random.seed(0)
>>> X = np.random.random((100,100)) - 0.5
>>> X = np.dot(X, X.T)   # create a symmetric matrix

We now have a symmetric matrix X with which to test the routines. First compute a standard eigenvalue decomposition using eigh:
Release 0. >>> >>> >>> >>> >>> >>> >>> >>> import numpy as np from scipy. eigenvalues with large magnitudes. For the generalized eigenvalue problem Ax = λM x it can be shown that (A − σM )−1 M x = νx where ν= 1 λ−σ 1.SciPy Reference Guide. For this example.linalg.100)) . we’ll construct a symmetric. or a general linear operator derived from scipy.dot(X. for simplicity.10.sparse.csr_matrix.LinearOperator.0.0rc1 • which = ’LR’ : Eigenvectors with largest real part (eigs) • which = ’SR’ : Eigenvectors with smallest real part (eigs) • which = ’LI’ : Eigenvectors with largest imaginary part (eigs) • which = ’SI’ : Eigenvectors with smallest imaginary part (eigs) • which = ’LA’ : Eigenvectors with largest amplitude (eigsh) • which = ’SA’ : Eigenvectors with smallest amplitude (eigsh) • which = ’BE’ : Eigenvectors from both ends of the spectrum (eigsh) Note that ARPACK is generally better at ﬁnding extremal eigenvalues: that is.T) #create a symmetric matrix We now have a symmetric matrix X with which to test the routines.ndarray instances. sparse matrices such as scipy. positive-deﬁnite matrix.4 Examples Imagine you’d like to ﬁnd the smallest and largest eigenvalues and the corresponding eigenvectors for a large matrix. 1.random.random((100. Sparse Eigenvalue Problems with ARPACK 61 .10.sparse. A better approach is to use shift-invert mode.sparse.random. using which = ’SM’ may lead to slow execution time and/or anomalous results.set_printoptions(suppress=True) np.10. First compute a standard eigenvalue decomposition using eigh: 1. In particular.seed(0) X = np. ARPACK can handle many forms of input: dense matrices such as numpy.3 Shift-Invert Mode Shift invert mode relies on the following observation.linalg import eigsh np.10.linalg import eigh from scipy. X.5 X = np.

tol=1E-2) >>> print evals_all[:3] [ 0.0003783 0. 1. -1. evecs_all[:. Release 0. ARPACK contains a mode that allows quick determination of non-external eigenvalues: shift-invert mode. evecs_all = eigh(X) As the dimension of X grows.00000023 0. 1.00000037 0. evecs_small = eigsh(X. ARPACK recovers the desired eigenvalues.00122714 0. evecs_large = eigsh(X.dot(evecs_large.T. which=’SM’) scipy. In this case.00715878] >>> print evals_small [ 0.ArpackNoConvergence: ARPACK error -1: No convergence (1001 iterations.00000024 -0. but the computation time is much longer. evecs_all[:.0003783 0. the eigenvectors are orthogonal. maxiter=5000) >>> print evals_all[:3] [ 0.00122714 0. 0.]] The results are as expected.T. 0. this mode involves transforming the eigenvalue problem to an equivalent problem with different eigenvalues. As mentioned above. 0. 3.] [ 0. ARPACK is not quite as adept at ﬁnding small eigenvalues.linalg.00000031 -0. We could increase the tolerance (tol) to lead to faster convergence: >>> evals_small. 3.10.T.]] We get the results we’d hoped for. SciPy Tutorial .00000056] [ 0. 0. 0. evecs_all[:. so our small eigenvalues λ become large eigenvalues ν.00715881] >>> print np.dot(evecs_small. evecs_small = eigsh(X.19467646] >>> print np. which=’LM’) >>> print evals_all[-3:] [ 29.] [ 0.00122714 0.] [-0.00715878] >>> print np.99999999 0. 3. 0. We see that as mentioned above. ARPACK can be a better option. as we’d expect. we hope to ﬁnd eigenvalues near zero.sparse. evecs_small = eigsh(X.-3:]) [[-1. which=’LM’) >>> print evals_all[:3] [ 0.00037831 0.00122714 0.99999999 0. 0. There are a few ways this problem can be addressed. 3. evecs_small = eigsh(X.00715878] 62 Chapter 1. which=’SM’.99999852]] This works.0rc1 >>> evals_all. -1.SciPy Reference Guide. Another option is to increase the maximum number of iterations (maxiter) from 1000 to 5000: >>> evals_small.0003783 0.05821805 31. The transformed eigenvalues will then satisfy ν = 1/(σ − λ) = 1/λ.arpack. which=’SM’. 
Fortunately, ARPACK contains a mode that allows quick determination of non-external eigenvalues: shift-invert mode. As mentioned above, this mode involves transforming the eigenvalue problem to an equivalent problem with different eigenvalues. In this case, we hope to find eigenvalues near zero, so we'll choose sigma = 0. The transformed eigenvalues will then satisfy ν = 1/(λ − σ) = 1/λ, so our small eigenvalues λ become large eigenvalues ν.

>>> evals_small, evecs_small = eigsh(X, 3, sigma=0, which='LM')
>>> print evals_all[:3]
[ 0.0003783   0.00122714  0.00715878]
>>> print evals_small
[ 0.0003783   0.00122714  0.00715878]
>>> print np.dot(evecs_small.T, evecs_all[:,:3])
[[ 1.  0.  0.]
 [ 0.  1.  0.]
 [ 0.  0.  1.]]

We get the results we were hoping for, with much less computational time.
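The same shift-invert call works directly on sparse input. A sketch using a sparse discrete 1-D Laplacian, whose eigenvalues are known analytically (the matrix and size are illustrative):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import eigsh

# Sparse symmetric tridiagonal matrix: the 1-D Laplacian tridiag(-1, 2, -1)
n = 100
dense = (np.diag(2.0 * np.ones(n))
         + np.diag(-1.0 * np.ones(n - 1), 1)
         + np.diag(-1.0 * np.ones(n - 1), -1))
A = csr_matrix(dense)

# Three eigenvalues nearest zero via shift-invert
vals, vecs = eigsh(A, 3, sigma=0, which='LM')

# Analytic eigenvalues are 2 - 2*cos(k*pi/(n+1)), k = 1, 2, 3
k = np.arange(1, 4)
expected = 2 - 2 * np.cos(k * np.pi / (n + 1))
```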

The shift-invert mode provides more than just a fast way to obtain a few small eigenvalues. Say you desire to find internal eigenvalues and eigenvectors, e.g. those nearest to λ = 1. Simply set sigma = 1 and ARPACK takes care of the rest:

>>> evals_mid, evecs_mid = eigsh(X, 3, sigma=1, which='LM')
>>> i_sort = np.argsort(abs(1. / (1 - evals_all)))[-3:]
>>> print evals_all[i_sort]
[ 1.16577199  0.85081388  1.06642272]
>>> print evals_mid
[ 0.85081388  1.06642272  1.16577199]
>>> print np.dot(evecs_mid.T, evecs_all[:,i_sort])
[[-0.  1.  0.]
 [-0. -0.  1.]
 [ 1.  0.  0.]]

The eigenvalues come out in a different order, but they're all there. Note that the shift-invert mode requires the internal solution of a matrix inverse. This is taken care of automatically by eigsh and eigs, but the operation can also be specified by the user. See the docstrings of scipy.sparse.linalg.eigsh and scipy.sparse.linalg.eigs for details.

1.10.5 References

1.11 Statistics (scipy.stats)

1.11.1 Introduction

SciPy has a tremendous number of basic statistics routines, with more easily added by the end user (if you create one, please contribute it). All of the statistics functions are located in the sub-package scipy.stats, and a fairly complete listing of these functions can be obtained using info(stats).

Random Variables

There are two general distribution classes that have been implemented for encapsulating continuous random variables and discrete random variables. Over 80 continuous random variables and 10 discrete random variables have been implemented using these classes. The list of the random variables available is in the docstring for the stats subpackage.

Note: The following is work in progress

1.11.2 Distributions

First some imports

>>> import numpy as np
>>> from scipy import stats
>>> import warnings
>>> warnings.simplefilter('ignore', DeprecationWarning)

We can obtain the list of available distributions through introspection:

>>> dist_continu = [d for d in dir(stats) if
...                 isinstance(getattr(stats, d), stats.rv_continuous)]
>>> dist_discrete = [d for d in dir(stats) if
...                 isinstance(getattr(stats, d), stats.rv_discrete)]
>>> print 'number of continuous distributions:', len(dist_continu)
number of continuous distributions: 84
>>> print 'number of discrete distributions:  ', len(dist_discrete)
number of discrete distributions:   12

Distributions can be used in one of two ways, either by passing all distribution parameters to each method call or by freezing the parameters for the instance of the distribution. As an example, we can get the median of the distribution by using the percent point function, ppf, which is the inverse of the cdf:

>>> print stats.nct.ppf(0.5, 10, 2.5)
2.56880722561
>>> my_nct = stats.nct(10, 2.5)
>>> print my_nct.ppf(0.5)
2.56880722561

help(stats.nct) prints the complete docstring of the distribution. Instead we can print just some basic information:

>>> print stats.nct.extradoc  # contains the distribution specific docs
Non-central Student T distribution

                                 df**(df/2) * gamma(df+1)
nct.pdf(x,df,nc) = --------------------------------------------------
                   2**df*exp(nc**2/2)*(df+x**2)**(df/2) * gamma(df/2)
for df > 0, nc > 0.

>>> print 'number of arguments: %d, shape parameters: %s' % (stats.nct.numargs,
...                                                          stats.nct.shapes)
number of arguments: 2, shape parameters: df,nc
>>> print 'bounds of distribution lower: %s, upper: %s' % (stats.nct.a,
...                                                        stats.nct.b)
bounds of distribution lower: -1.#INF, upper: 1.#INF

We can list all methods and properties of the distribution with dir(stats.nct). Some of the methods are private methods that are not named as such, i.e. with no leading underscore; for example, veccdf, or xa and xb, are for internal calculation. The main methods we can see when we list the methods of the frozen distribution:

>>> print dir(my_nct)  # reformatted
['__class__', '__delattr__', '__dict__', '__doc__', '__getattribute__',
 '__hash__', '__init__', '__module__', '__new__', '__reduce__',
 '__reduce_ex__', '__repr__', '__setattr__', '__str__', '__weakref__',
 'args', 'cdf', 'dist', 'entropy', 'isf', 'kwds', 'moment', 'pdf', 'pmf',
 'ppf', 'rvs', 'sf', 'stats']

The main public methods are:
• rvs: Random Variates
• pdf: Probability Density Function

• cdf: Cumulative Distribution Function
• sf: Survival Function (1-CDF)
• ppf: Percent Point Function (Inverse of CDF)
• isf: Inverse Survival Function (Inverse of SF)
• stats: Return mean, variance, (Fisher's) skew, or (Fisher's) kurtosis
• moment: non-central moments of the distribution

The main additional methods of the not frozen distribution are related to the estimation of distribution parameters:
• fit: maximum likelihood estimation of distribution parameters, including location and scale
• fit_loc_scale: estimation of location and scale when shape parameters are given
• nnlf: negative log likelihood function
• expect: Calculate the expectation of a function against the pdf or pmf

All continuous distributions take loc and scale as keyword parameters to adjust the location and scale of the distribution, e.g. for the standard normal distribution the location is the mean and the scale is the standard deviation. The standardized distribution for a random variable x is obtained through (x - loc) / scale.

Discrete distributions have most of the same basic methods; however, pdf is replaced by the probability mass function pmf, no estimation methods, such as fit, are available, and scale is not a valid keyword parameter. The location parameter, keyword loc, can be used to shift the distribution.

The basic methods, pdf, cdf, sf, ppf, and isf, are vectorized with np.vectorize, and the usual numpy broadcasting is applied. For example, we can calculate the critical values for the upper tail of the t distribution for different probabilities and degrees of freedom:

>>> stats.t.isf([0.1, 0.05, 0.01], [[10], [11]])
array([[ 1.37218364,  1.81246112,  2.76376946],
       [ 1.36343032,  1.79588482,  2.71807918]])

Here, the first row contains the critical values for 10 degrees of freedom and the second row those for 11 degrees of freedom (d.o.f.), i.e. this is the same as

>>> stats.t.isf([0.1, 0.05, 0.01], 10)
array([ 1.37218364,  1.81246112,  2.76376946])
>>> stats.t.isf([0.1, 0.05, 0.01], 11)
array([ 1.36343032,  1.79588482,  2.71807918])

If both probabilities and degrees of freedom have the same array shape, then element-wise matching is used. As an example, we can obtain the 10% tail for 10 d.o.f., the 5% tail for 11 d.o.f. and the 1% tail for 12 d.o.f. by

>>> stats.t.isf([0.1, 0.05, 0.01], [10, 11, 12])
array([ 1.37218364,  1.79588482,  2.68099799])

Performance and Remaining Issues

The performance of the individual methods, in terms of speed, varies widely by distribution and method. The results of a method are obtained in one of two ways, either by explicit calculation or by a generic algorithm that is independent of the specific distribution. Explicit calculation requires that the method is directly specified for the given distribution, either through analytic formulas or through special functions in scipy.special, or numpy.random for rvs. These are usually relatively fast calculations. The generic methods are used if the distribution does not specify any explicit calculation. To define a distribution, only one of pdf or cdf is necessary; all other methods can be derived using numeric integration and root finding. These indirect methods can be very slow.
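The claim that pdf (or cdf) alone suffices can be illustrated with a toy subclass of rv_continuous; the class name and distribution below are hypothetical, and the generic cdf and ppf are derived by numeric integration and root finding exactly as described:

```python
import numpy as np
from scipy import stats

class ExpDist(stats.rv_continuous):
    """Toy standard exponential distribution, defined only via its pdf."""
    def _pdf(self, x):
        return np.exp(-x)

expdist = ExpDist(a=0, name='expdist')  # support is [0, inf)

# Both calls go through the slow generic machinery:
cdf1 = expdist.cdf(1.0)   # numeric integration of the pdf; ~ 1 - exp(-1)
med = expdist.ppf(0.5)    # root finding on the cdf; ~ log(2)
```

Because every call integrates the pdf from scratch, such generically derived methods are exactly the "very slow" indirect path described above.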

As an example,

>>> rgh = stats.gausshyper.rvs(0.5, 2, 2, 2, size=100)

creates random variables in a very indirect way and takes about 19 seconds for 100 random variables on my computer, while one million random variables from the standard normal or from the t distribution take just above one second.

The distributions in scipy.stats have recently been corrected and improved and gained a considerable test suite; however, a few issues remain:
• skew and kurtosis, 3rd and 4th moments and entropy are not thoroughly tested, and some coarse testing indicates that there are still some incorrect results left.
• the distributions have been tested over some range of parameters; however, in some corner ranges, a few incorrect results may remain.
• the maximum likelihood estimation in fit does not work with default starting parameters for all distributions, and the user needs to supply good starting parameters. Also, for some distributions using a maximum likelihood estimator might inherently not be the best choice.

The next example shows how to build our own discrete distribution, and more examples for the usage of the distributions are shown below together with the statistical tests.

Example: discrete distribution rv_discrete

In the following we use stats.rv_discrete to generate a discrete distribution that has the probabilities of the truncated normal for the intervals centered around the integers.

>>> npoints = 20   # number of integer support points of the distribution minus 1
>>> npointsh = npoints / 2
>>> npointsf = float(npoints)
>>> nbound = 4   # bounds for the truncated normal
>>> normbound = (1+1/npointsf) * nbound   # actual bounds of truncated normal
>>> grid = np.arange(-npointsh, npointsh+2, 1)   # integer grid
>>> gridlimitsnorm = (grid-0.5) / npointsh * nbound   # bin limits for the truncnorm
>>> gridlimits = grid - 0.5
>>> grid = grid[:-1]
>>> probs = np.diff(stats.truncnorm.cdf(gridlimitsnorm, -normbound, normbound))
>>> gridint = grid
>>> normdiscrete = stats.rv_discrete(values=(gridint,
...                np.round(probs, decimals=7)), name='normdiscrete')

From the docstring of rv_discrete: "You can construct an arbitrary discrete rv where P{X=xk} = pk by passing to the rv_discrete initialization method (through the values= keyword) a tuple of sequences (xk, pk) which describes only those values of X (xk) that occur with nonzero probability (pk)."

There are some requirements for this distribution to work. The keyword name is required. The support points of the distribution, xk, have to be integers. Also, I needed to limit the number of decimals. If the last two requirements are not satisfied, an exception may be raised or the resulting numbers may be incorrect.

After defining the distribution, we obtain access to all methods of discrete distributions.

>>> print 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f' % \
...       normdiscrete.stats(moments='mvsk')
mean = -0.0000, variance = 6.3302, skew = 0.0000, kurtosis = -0.0076

>>> nd_std = np.sqrt(normdiscrete.stats(moments='v'))
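A smaller sketch of the same values= interface, with a hypothetical three-point distribution (not part of the original example):

```python
import numpy as np
from scipy import stats

# P{X=-1} = 0.2, P{X=0} = 0.5, P{X=1} = 0.3.  The support points are
# integers and a name is supplied, as the requirements above demand.
xk = np.array([-1, 0, 1])
pk = np.array([0.2, 0.5, 0.3])
threept = stats.rv_discrete(values=(xk, pk), name='threept')

m = threept.mean()   # 0.2*(-1) + 0.5*0 + 0.3*1 = 0.1
c = threept.cdf(0)   # P{X <= 0} = 0.2 + 0.5 = 0.7
```

Once constructed, the instance exposes the full discrete-distribution interface (pmf, cdf, rvs, stats, ...) just like the built-in distributions.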

Generate a random sample and compare observed frequencies with probabilities:

>>> n_sample = 500
>>> np.random.seed(87655678)   # fix the seed for replicability
>>> rvs = normdiscrete.rvs(size=n_sample)
>>> rvsnd = rvs
>>> f, l = np.histogram(rvs, bins=gridlimits)
>>> sfreq = np.vstack([gridint, f, probs*n_sample]).T
>>> print sfreq
[[ -1.00000000e+01   0.00000000e+00   2.95019349e-02]
 [ -9.00000000e+00   0.00000000e+00   1.32294142e-01]
 [ -8.00000000e+00   0.00000000e+00   5.06497902e-01]
 [ -7.00000000e+00   2.00000000e+00   1.65568919e+00]
 [ -6.00000000e+00   1.00000000e+00   4.62125309e+00]
 [ -5.00000000e+00   9.00000000e+00   1.10137298e+01]
 [ -4.00000000e+00   2.60000000e+01   2.24137683e+01]
 [ -3.00000000e+00   3.70000000e+01   3.89503370e+01]
 [ -2.00000000e+00   5.10000000e+01   5.78004747e+01]
 [ -1.00000000e+00   7.10000000e+01   7.32455414e+01]
 [  0.00000000e+00   7.40000000e+01   7.92618251e+01]
 [  1.00000000e+00   8.90000000e+01   7.32455414e+01]
 [  2.00000000e+00   5.50000000e+01   5.78004747e+01]
 [  3.00000000e+00   5.00000000e+01   3.89503370e+01]
 [  4.00000000e+00   1.70000000e+01   2.24137683e+01]
 [  5.00000000e+00   1.10000000e+01   1.10137298e+01]
 [  6.00000000e+00   4.00000000e+00   4.62125309e+00]
 [  7.00000000e+00   3.00000000e+00   1.65568919e+00]
 [  8.00000000e+00   0.00000000e+00   5.06497902e-01]
 [  9.00000000e+00   1.00000000e+00   1.32294142e-01]
 [  1.00000000e+01   0.00000000e+00   2.95019349e-02]]

[Figure: Frequency and Probability of normdiscrete — true vs. sample]

[Figure: Cumulative Frequency and CDF of normdiscrete — true vs. sample]

Next, we can test whether our sample was generated by our normdiscrete distribution. This also verifies whether the random numbers were generated correctly.

The chisquare test requires that there are a minimum number of observations in each bin. We combine the tail bins into larger bins so that they contain enough observations.

>>> f2 = np.hstack([f[:5].sum(), f[5:-5], f[-5:].sum()])
>>> p2 = np.hstack([probs[:5].sum(), probs[5:-5], probs[-5:].sum()])
>>> ch2, pval = stats.chisquare(f2, p2*n_sample)
>>> print 'chisquare for normdiscrete: chi2 = %6.3f pvalue = %6.4f' % (ch2, pval)
chisquare for normdiscrete: chi2 = 12.466 pvalue = 0.4090

The pvalue in this case is high, so we can be quite confident that our random sample was actually generated by the distribution.

1.11.3 Analysing One Sample

First, we create some random variables. We set a seed so that in each run we get identical results to look at. As an example we take a sample from the Student t distribution:

>>> np.random.seed(282629734)
>>> x = stats.t.rvs(10, size=1000)

Here, we set the required shape parameter of the t distribution, which in statistics corresponds to the degrees of freedom, to 10. Using size=1000 means that our sample consists of 1000 independently drawn (pseudo) random numbers. Since we did not specify the keyword arguments loc and scale, those are set to their default values zero and one.

Descriptive Statistics

x is a numpy array, and we have direct access to all array methods, e.g.

>>> print x.max(), x.min()   # equivalent to np.max(x), np.min(x)
5.26327732981 -3.78975572422
>>> print x.mean(), x.var()   # equivalent to np.mean(x), np.var(x)
0.0140610663985 1.28899386208

How do the sample properties compare to their theoretical counterparts?

>>> m, v, s, k = stats.t.stats(10, moments='mvsk')
>>> n, (smin, smax), sm, sv, ss, sk = stats.describe(x)

>>> sstr = 'mean = %6.4f, variance = %6.4f, skew = %6.4f, kurtosis = %6.4f'
>>> print 'distribution:', sstr % (m, v, s, k)
distribution: mean = 0.0000, variance = 1.2500, skew = 0.0000, kurtosis = 1.0000
>>> print 'sample:      ', sstr % (sm, sv, ss, sk)
sample:       mean = 0.0141, variance = 1.2903, skew = 0.2165, kurtosis = 1.0556

Note: stats.describe uses the unbiased estimator for the variance, while np.var is the biased estimator.

For our sample, the sample statistics differ by a small amount from their theoretical counterparts.

T-test and KS-test

We can use the t-test to test whether the mean of our sample differs in a statistically significant way from the theoretical expectation.

>>> print 't-statistic = %6.3f pvalue = %6.4f' % stats.ttest_1samp(x, m)
t-statistic =  0.391 pvalue = 0.6955

The pvalue is 0.7; this means that with an alpha error of, for example, 10%, we cannot reject the hypothesis that the sample mean is equal to zero, the expectation of the standard t-distribution.

As an exercise, we can calculate our t-test also directly without using the provided function, which should give us the same answer, and so it does:

>>> tt = (sm-m)/np.sqrt(sv/float(n))          # t-statistic for mean
>>> pval = stats.t.sf(np.abs(tt), n-1)*2      # two-sided pvalue = Prob(abs(t)>tt)
>>> print 't-statistic = %6.3f pvalue = %6.4f' % (tt, pval)
t-statistic =  0.391 pvalue = 0.6955

The Kolmogorov-Smirnov test can be used to test the hypothesis that the sample comes from the standard t-distribution:

>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 't', (10,))
KS-statistic D =  0.016 pvalue = 0.9606

Again the p-value is high enough that we cannot reject the hypothesis that the random sample really is distributed according to the t-distribution. In real applications, we don't know what the underlying distribution is. If we perform the Kolmogorov-Smirnov test of our sample against the standard normal distribution, then we also cannot reject the hypothesis that our sample was generated by the normal distribution, given that in this example the p-value is almost 40%.

>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % stats.kstest(x, 'norm')
KS-statistic D =  0.028 pvalue = 0.3949

However, the standard normal distribution has a variance of 1, while our sample has a variance of 1.29. If we standardize our sample and test it against the normal distribution, then the p-value is again large enough that we cannot reject the hypothesis that the sample came from the normal distribution.

>>> d, pval = stats.kstest((x-x.mean())/x.std(), 'norm')
>>> print 'KS-statistic D = %6.3f pvalue = %6.4f' % (d, pval)
KS-statistic D =  0.032 pvalue = 0.2402
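The KS statistic D used above is simply the largest distance between the empirical distribution function of the sample and the hypothesized cdf. A sketch of computing it by hand and checking it against stats.kstest (modern scipy result objects index like tuples):

```python
import numpy as np
from scipy import stats

np.random.seed(282629734)
x = stats.t.rvs(10, size=1000)

xs = np.sort(x)
n = len(xs)
cdf = stats.t.cdf(xs, 10)

# One-sided distances of the step-function ecdf above and below the cdf.
d_plus = np.max(np.arange(1, n + 1) / n - cdf)
d_minus = np.max(cdf - np.arange(0, n) / n)
D = max(d_plus, d_minus)

D_scipy = stats.kstest(x, 't', (10,))[0]
```

Both distances are needed because the ecdf is a step function: the supremum of |ecdf − cdf| can be attained just before or just after a jump.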

Note: The Kolmogorov-Smirnov test assumes that we test against a distribution with given parameters. Since in the last case we estimated mean and variance, this assumption is violated, and the distribution of the test statistic, on which the p-value is based, is not correct.

Tails of the distribution

Finally, we can check the upper tail of the distribution. We can use the percent point function ppf, which is the inverse of the cdf function, to obtain the critical values, or, more directly, we can use the inverse of the survival function:

>>> crit01, crit05, crit10 = stats.t.ppf([1-0.01, 1-0.05, 1-0.10], 10)
>>> print 'critical values from ppf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' % (crit01, crit05, crit10)
critical values from ppf at 1%, 5% and 10%   2.7638   1.8125   1.3722
>>> print 'critical values from isf at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' % tuple(stats.t.isf([0.01, 0.05, 0.10], 10))
critical values from isf at 1%, 5% and 10%   2.7638   1.8125   1.3722

>>> freq01 = np.sum(x>crit01) / float(n) * 100
>>> freq05 = np.sum(x>crit05) / float(n) * 100
>>> freq10 = np.sum(x>crit10) / float(n) * 100
>>> print 'sample %%-frequency at 1%%, 5%% and 10%% tail %8.4f %8.4f %8.4f' % (freq01, freq05, freq10)
sample %-frequency at 1%, 5% and 10% tail   1.4000   5.8000  10.5000

In this case, the empirical frequency is quite close to the theoretical probability, but if we repeat this several times, the fluctuations are still pretty large. We can briefly check a larger sample to see if we get a closer match:

>>> freq05l = np.sum(stats.t.rvs(10, size=10000) > crit05) / 10000.0 * 100
>>> print 'larger sample %%-frequency at 5%% tail %8.4f' % freq05l
larger sample %-frequency at 5% tail   4.8000

We can also compare it with the tail of the normal distribution, which has less weight in the tails:

>>> print 'tail prob. of normal at 1%%, 5%% and 10%% %8.4f %8.4f %8.4f' % \
...       tuple(stats.norm.sf([crit01, crit05, crit10])*100)
tail prob. of normal at 1%, 5% and 10%   0.2857   3.4957   8.5003

The chisquare test can be used to test whether, for a finite number of bins, the observed frequencies differ significantly from the probabilities of the hypothesized distribution.

>>> quantiles = [0.0, 0.01, 0.05, 0.1, 1-0.10, 1-0.05, 1-0.01, 1.0]
>>> crit = stats.t.ppf(quantiles, 10)
>>> print crit
[       -Inf -2.76376946 -1.81246112 -1.37218364  1.37218364  1.81246112
  2.76376946         Inf]
>>> n_sample = x.size
>>> freqcount = np.histogram(x, bins=crit)[0]
>>> tprob = np.diff(quantiles)
>>> nprob = np.diff(stats.norm.cdf(crit))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t:      chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t:      chi2 =  2.300 pvalue = 0.8901
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 64.605 pvalue = 0.0000

We see that the standard normal distribution is clearly rejected, while the standard t-distribution cannot be rejected. Since the variance of our sample differs from both standard distributions, we can again redo the test taking the estimates for scale and location into account.
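stats.chisquare, as used above, is just Pearson's statistic sum((observed - expected)**2 / expected), referred to a chi-square distribution with k − 1 degrees of freedom. A minimal check on hypothetical counts:

```python
import numpy as np
from scipy import stats

# Hypothetical observed counts over 6 bins; expected counts are uniform.
observed = np.array([18., 16., 22., 14., 12., 18.])
expected = np.full(6, observed.sum() / 6)

# Pearson's chi-square statistic, computed by hand.
chi2 = np.sum((observed - expected) ** 2 / expected)
chi2_scipy, pval_scipy = stats.chisquare(observed, expected)

# The p-value is the chi2 survival function with k - 1 degrees of freedom.
pval = stats.chi2.sf(chi2, len(observed) - 1)
```

This is also why the tail bins had to be merged earlier: the chi-square reference distribution is only a good approximation when every expected count is reasonably large.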

The fit method of the distributions can be used to estimate the parameters of the distribution, and the test is repeated using the probabilities of the estimated distribution.

>>> tdof, tloc, tscale = stats.t.fit(x)
>>> nloc, nscale = stats.norm.fit(x)
>>> tprob = np.diff(stats.t.cdf(crit, tdof, loc=tloc, scale=tscale))
>>> nprob = np.diff(stats.norm.cdf(crit, loc=nloc, scale=nscale))
>>> tch, tpval = stats.chisquare(freqcount, tprob*n_sample)
>>> nch, npval = stats.chisquare(freqcount, nprob*n_sample)
>>> print 'chisquare for t:      chi2 = %6.3f pvalue = %6.4f' % (tch, tpval)
chisquare for t:      chi2 =  1.577 pvalue = 0.9542
>>> print 'chisquare for normal: chi2 = %6.3f pvalue = %6.4f' % (nch, npval)
chisquare for normal: chi2 = 11.084 pvalue = 0.0858

Taking account of the estimated parameters, we can still reject the hypothesis that our sample came from a normal distribution (at the 5% level), but again, with a p-value of 0.95, we cannot reject the t distribution.

Special tests for normal distributions

Since the normal distribution is the most common distribution in statistics, there are several additional functions available to test whether a sample could have been drawn from a normal distribution.

First we can test whether the skew and kurtosis of our sample differ significantly from those of a normal distribution:

>>> print 'normal skewtest teststat = %6.3f pvalue = %6.4f' % stats.skewtest(x)
normal skewtest teststat =  2.785 pvalue = 0.0054
>>> print 'normal kurtosistest teststat = %6.3f pvalue = %6.4f' % stats.kurtosistest(x)
normal kurtosistest teststat =  4.757 pvalue = 0.0000

These two tests are combined in the normality test:

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % stats.normaltest(x)
normaltest teststat = 30.379 pvalue = 0.0000

In all three tests the p-values are very low, and we can reject the hypothesis that our sample has the skew and kurtosis of the normal distribution.

Since skew and kurtosis of our sample are based on central moments, we get exactly the same results if we test the standardized sample:

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \
...       stats.normaltest((x-x.mean())/x.std())
normaltest teststat = 30.379 pvalue = 0.0000

Because normality is rejected so strongly, we can check whether normaltest gives reasonable results for other cases:

>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \
...       stats.normaltest(stats.t.rvs(10, size=100))
normaltest teststat =  4.698 pvalue = 0.0955
>>> print 'normaltest teststat = %6.3f pvalue = %6.4f' % \
...       stats.normaltest(stats.norm.rvs(size=1000))
normaltest teststat =  0.613 pvalue = 0.7361

When testing for normality of a small sample of t-distributed observations and a large sample of normally distributed observations, in neither case can we reject the null hypothesis that the sample comes from a normal distribution. In the first case this is because the test is not powerful enough to distinguish a t- and a normally distributed random variable in a small sample.
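normaltest is D'Agostino and Pearson's omnibus test: its statistic is the sum of the squared z-scores returned by skewtest and kurtosistest, which can be verified directly (a sketch, reusing the same sample):

```python
import numpy as np
from scipy import stats

np.random.seed(282629734)
x = stats.t.rvs(10, size=1000)

s = stats.skewtest(x)[0]       # z-score for skewness
k = stats.kurtosistest(x)[0]   # z-score for kurtosis
k2 = stats.normaltest(x)[0]    # omnibus statistic

combined = s ** 2 + k ** 2     # should equal the normaltest statistic
```

Under the null hypothesis the combined statistic is approximately chi-square with 2 degrees of freedom, which is where the normaltest p-value comes from.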

1.11.4 Comparing two samples

In the following, we are given two samples, which can come either from the same or from different distributions, and we want to test whether these samples have the same statistical properties.

Comparing means

Test with samples with identical means:

>>> rvs1 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> rvs2 = stats.norm.rvs(loc=5, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs2)
(-0.54890361750888583, 0.5831943748663857)

Test with samples with different means:

>>> rvs3 = stats.norm.rvs(loc=8, scale=10, size=500)
>>> stats.ttest_ind(rvs1, rvs3)
(-4.5334142901750321, 6.507128186505895e-006)

Kolmogorov-Smirnov test for two samples ks_2samp

For the example where both samples are drawn from the same distribution, we cannot reject the null hypothesis, since the pvalue is high:

>>> stats.ks_2samp(rvs1, rvs2)
(0.025999999999999995, 0.99541195173064878)

In the second example, with different location, i.e. means, we can reject the null hypothesis, since the pvalue is below 1%:

>>> stats.ks_2samp(rvs1, rvs3)
(0.11399999999999999, 0.0027132103661283141)

1.12 Multi-dimensional image processing (scipy.ndimage)

1.12.1 Introduction

Image processing and analysis are generally seen as operations on two-dimensional arrays of values. There are, however, a number of fields where images of higher dimensionality must be analyzed. Good examples of these are medical imaging and biological imaging. numpy is suited very well for these types of applications due to its inherent multidimensional nature. The scipy.ndimage package provides a number of general image processing and analysis functions that are designed to operate with arrays of arbitrary dimensionality. The package currently includes functions for linear and non-linear filtering, binary morphology, B-spline interpolation, and object measurements.

1.12.2 Properties shared by all functions

All functions share some common properties. Notably, all functions allow the specification of an output array with the output argument. With this argument you can specify an array that will be changed in-place with the result of the operation. In this case the result is not returned. Usually, using the output argument is more efficient, since an existing array is used to store the result.
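The in-place use of the output argument can be sketched as follows (uniform_filter is just a convenient example; the other ndimage functions accept output in the same way):

```python
import numpy as np
from scipy import ndimage

a = np.arange(12, dtype=np.float64).reshape(3, 4)
out = np.zeros_like(a)

# The result is written into `out` in place, reusing its storage.
ndimage.uniform_filter(a, size=3, output=out)

# Center element: mean of the 3x3 neighborhood of a[1, 1]
# = mean of 0,1,2,4,5,6,8,9,10 = 5.0.
center = out[1, 1]
```

Reusing a preallocated array this way avoids one temporary allocation per call, which matters when filtering many images of the same shape in a loop.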

The type of the arrays returned is dependent on the type of operation, but it is in most cases equal to the type of the input. If, however, the output argument is used, the type of the result is equal to the type of the specified output argument. If no output argument is given, it is still possible to specify what the type of the result should be. This is done by simply assigning the desired numpy type object to the output argument. For example:

>>> correlate(np.arange(10), [1, 2.5])
array([ 0,  2,  6,  9, 13, 16, 20, 23, 27, 30])
>>> correlate(np.arange(10), [1, 2.5], output=np.float64)
array([  0. ,   2.5,   6. ,   9.5,  13. ,  16.5,  20. ,  23.5,  27. ,  30.5])

1.12.3 Filter functions

The functions described in this section all perform some type of spatial filtering of the input array: the elements in the output are some function of the values in the neighborhood of the corresponding input element. We refer to this neighborhood of elements as the filter kernel, which is often rectangular in shape but may also have an arbitrary footprint. Many of the functions described below allow you to define the footprint of the kernel by passing a mask through the footprint parameter. For example, a cross-shaped kernel can be defined as follows:

>>> footprint = array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
>>> footprint
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]])

Usually the origin of the kernel is at the center, calculated by dividing the dimensions of the kernel shape by two. For instance, the origin of a one-dimensional kernel of length three is at the second element. Take for example the correlation of a one-dimensional array with a filter of length 3 consisting of ones:

>>> a = [0, 0, 0, 1, 0, 0, 0]
>>> correlate1d(a, [1, 1, 1])
array([0, 0, 1, 1, 1, 0, 0])

Sometimes it is convenient to choose a different origin for the kernel. For this reason most functions support the origin parameter, which gives the origin of the filter relative to its center. For example:

>>> a = [0, 0, 0, 1, 0, 0, 0]
>>> correlate1d(a, [1, 1, 1], origin = -1)
array([0, 1, 1, 1, 0, 0, 0])

The effect is a shift of the result towards the left. This feature will not be needed very often, but it may be useful, especially for filters that have an even size. A good example is the calculation of backward and forward differences:

>>> a = [0, 0, 1, 1, 1, 0, 0]
>>> correlate1d(a, [-1, 1])               # backward difference
array([ 0,  0,  1,  0,  0, -1,  0])
>>> correlate1d(a, [-1, 1], origin = -1)  # forward difference
array([ 0,  1,  0,  0, -1,  0,  0])

We could also have calculated the forward difference as follows:

>>> correlate1d(a, [0, -1, 1])
array([ 0,  1,  0,  0, -1,  0,  0])

However, using the origin parameter instead of a larger kernel is more efficient. For multi-dimensional kernels, origin can be a number, in which case the origin is assumed to be equal along all axes, or a sequence giving the origin along each axis.
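The backward/forward difference examples above run unchanged as a self-contained script (a sketch, using the default boundary handling):

```python
import numpy as np
from scipy.ndimage import correlate1d

a = np.array([0, 0, 1, 1, 1, 0, 0])

back = correlate1d(a, [-1, 1])              # backward difference
fwd = correlate1d(a, [-1, 1], origin=-1)    # forward difference
fwd2 = correlate1d(a, [0, -1, 1])           # same, via a length-3 kernel
```

The last two results are identical, which is the point made above: shifting the origin of a short kernel is equivalent to (and cheaper than) padding the kernel with a zero.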

Since the output elements are a function of elements in the neighborhood of the input elements, the borders of the array need to be dealt with appropriately by providing values outside the borders. This is done by assuming that the arrays are extended beyond their boundaries according to certain boundary conditions. In the functions described below, the boundary conditions can be selected using the mode parameter, which must be a string with the name of the boundary condition. The following boundary conditions are currently supported:

"nearest"    Use the value at the boundary           [1 2 3]->[1 1 2 3 3]
"wrap"       Periodically replicate the array        [1 2 3]->[3 1 2 3 1]
"reflect"    Reflect the array at the boundary       [1 2 3]->[1 1 2 3 3]
"constant"   Use a constant value, default is 0.0    [1 2 3]->[0 1 2 3 0]

The "constant" mode is special since it needs an additional parameter to specify the constant value that should be used.

Note: The easiest way to implement such boundary conditions would be to copy the data to a larger array and extend the data at the borders according to the boundary conditions. For large arrays and large filter kernels, this would be very memory consuming, and the functions described below therefore use a different approach that does not require allocating large temporary buffers.

Correlation and convolution

The correlate1d function calculates a one-dimensional correlation along the given axis. The lines of the array along the given axis are correlated with the given weights. The weights parameter must be a one-dimensional sequence of numbers.

The function correlate implements multi-dimensional correlation of the input array with a given kernel.

The convolve1d function calculates a one-dimensional convolution along the given axis. The lines of the array along the given axis are convolved with the given weights. The weights parameter must be a one-dimensional sequence of numbers.

Note: A convolution is essentially a correlation after mirroring the kernel. As a result, the origin parameter behaves differently than in the case of a correlation: the result is shifted in the opposite direction.

The function convolve implements multi-dimensional convolution of the input array with a given kernel. The same note applies: because a convolution is a correlation after mirroring the kernel, the origin parameter shifts the result in the opposite direction.

Smoothing filters

The gaussian_filter1d function implements a one-dimensional Gaussian filter. The standard deviation of the Gaussian filter is passed through the parameter sigma. Setting order = 0 corresponds to convolution with a Gaussian kernel. An order of 1, 2, or 3 corresponds to convolution with the first, second or third derivative of a Gaussian. Higher order derivatives are not implemented.

The gaussian_filter function implements a multi-dimensional Gaussian filter. The standard deviations of the Gaussian filter along each axis are passed through the parameter sigma as a sequence of numbers. If sigma is not a sequence but a single number, the standard deviation of the filter is equal along all directions. The order of the filter can be specified separately for each axis. An order of 0 corresponds to convolution with a Gaussian kernel. An order of 1, 2, or 3 corresponds to convolution with the first, second or third derivative of a Gaussian. The order parameter must be a number, to specify the same order for all axes, or a sequence of numbers to specify a different order for each axis. Higher order derivatives are not implemented.

Note: The multi-dimensional filter is implemented as a sequence of one-dimensional Gaussian filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a lower precision, the results may be imprecise because intermediate results may be stored with insufficient precision. This can be prevented by specifying a more precise output type.
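The boundary modes from the table earlier in this section can be compared on the same [1 2 3] example, here with a length-3 sum filter so the extension is visible in the result (a sketch):

```python
import numpy as np
from scipy.ndimage import correlate1d

a = np.array([1, 2, 3])
w = [1, 1, 1]   # length-3 sum filter

nearest = correlate1d(a, w, mode='nearest')    # extends as [1 1 2 3 3]
wrap = correlate1d(a, w, mode='wrap')          # extends as [3 1 2 3 1]
reflect = correlate1d(a, w, mode='reflect')    # extends as [1 1 2 3 3]
constant = correlate1d(a, w, mode='constant')  # extends as [0 1 2 3 0]
```

Note that 'nearest' and 'reflect' coincide on an array this short; they differ once the filter reaches more than one sample past the edge.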

The size parameter. i. The percentile may be less then zero. Either the sizes of a rectangular kernel or the footprint of the kernel must be provided.. for output types with a lower precision. The percentile_filter function calculates a multi-dimensional percentile ﬁlter. if provided. The size parameter. Note: The multi-dimensional ﬁlter is implemented as a sequence of one-dimensional uniform ﬁlters.12. must be a sequence of sizes or a single number in which case the size of the ﬁlter is assumed to be equal along each axis. if provided. The footprint.e. must be an array that deﬁnes the shape of the kernel by its non-zero elements. The size parameter.10. must be a sequence of sizes or a single number in which case the size of the ﬁlter is assumed to be equal along each axis. the results may be imprecise because intermediate results may be stored with insufﬁcient precision. if provided. must be a sequence of sizes or a single number in which case the size of the ﬁlter is assumed to be equal along each axis. The footprint if provided. if provided. percentile = -20 equals percentile = 80. The rank_filter function calculates a multi-dimensional rank ﬁlter. This can be prevented by specifying a more precise output type.e. Either the sizes of a rectangular kernel or the footprint of the kernel must be provided. the results may be imprecise because intermediate results may be stored with insufﬁcient precision. if provided. Multi-dimensional image processing (scipy. Release 0. the sizes along all axis are assumed to be equal. The maximum_filter1d function calculates a one-dimensional maximum ﬁlter of given size along the given axis.. if provided.0rc1 lower precision. Either the sizes of a rectangular kernel or the footprint of the kernel must be provided. must be a sequence of sizes or a single number in which case the size of the ﬁlter is assumed to be equal along each axis. but a single number. Therefore. The size parameter.ndimage) 75 . 
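The boundary-extension table and the order-statistics equivalences above can be checked directly. The following is a small sketch using explicit imports (`numpy as np`, `scipy.ndimage`) rather than the bare names used in this guide's doctest examples:

```python
import numpy as np
from scipy import ndimage

x = np.array([1, 2, 3])
w = [1, 1, 1]  # 3-point sum, so each output is the sum of the extended neighborhood

# per the table: "nearest" extends [1 2 3] to [1 1 2 3 3], "wrap" to [3 1 2 3 1],
# and "constant" with cval=0 to [0 1 2 3 0]
print(ndimage.correlate1d(x, w, mode="nearest"))           # [4 6 8]
print(ndimage.correlate1d(x, w, mode="wrap"))              # [6 6 6]
print(ndimage.correlate1d(x, w, mode="constant", cval=0))  # [3 6 5]

# the order-statistics relations stated above:
a = np.arange(12).reshape(3, 4)
# percentile = -20 equals percentile = 80
assert np.array_equal(ndimage.percentile_filter(a, -20, size=3),
                      ndimage.percentile_filter(a, 80, size=3))
# rank = -1 selects the largest element, i.e. a maximum filter
assert np.array_equal(ndimage.rank_filter(a, -1, size=3),
                      ndimage.maximum_filter(a, size=3))
```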
Derivatives

Derivative filters can be constructed in several ways. The function gaussian_filter1d described in Smoothing filters can be used to calculate derivatives along a given axis using the order parameter. Other derivative filters are the Prewitt and Sobel filters:

The prewitt function calculates a derivative along the given axis.

The sobel function calculates a derivative along the given axis.

The Laplace filter is calculated by the sum of the second derivatives along all axes. Thus, different Laplace filters can be constructed using different second derivative functions. Therefore we provide a general function that takes a function argument to calculate the second derivative along a given direction and to construct the Laplace filter:

The function generic_laplace calculates a Laplace filter using the function passed through derivative2 to calculate second derivatives. The function derivative2 should have the following signature:

derivative2(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)

It should calculate the second derivative along the dimension axis. If output is not None it should use that for the output and return None, otherwise it should return the result. mode, cval have the usual meaning.

The extra_arguments and extra_keywords arguments can be used to pass a tuple of extra arguments and a dictionary of named arguments that are passed to derivative2 at each call.

For example:

>>> def d2(input, axis, output, mode, cval):
...     return correlate1d(input, [1, -2, 1], axis, output, mode, cval, 0)
...
>>> a = zeros((5, 5))
>>> a[2, 2] = 1
>>> generic_laplace(a, d2)
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1., -4.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])

To demonstrate the use of the extra_arguments argument we could do:

>>> def d2(input, axis, output, mode, cval, weights):
...     return correlate1d(input, weights, axis, output, mode, cval, 0)
...
>>> a = zeros((5, 5))
>>> a[2, 2] = 1
>>> generic_laplace(a, d2, extra_arguments = ([1, -2, 1],))
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1., -4.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])

or:

>>> generic_laplace(a, d2, extra_keywords = {'weights': [1, -2, 1]})
array([[ 0.,  0.,  0.,  0.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  1., -4.,  1.,  0.],
       [ 0.,  0.,  1.,  0.,  0.],
       [ 0.,  0.,  0.,  0.,  0.]])

The following two functions are implemented using generic_laplace by providing appropriate functions for the second derivative function:
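The doctests above assume bare names imported from numpy and scipy.ndimage. As a self-contained sketch with explicit imports, the same custom second-derivative callable can be checked against the built-in laplace filter, which uses the same [1, -2, 1] second difference:

```python
import numpy as np
from scipy import ndimage

def d2(input, axis, output, mode, cval):
    # second difference [1, -2, 1] along the given axis
    return ndimage.correlate1d(input, [1, -2, 1], axis, output, mode, cval, 0)

a = np.zeros((5, 5))
a[2, 2] = 1
result = ndimage.generic_laplace(a, d2)

# the impulse response has -4 at the center and 1 at each 4-neighbor
assert result[2, 2] == -4
assert result[1, 2] == result[2, 1] == 1
# and it agrees with the built-in discrete Laplace filter
assert np.allclose(result, ndimage.laplace(a))
```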

The function laplace calculates the Laplace using discrete differentiation for the second derivative (i.e. convolution with [1, -2, 1]).

The function gaussian_laplace calculates the Laplace using gaussian_filter to calculate the second derivatives. The standard deviations of the Gaussian filter along each axis are passed through the parameter sigma as a sequence of numbers. If sigma is not a sequence but a single number, the standard deviation of the filter is equal along all directions.

The gradient magnitude is defined as the square root of the sum of the squares of the gradients in all directions. Similar to the generic Laplace function there is a generic_gradient_magnitude function that calculates the gradient magnitude of an array:

The function generic_gradient_magnitude calculates a gradient magnitude using the function passed through derivative to calculate first derivatives. The function derivative should have the following signature:

derivative(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)

It should calculate the derivative along the dimension axis. If output is not None it should use that for the output and return None, otherwise it should return the result. mode, cval have the usual meaning.

The extra_arguments and extra_keywords arguments can be used to pass a tuple of extra arguments and a dictionary of named arguments that are passed to derivative at each call.

The sobel and prewitt functions fit the required signature and can therefore directly be used with generic_gradient_magnitude. For example, the sobel function fits the required signature:

>>> a = zeros((5, 5))
>>> a[2, 2] = 1
>>> generic_gradient_magnitude(a, sobel)
array([[ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ],
       [ 0.        ,  1.41421356,  2.        ,  1.41421356,  0.        ],
       [ 0.        ,  2.        ,  0.        ,  2.        ,  0.        ],
       [ 0.        ,  1.41421356,  2.        ,  1.41421356,  0.        ],
       [ 0.        ,  0.        ,  0.        ,  0.        ,  0.        ]])

See the documentation of generic_laplace for examples of using the extra_arguments and extra_keywords arguments.

The following function implements the gradient magnitude using Gaussian derivatives:

The function gaussian_gradient_magnitude calculates the gradient magnitude using gaussian_filter to calculate the first derivatives. The standard deviations of the Gaussian filter along each axis are passed through the parameter sigma as a sequence of numbers. If sigma is not a sequence but a single number, the standard deviation of the filter is equal along all directions.

Generic filter functions

To implement filter functions, generic functions can be used that accept a callable object that implements the filtering operation. The iteration over the input and output arrays is handled by these generic functions, along with such details as the implementation of the boundary conditions. Only a callable object implementing a callback function that does the actual filtering work must be provided. The callback function can also be written in C and passed using a PyCObject (see Extending ndimage in C for more information).

The generic_filter1d function implements a generic one-dimensional filter function, where the actual filtering operation must be supplied as a python function (or other callable object). The generic_filter1d function iterates over the lines of an array and calls function at each line. The arguments that are passed to function are one-dimensional arrays of the tFloat64 type. The first contains the values of the current line; it is extended at the beginning and the end, according to the filter_size and origin arguments. The second array should be modified in-place to provide the output values of the line.

For example consider a correlation along one dimension:

>>> a = arange(12).reshape(3, 4)
>>> correlate1d(a, [1, 2, 3])
array([[ 3,  8, 14, 17],
       [27, 32, 38, 41],
       [51, 56, 62, 65]])

The same operation can be implemented using generic_filter1d as follows:

>>> def fnc(iline, oline):
...     oline[...] = iline[:-2] + 2 * iline[1:-1] + 3 * iline[2:]
...
>>> generic_filter1d(a, fnc, 3)
array([[ 3,  8, 14, 17],
       [27, 32, 38, 41],
       [51, 56, 62, 65]])

Here the origin of the kernel was (by default) assumed to be in the middle of the filter of length 3. Therefore, each input line was extended by one value at the beginning and at the end, before the function was called.

Optionally extra arguments can be defined and passed to the filter function. The extra_arguments and extra_keywords arguments can be used to pass a tuple of extra arguments and/or a dictionary of named arguments that are passed to the filter function at each call. For example, we can pass the parameters of our filter as an argument:

>>> def fnc(iline, oline, a, b):
...     oline[...] = iline[:-2] + a * iline[1:-1] + b * iline[2:]
...
>>> generic_filter1d(a, fnc, 3, extra_arguments = (2, 3))
array([[ 3,  8, 14, 17],
       [27, 32, 38, 41],
       [51, 56, 62, 65]])

or:

>>> generic_filter1d(a, fnc, 3, extra_keywords = {'a':2, 'b':3})
array([[ 3,  8, 14, 17],
       [27, 32, 38, 41],
       [51, 56, 62, 65]])

The generic_filter function implements a generic filter function, where the actual filtering operation must be supplied as a python function (or other callable object). The generic_filter function iterates over the array and calls function at each element. The argument of function is a one-dimensional array of the tFloat64 type, that contains the values around the current element that are within the footprint of the filter. The function should return a single value that can be converted to a double precision number. For example consider a correlation:

>>> a = arange(12).reshape(3, 4)
>>> correlate(a, [[1, 0], [0, 3]])
array([[ 0,  3,  7, 11],
       [12, 15, 19, 23],
       [28, 31, 35, 39]])

The same operation can be implemented using generic_filter as follows:

>>> def fnc(buffer):
...     return (buffer * array([1, 3])).sum()
...
>>> generic_filter(a, fnc, footprint = [[1, 0], [0, 1]])
array([[ 0,  3,  7, 11],
       [12, 15, 19, 23],
       [28, 31, 35, 39]])

Here a kernel footprint was specified that contains only two elements. Therefore the filter function receives a buffer of length equal to two, which was multiplied with the proper weights and the result summed.

When calling generic_filter, either the sizes of a rectangular kernel or the footprint of the kernel must be provided. The size parameter, if provided, must be a sequence of sizes or a single number, in which case the size of the filter is assumed to be equal along each axis. The footprint, if provided, must be an array that defines the shape of the kernel by its non-zero elements.

Optionally extra arguments can be defined and passed to the filter function. The extra_arguments and extra_keywords arguments can be used to pass a tuple of extra arguments and/or a dictionary of named arguments that are passed to the filter function at each call. For example, we can pass the parameters of our filter as an argument:

>>> def fnc(buffer, weights):
...     weights = asarray(weights)
...     return (buffer * weights).sum()
...
>>> generic_filter(a, fnc, footprint = [[1, 0], [0, 1]], extra_arguments = ([1, 3],))
array([[ 0,  3,  7, 11],
       [12, 15, 19, 23],
       [28, 31, 35, 39]])

or:

>>> generic_filter(a, fnc, footprint = [[1, 0], [0, 1]], extra_keywords = {'weights': [1, 3]})
array([[ 0,  3,  7, 11],
       [12, 15, 19, 23],
       [28, 31, 35, 39]])

These functions iterate over the lines or elements starting at the last axis, i.e. the last index changes the fastest. This order of iteration is guaranteed for the case that it is important to adapt the filter depending on spatial location. Here is an example of using a class that implements the filter and keeps track of the current coordinates while iterating. It performs the same filter operation as described above for generic_filter, but additionally prints the current coordinates:

>>> a = arange(12).reshape(3, 4)
>>> class fnc_class:
...     def __init__(self, shape):
...         # store the shape:
...         self.shape = shape
...         # initialize the coordinates:
...         self.coordinates = [0] * len(shape)
...
...     def filter(self, buffer):
...         result = (buffer * array([1, 3])).sum()
...         print self.coordinates
...         # calculate the next coordinates:
...         axes = range(len(self.shape))
...         axes.reverse()
...         for jj in axes:
...             if self.coordinates[jj] < self.shape[jj] - 1:
...                 self.coordinates[jj] += 1
...                 break
...             else:
...                 self.coordinates[jj] = 0
...         return result
...
>>> fnc = fnc_class(shape = (3, 4))
>>> generic_filter(a, fnc.filter, footprint = [[1, 0], [0, 1]])
[0, 0]
[0, 1]
[0, 2]
[0, 3]
[1, 0]
[1, 1]
[1, 2]
[1, 3]
[2, 0]
[2, 1]
[2, 2]
[2, 3]
array([[ 0,  3,  7, 11],
       [12, 15, 19, 23],
       [28, 31, 35, 39]])

For the generic_filter1d function the same approach works, except that this function does not iterate over the axis that is being filtered. The example for generic_filter1d then becomes this:

>>> a = arange(12).reshape(3, 4)
>>> class fnc1d_class:
...     def __init__(self, shape, axis = -1):
...         # store the filter axis:
...         self.axis = axis
...         # store the shape:
...         self.shape = shape
...         # initialize the coordinates:
...         self.coordinates = [0] * len(shape)
...
...     def filter(self, iline, oline):
...         oline[...] = iline[:-2] + 2 * iline[1:-1] + 3 * iline[2:]
...         print self.coordinates
...         # calculate the next coordinates:
...         axes = range(len(self.shape))
...         # skip the filter axis:
...         del axes[self.axis]
...         axes.reverse()
...         for jj in axes:
...             if self.coordinates[jj] < self.shape[jj] - 1:
...                 self.coordinates[jj] += 1
...                 break
...             else:
...                 self.coordinates[jj] = 0
...
>>> fnc = fnc1d_class(shape = (3, 4))
>>> generic_filter1d(a, fnc.filter, 3)
[0, 0]
[1, 0]
[2, 0]
array([[ 3,  8, 14, 17],
       [27, 32, 38, 41],
       [51, 56, 62, 65]])
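generic_filter is not limited to reproducing correlations: any reduction of the neighborhood buffer to a single value works. As a sketch (with explicit imports, and assuming a full rectangular 3x3 neighborhood), a callable computing the local range reproduces the difference of the built-in order-statistics filters:

```python
import numpy as np
from scipy import ndimage

a = np.arange(12).reshape(3, 4)

# local range: max minus min over each 3x3 neighborhood
result = ndimage.generic_filter(a, lambda buf: buf.max() - buf.min(), size=3)

# the same operation via the built-in order-statistics filters
expected = ndimage.maximum_filter(a, size=3) - ndimage.minimum_filter(a, size=3)
assert np.array_equal(result, expected)
```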

Fourier domain filters

The functions described in this section perform filtering operations in the Fourier domain. Thus, the input array of such a function should be compatible with an inverse Fourier transform function, such as the functions from the numpy.fft module. We therefore have to deal with arrays that may be the result of a real or a complex Fourier transform. In the case of a real Fourier transform only half of the symmetric complex transform is stored. Additionally, it needs to be known what the length of the axis was that was transformed by the real fft. The functions described here provide a parameter n that in the case of a real transform must be equal to the length of the real transform axis before transformation. If this parameter is less than zero, it is assumed that the input array was the result of a complex Fourier transform. The parameter axis can be used to indicate along which axis the real transform was executed.

The fourier_shift function multiplies the input array with the multi-dimensional Fourier transform of a shift operation for the given shift. The shift parameter is a sequence of shifts for each dimension, or a single value for all dimensions.

The fourier_gaussian function multiplies the input array with the multi-dimensional Fourier transform of a Gaussian filter with given standard deviations sigma. The sigma parameter is a sequence of values for each dimension, or a single value for all dimensions.

The fourier_uniform function multiplies the input array with the multi-dimensional Fourier transform of a uniform filter with given sizes size. The size parameter is a sequence of values for each dimension, or a single value for all dimensions.

The fourier_ellipsoid function multiplies the input array with the multi-dimensional Fourier transform of an elliptically shaped filter with given sizes size. The size parameter is a sequence of values for each dimension, or a single value for all dimensions. This function is only implemented for dimensions 1, 2, and 3.

1.12.4 Interpolation functions

This section describes various interpolation functions that are based on B-spline theory. A good introduction to B-splines can be found in: M. Unser, "Splines: A Perfect Fit for Signal and Image Processing," IEEE Signal Processing Magazine, vol. 16, no. 6, pp. 22-38, November 1999.

Spline pre-filters

Interpolation using splines of an order larger than 1 requires a pre-filtering step. The interpolation functions described in section Interpolation functions apply pre-filtering by calling spline_filter, but they can be instructed not to do this by setting the prefilter keyword equal to False. This is useful if more than one interpolation operation is done on the same array. In this case it is more efficient to do the pre-filtering only once and use a prefiltered array as the input of the interpolation functions. The following two functions implement the pre-filtering:

The spline_filter1d function calculates a one-dimensional spline filter along the given axis. An output array can optionally be provided. The order of the spline must be larger than 1 and less than 6.

The spline_filter function calculates a multi-dimensional spline filter.

Note: The multi-dimensional filter is implemented as a sequence of one-dimensional spline filters. The intermediate arrays are stored in the same data type as the output. Therefore, if an output with a limited precision is requested, the results may be imprecise because intermediate results may be stored with insufficient precision. This can be prevented by specifying an output type of high precision.
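As a sketch of the round trip (using a complex transform, so n keeps its default of -1): multiplying the transform by fourier_gaussian and inverting should match spatial-domain Gaussian filtering with periodic ("wrap") boundaries, up to the kernel truncation that gaussian_filter applies. The tolerance below reflects that truncation and is an assumption of this sketch:

```python
import numpy as np
from scipy import ndimage

a = np.zeros((32, 32))
a[16, 16] = 1.0

# multiply the complex Fourier transform by the Gaussian transfer function,
# then transform back
filtered = np.fft.ifft2(ndimage.fourier_gaussian(np.fft.fft2(a), sigma=2)).real

# spatial-domain equivalent: Gaussian filtering with periodic boundaries
expected = ndimage.gaussian_filter(a, sigma=2, mode="wrap")
assert np.allclose(filtered, expected, atol=1e-3)
```

Note that the zero-frequency gain is 1, so the filtered array keeps the same total sum as the input.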

Interpolation functions

Following functions all employ spline interpolation to effect some type of geometric transformation of the input array. This requires a mapping of the output coordinates to the input coordinates, and therefore the possibility arises that input values outside the boundaries are needed. This problem is solved in the same way as described in Filter functions for the multi-dimensional filter functions. Therefore these functions all support a mode parameter that determines how the boundaries are handled, and a cval parameter that gives a constant value in case that the 'constant' mode is used.

The geometric_transform function applies an arbitrary geometric transform to the input. The given mapping function is called at each point in the output to find the corresponding coordinates in the input. mapping must be a callable object that accepts a tuple of length equal to the output array rank and returns the corresponding input coordinates as a tuple of length equal to the input array rank. The output shape and output type can optionally be provided. If not given they are equal to the input shape and type. For example:

>>> a = arange(12).reshape(4, 3).astype(np.float64)
>>> def shift_func(output_coordinates):
...     return (output_coordinates[0] - 0.5, output_coordinates[1] - 0.5)
...
>>> geometric_transform(a, shift_func)
array([[ 0.    ,  0.    ,  0.    ],
       [ 0.    ,  1.3625,  2.7375],
       [ 0.    ,  4.8125,  6.1875],
       [ 0.    ,  8.2625,  9.6375]])

Optionally extra arguments can be defined and passed to the mapping function. The extra_arguments and extra_keywords arguments can be used to pass a tuple of extra arguments and/or a dictionary of named arguments that are passed to the mapping function at each call. For example, we can pass the shifts in our example as arguments:

>>> def shift_func(output_coordinates, s0, s1):
...     return (output_coordinates[0] - s0, output_coordinates[1] - s1)
...
>>> geometric_transform(a, shift_func, extra_arguments = (0.5, 0.5))
array([[ 0.    ,  0.    ,  0.    ],
       [ 0.    ,  1.3625,  2.7375],
       [ 0.    ,  4.8125,  6.1875],
       [ 0.    ,  8.2625,  9.6375]])

or:

>>> geometric_transform(a, shift_func, extra_keywords = {'s0': 0.5, 's1': 0.5})
array([[ 0.    ,  0.    ,  0.    ],
       [ 0.    ,  1.3625,  2.7375],
       [ 0.    ,  4.8125,  6.1875],
       [ 0.    ,  8.2625,  9.6375]])

Note: The mapping function can also be written in C and passed using a PyCObject. See Extending ndimage in C for more information.

The function map_coordinates applies an arbitrary coordinate transformation using the given array of coordinates. The shape of the output is derived from that of the coordinate array by dropping the first axis. The parameter coordinates is used to find for each point in the output the corresponding coordinates in the input. The values of coordinates along the first axis are the coordinates in the input array at which the output value is found.
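The half-pixel mapping used in the example above is exactly what the dedicated shift function computes. A sketch of that equivalence, with explicit imports:

```python
import numpy as np
from scipy import ndimage

a = np.arange(12).reshape(4, 3).astype(np.float64)

# output coordinate -> input coordinate, half a pixel up and to the left
result = ndimage.geometric_transform(a, lambda c: (c[0] - 0.5, c[1] - 0.5))

# the shift function performs the same spline interpolation
expected = ndimage.shift(a, 0.5)
assert np.allclose(result, expected)
```

Both calls use the same defaults (spline order 3, 'constant' boundary mode), which is why the results agree exactly.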

Since the coordinates may be non-integer coordinates, the value of the input at these coordinates is determined by spline interpolation of the requested order. Here is an example that interpolates a 2D array at (0.5, 0.5) and (2, 1):

>>> a = arange(12).reshape(4, 3).astype(np.float64)
>>> a
array([[  0.,   1.,   2.],
       [  3.,   4.,   5.],
       [  6.,   7.,   8.],
       [  9.,  10.,  11.]])
>>> map_coordinates(a, [[0.5, 2], [0.5, 1]])
array([ 1.3625,  7.    ])

(See also the numarray coordinates function.)

The affine_transform function applies an affine transformation to the input array. The given transformation matrix and offset are used to find for each point in the output the corresponding coordinates in the input. The value of the input at the calculated coordinates is determined by spline interpolation of the requested order. The transformation matrix must be two-dimensional or can also be given as a one-dimensional sequence or array. In the latter case, it is assumed that the matrix is diagonal. A more efficient interpolation algorithm is then applied that exploits the separability of the problem. The output shape and output type can optionally be provided. If not given they are equal to the input shape and type.

The shift function returns a shifted version of the input, using spline interpolation of the requested order.

The zoom function returns a rescaled version of the input, using spline interpolation of the requested order.

The rotate function returns the input array rotated in the plane defined by the two axes given by the parameter axes, using spline interpolation of the requested order. The angle must be given in degrees. If reshape is true, then the size of the output array is adapted to contain the rotated input.

1.12.5 Morphology

Binary morphology

The generate_binary_structure function generates a binary structuring element for use in binary morphology operations. The rank of the structure must be provided. The size of the structure that is returned is equal to three in each direction. The value of each element is equal to one if the square of the Euclidean distance from the element to the center is less than or equal to connectivity. For instance, two-dimensional 4-connected and 8-connected structures are generated as follows:

>>> generate_binary_structure(2, 1)
array([[False,  True, False],
       [ True,  True,  True],
       [False,  True, False]], dtype=bool)
>>> generate_binary_structure(2, 2)
array([[ True,  True,  True],
       [ True,  True,  True],
       [ True,  True,  True]], dtype=bool)
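The squared-distance rule generalizes to higher ranks. A small sketch in three dimensions, where connectivities 1, 2 and 3 successively add the face, edge and corner neighbors of the center:

```python
from scipy import ndimage

# rank 3, connectivity 1: center + 6 face neighbors (squared distance 1)
assert ndimage.generate_binary_structure(3, 1).sum() == 7
# connectivity 2 adds the 12 edge neighbors (squared distance 2)
assert ndimage.generate_binary_structure(3, 2).sum() == 19
# connectivity 3 adds the 8 corner neighbors: the full 3x3x3 cube
assert ndimage.generate_binary_structure(3, 3).sum() == 27
```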

Most binary morphology functions can be expressed in terms of the basic operations erosion and dilation:

The binary_erosion function implements binary erosion of arrays of arbitrary rank with the given structuring element. The origin parameter controls the placement of the structuring element as described in Filter functions. If no structuring element is provided, an element with connectivity equal to one is generated using generate_binary_structure. The border_value parameter gives the value of the array outside the boundaries. The erosion is repeated iterations times. If iterations is less than one, the erosion is repeated until the result does not change anymore. If a mask array is given, only those elements with a true value at the corresponding mask element are modified at each iteration.

The binary_dilation function implements binary dilation of arrays of arbitrary rank with the given structuring element. The origin parameter controls the placement of the structuring element as described in Filter functions. If no structuring element is provided, an element with connectivity equal to one is generated using generate_binary_structure. The border_value parameter gives the value of the array outside the boundaries. The dilation is repeated iterations times. If iterations is less than one, the dilation is repeated until the result does not change anymore. If a mask array is given, only those elements with a true value at the corresponding mask element are modified at each iteration.

Here is an example of using binary_dilation to find all elements that touch the border, by repeatedly dilating an empty array from the border using the data array as the mask:

>>> struct = array([[0, 1, 0], [1, 1, 1], [0, 1, 0]])
>>> a = array([[1, 0, 0, 0, 0], [1, 1, 0, 1, 0], [0, 0, 1, 1, 0], [0, 0, 0, 0, 0]])
>>> a
array([[1, 0, 0, 0, 0],
       [1, 1, 0, 1, 0],
       [0, 0, 1, 1, 0],
       [0, 0, 0, 0, 0]])
>>> binary_dilation(zeros(a.shape), struct, -1, a, border_value=1)
array([[ True, False, False, False, False],
       [ True,  True, False, False, False],
       [False, False, False, False, False],
       [False, False, False, False, False]], dtype=bool)

The binary_erosion and binary_dilation functions both have an iterations parameter which allows the erosion or dilation to be repeated a number of times. Repeating an erosion or a dilation with a given structure n times is equivalent to an erosion or a dilation with a structure that is n-1 times dilated with itself. A function is provided that allows the calculation of a structure that is dilated a number of times with itself:

The iterate_structure function returns a structure by dilation of the input structure iterations - 1 times with itself. For instance:

>>> struct = generate_binary_structure(2, 1)
>>> struct
array([[False,  True, False],
       [ True,  True,  True],
       [False,  True, False]], dtype=bool)
>>> iterate_structure(struct, 2)
array([[False, False,  True, False, False],
       [False,  True,  True,  True, False],
       [ True,  True,  True,  True,  True],
       [False,  True,  True,  True, False],
       [False, False,  True, False, False]], dtype=bool)

If the origin of the original structure is equal to 0, then it is also equal to 0 for the iterated structure. If not, the origin must also be adapted if the equivalent of the iterations erosions or dilations must be achieved with the iterated structure. The adapted origin is simply obtained by multiplying with the number of iterations. For convenience the iterate_structure function also returns the adapted origin if the origin parameter is not None:

>>> iterate_structure(struct, 2, -1)
(array([[False, False,  True, False, False],
       [False,  True,  True,  True, False],
       [ True,  True,  True,  True,  True],
       [False,  True,  True,  True, False],
       [False, False,  True, False, False]], dtype=bool), [-2, -2])

Other morphology operations can be defined in terms of erosion and dilation. The following functions provide a few of these operations for convenience:

The binary_opening function implements binary opening of arrays of arbitrary rank with the given structuring element. Binary opening is equivalent to a binary erosion followed by a binary dilation with the same structuring element. The origin parameter controls the placement of the structuring element as described in Filter functions. If no structuring element is provided, an element with connectivity equal to one is generated using generate_binary_structure. The iterations parameter gives the number of erosions that is performed followed by the same number of dilations.

The binary_closing function implements binary closing of arrays of arbitrary rank with the given structuring element. Binary closing is equivalent to a binary dilation followed by a binary erosion with the same structuring element. The origin parameter controls the placement of the structuring element as described in Filter functions. If no structuring element is provided, an element with connectivity equal to one is generated using generate_binary_structure. The iterations parameter gives the number of dilations that is performed followed by the same number of erosions.

The binary_hit_or_miss function implements a binary hit-or-miss transform of arrays of arbitrary rank with the given structuring elements. The hit-or-miss transform is calculated by erosion of the input with the first structure, erosion of the logical not of the input with the second structure, followed by the logical and of these two erosions. The origin parameters control the placement of the structuring elements as described in Filter functions. If origin2 equals None it is set equal to the origin1 parameter. If the first structuring element is not provided, a structuring element with connectivity equal to one is generated using generate_binary_structure. If structure2 is not provided, it is set equal to the logical not of structure1.

The binary_fill_holes function is used to close holes in objects in a binary image, where the structure defines the connectivity of the holes. If no structuring element is provided, an element with connectivity equal to one is generated using generate_binary_structure.

Grey-scale morphology

Grey-scale morphology operations are the equivalents of binary morphology operations that operate on arrays with arbitrary values. Below we describe the grey-scale equivalents of erosion, dilation, opening and closing. These operations are implemented in a similar fashion as the filters described in Filter functions, and we refer to this section for the description of filter kernels and footprints, and the handling of array borders. The grey-scale morphology operations optionally take a structure parameter that gives the values of the structuring element. If this parameter is not given, the structuring element is assumed to be flat with a value equal to zero.
The shape of the structure can optionally be deﬁned by the footprint parameter. Binary opening is equivalent to a binary erosion followed by a binary dilation with the same structuring element. The binary_hit_or_miss function implements a binary hit-or-miss transform of arrays of arbitrary rank with the given structuring elements. an element with connectivity equal to one is generated using generate_binary_structure. Similar to binary erosion and dilation there are operations for grey-scale erosion and dilation: The grey_erosion function calculates a multi-dimensional grey. dilation.
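As a quick illustration of the flat, size-based grey-scale operations described above, here is a small sketch (not part of the original guide; the array and the 3x3 kernel are arbitrary examples):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((7, 7))
a[2:5, 2:5] = 1.0  # a 3x3 square of ones

# With only a size given, flat grey-scale erosion and dilation act as
# minimum and maximum filters over the rectangular window:
eroded = ndimage.grey_erosion(a, size=(3, 3))    # only the centre of the square survives
dilated = ndimage.grey_dilation(a, size=(3, 3))  # the square grows to 5x5

# Opening is an erosion followed by a dilation with the same element;
# for this square it recovers the original input exactly:
opened = ndimage.grey_opening(a, size=(3, 3))
print(np.allclose(opened, a))  # True
```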

The grey_closing function implements grey-scale closing of arrays of arbitrary rank. Grey-scale closing is equivalent to a grey-scale dilation followed by a grey-scale erosion.

The morphological_gradient function implements a grey-scale morphological gradient of arrays of arbitrary rank. The grey-scale morphological gradient is equal to the difference of a grey-scale dilation and a grey-scale erosion.

The morphological_laplace function implements a grey-scale morphological laplace of arrays of arbitrary rank. The grey-scale morphological laplace is equal to the sum of a grey-scale dilation and a grey-scale erosion minus twice the input.

The white_tophat function implements a white top-hat filter of arrays of arbitrary rank. The white top-hat is equal to the difference of the input and a grey-scale opening.

The black_tophat function implements a black top-hat filter of arrays of arbitrary rank. The black top-hat is equal to the difference of a grey-scale closing and the input.

1.12.6 Distance transforms

Distance transforms are used to calculate the minimum distance from each element of an object to the background. The following functions implement distance transforms for three different distance metrics: Euclidean, City Block, and Chessboard distances.

The function distance_transform_cdt uses a chamfer type algorithm to calculate the distance transform of the input, by replacing each object element (defined by values larger than zero) with the shortest distance to the background (all non-object elements). The structure determines the type of chamfering that is done. If the structure is equal to 'cityblock' a structure is generated using generate_binary_structure with a squared distance equal to 1. If the structure is equal to 'chessboard', a structure is generated using generate_binary_structure with a squared distance equal to the rank of the array. These choices correspond to the common interpretations of the cityblock and the chessboard distance metrics in two dimensions. In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result. The return_distances and return_indices flags can be used to indicate if the distance transform, the feature transform, or both must be returned. The distances and indices arguments can be used to give optional output arrays that must be of the correct size and type (both Int32). The basics of the algorithm used to implement this function is described in: G. Borgefors, "Distance transformations in arbitrary dimensions.", Computer Vision, Graphics, and Image Processing, 27:321-345, 1984.

The function distance_transform_edt calculates the exact euclidean distance transform of the input, by replacing each object element (defined by values larger than zero) with the shortest euclidean distance to the background (all non-object elements). In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result. The return_distances and return_indices flags can be used to indicate if the distance transform, the feature transform, or both must be returned. Optionally the sampling along each axis can be given by the sampling parameter, which should be a sequence of length equal to the input rank, or a single number in which the sampling is assumed to be equal along all axes. The distances and indices arguments can be used to give optional output arrays that must be of the correct size and type (Float64 and Int32).
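To make the metrics concrete, here is a small sketch (not from the original guide) comparing the exact Euclidean transform with the chamfer city-block transform on a square object; the input array is an arbitrary example:

```python
import numpy as np
from scipy import ndimage

a = np.zeros((7, 7), dtype=int)
a[1:6, 1:6] = 1  # a 5x5 square object on a zero background

# Exact Euclidean distance from every object element to the background:
d_edt = ndimage.distance_transform_edt(a)

# Chamfer city-block (Manhattan) distances, as integers:
d_cdt = ndimage.distance_transform_cdt(a, metric='cityblock')

# The square's centre is three elements away from the background under
# both metrics; background elements stay at distance zero:
print(d_edt[3, 3], d_cdt[3, 3], d_edt[0, 0])
```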

The algorithm used to implement this function is described in: C. R. Maurer, Jr., R. Qi, and V. Raghavan, "A linear time algorithm for computing exact euclidean distance transforms of binary images in arbitrary dimensions.", IEEE Trans. PAMI 25, 265-270, 2003.

The function distance_transform_bf uses a brute-force algorithm to calculate the distance transform of the input, by replacing each object element (defined by values larger than zero) with the shortest distance to the background (all non-object elements). The metric must be one of "euclidean", "cityblock", or "chessboard". In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result. The return_distances and return_indices flags can be used to indicate if the distance transform, the feature transform, or both must be returned. Optionally the sampling along each axis can be given by the sampling parameter, which should be a sequence of length equal to the input rank, or a single number in which the sampling is assumed to be equal along all axes. This parameter is only used in the case of the euclidean distance transform. The distances and indices arguments can be used to give optional output arrays that must be of the correct size and type (Float64 and Int32).

Note: This function uses a slow brute-force algorithm. The function distance_transform_edt can be used to more efficiently calculate the exact euclidean distance transform, while the function distance_transform_cdt can be used to more efficiently calculate cityblock and chessboard distance transforms.

1.12.7 Segmentation and labeling

Segmentation is the process of separating objects of interest from the background. The most simple approach is probably intensity thresholding, which is easily done with numpy functions:

>>> a = array([[1,2,2,1,1,0],
...            [0,2,3,1,2,0],
...            [1,1,1,3,3,2],
...            [1,1,1,1,2,1]])
>>> where(a > 1, 1, 0)
array([[0, 1, 1, 0, 0, 0],
       [0, 1, 1, 0, 1, 0],
       [0, 0, 0, 1, 1, 1],
       [0, 0, 0, 0, 1, 0]])

The result is a binary image, in which the individual objects still need to be identified and labeled. The function label generates an array where each object is assigned a unique number:

The label function generates an array where the objects in the input are labeled with an integer index. It returns a tuple consisting of the array of object labels and the number of objects found, unless the output parameter is given, in which case only the number of objects is returned. The connectivity of the objects is defined by a structuring element. For instance, in two dimensions using a four-connected structuring element gives:

>>> a = array([[0,1,1,0,0,0],[0,1,1,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,0]])
>>> s = [[0, 1, 0], [1,1,1], [0,1,0]]
>>> label(a, s)
(array([[0, 1, 1, 0, 0, 0],
       [0, 1, 1, 0, 2, 0],
       [0, 0, 0, 2, 2, 2],
       [0, 0, 0, 0, 2, 0]]), 2)
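The effect of the structuring element on connectivity can be checked directly. The following sketch (not from the original guide; the tiny test array is an arbitrary example) counts objects under both the default 4-connected structure and an 8-connected one:

```python
import numpy as np
from scipy import ndimage

a = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])

# Default structure (connectivity one, i.e. 4-connected in 2D):
# the two diagonally touching pixels are separate objects.
labels4, n4 = ndimage.label(a)

# An 8-connected structuring element merges them into a single object.
s8 = np.ones((3, 3), dtype=int)
labels8, n8 = ndimage.label(a, structure=s8)
print(n4, n8)  # 2 1
```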

These two objects are not connected because there is no way in which we can place the structuring element such that it overlaps with both objects. However, an 8-connected structuring element results in only a single object:

>>> a = array([[0,1,1,0,0,0],[0,1,1,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,0]])
>>> s = [[1,1,1], [1,1,1], [1,1,1]]
>>> label(a, s)[0]
array([[0, 1, 1, 0, 0, 0],
       [0, 1, 1, 0, 1, 0],
       [0, 0, 0, 1, 1, 1],
       [0, 0, 0, 0, 1, 0]])

If no structuring element is provided, one is generated by calling generate_binary_structure (see Binary morphology) using a connectivity of one (which in 2D is the 4-connected structure of the first example). The input can be of any type, any value not equal to zero is taken to be part of an object. This is useful if you need to 're-label' an array of object indices, for instance after removing unwanted objects. Just apply the label function again to the index array. For instance:

>>> l, n = label([1, 0, 1, 0, 1])
>>> l
array([1, 0, 2, 0, 3])
>>> l = where(l != 2, l, 0)
>>> l
array([1, 0, 0, 0, 3])
>>> label(l)[0]
array([1, 0, 0, 0, 2])

Note: The structuring element used by label is assumed to be symmetric.

There is a large number of other approaches for segmentation, for instance from an estimation of the borders of the objects that can be obtained for instance by derivative filters. One such an approach is watershed segmentation. The function watershed_ift generates an array where each object is assigned a unique label, from an array that localizes the object borders, generated for instance by a gradient magnitude filter. It uses an array containing initial markers for the objects:

The watershed_ift function applies a watershed from markers algorithm, using an Iterative Forest Transform, as described in: P. Felkel, R. Wegenkittl, and M. Bruckschwaiger, "Implementation and Complexity of the Watershed-from-Markers Algorithm Computed as a Minimal Cost Forest.", Eurographics 2001, pp. C:26-35.

The inputs of this function are the array to which the transform is applied, and an array of markers that designate the objects by a unique label, where any non-zero value is a marker. For instance:

>>> input = array([[0, 0, 0, 0, 0, 0, 0],
...                [0, 1, 1, 1, 1, 1, 0],
...                [0, 1, 0, 0, 0, 1, 0],
...                [0, 1, 0, 0, 0, 1, 0],
...                [0, 1, 0, 0, 0, 1, 0],
...                [0, 1, 1, 1, 1, 1, 0],
...                [0, 0, 0, 0, 0, 0, 0]], np.uint8)
>>> markers = array([[1, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 2, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0]], np.int8)
>>> watershed_ift(input, markers)
array([[1, 1, 1, 1, 1, 1, 1],
       [1, 1, 2, 2, 2, 1, 1],
       [1, 2, 2, 2, 2, 2, 1],
       [1, 2, 2, 2, 2, 2, 1],
       [1, 2, 2, 2, 2, 2, 1],
       [1, 1, 2, 2, 2, 1, 1],
       [1, 1, 1, 1, 1, 1, 1]], dtype=int8)

Here two markers were used to designate an object (marker = 2) and the background (marker = 1). The order in which these are processed is arbitrary: moving the marker for the background to the lower right corner of the array yields a different result:

>>> markers = array([[0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 2, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 1]], np.int8)
>>> watershed_ift(input, markers)
array([[1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1],
       [1, 1, 2, 2, 2, 1, 1],
       [1, 1, 2, 2, 2, 1, 1],
       [1, 1, 2, 2, 2, 1, 1],
       [1, 1, 1, 1, 1, 1, 1],
       [1, 1, 1, 1, 1, 1, 1]], dtype=int8)

The result is that the object (marker = 2) is smaller because the second marker was processed earlier. This may not be the desired effect if the first marker was supposed to designate a background object. Therefore watershed_ift treats markers with a negative value explicitly as background markers and processes them after the normal markers. For instance, replacing the first marker by a negative marker gives a result similar to the first example:

>>> markers = array([[0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 2, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, 0],
...                  [0, 0, 0, 0, 0, 0, -1]], np.int8)
>>> watershed_ift(input, markers)
array([[-1, -1, -1, -1, -1, -1, -1],
       [-1, -1,  2,  2,  2, -1, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1, -1,  2,  2,  2, -1, -1],
       [-1, -1, -1, -1, -1, -1, -1]], dtype=int8)

The connectivity of the objects is defined by a structuring element. If no structuring element is provided, one is generated by calling generate_binary_structure (see Binary morphology) using a connectivity of one (which in 2D is a 4-connected structure.) For example, using an 8-connected structure with the last example yields a different object:

>>> watershed_ift(input, markers,
...               structure = [[1,1,1], [1,1,1], [1,1,1]])
array([[-1, -1, -1, -1, -1, -1, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1,  2,  2,  2,  2,  2, -1],
       [-1, -1, -1, -1, -1, -1, -1]], dtype=int8)

Note: The implementation of watershed_ift limits the data types of the input to UInt8 and UInt16.

1.12.8 Object measurements

Given an array of labeled objects, the properties of the individual objects can be measured. The find_objects function can be used to generate a list of slices that, for each object, give the smallest sub-array that fully contains the object:

The find_objects function finds all objects in a labeled array and returns a list of slices that correspond to the smallest regions in the array that contains the object. For instance:

>>> a = array([[0,1,1,0,0,0],[0,1,1,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,0]])
>>> l, n = label(a)
>>> f = find_objects(l)
>>> a[f[0]]
array([[1, 1],
       [1, 1]])
>>> a[f[1]]
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]])

find_objects returns slices for all objects, unless the max_label parameter is larger than zero, in which case only the first max_label objects are returned. If an index is missing in the label array, None is returned instead of a slice. For example:

>>> find_objects([1, 0, 3, 4], max_label = 3)
[(slice(0, 1, None),), None, (slice(2, 3, None),)]

The list of slices generated by find_objects is useful to find the position and dimensions of the objects in the array, but can also be used to perform measurements on the individual objects. Say we want to find the sum of the intensities of an object in image:

>>> image = arange(4 * 6).reshape(4, 6)
>>> mask = array([[0,1,1,0,0,0],[0,1,1,0,1,0],[0,0,0,1,1,1],[0,0,0,0,1,0]])
>>> labels = label(mask)[0]
>>> slices = find_objects(labels)

Then we can calculate the sum of the elements in the second object:

>>> where(labels[slices[1]] == 2, image[slices[1]], 0).sum()
80

That is however not particularly efficient, and may also be more complicated for other types of measurements. Therefore a few measurements functions are defined that accept the array of object labels and the index of the object to be measured. For instance calculating the sum of the intensities can be done by:

>>> sum(image, labels, 2)
80

For large arrays and small objects it is more efficient to call the measurement functions after slicing the array:

>>> sum(image[slices[1]], labels[slices[1]], 2)
80

Alternatively, we can do the measurements for a number of labels with a single function call, returning a list of results. For instance, to measure the sum of the values of the background and the second object in our example we give a list of labels:

>>> sum(image, labels, [0, 2])
array([178., 80.])

The measurement functions described below all support the index parameter to indicate which object(s) should be measured. The default value of index is None. This indicates that all elements where the label is larger than zero should be treated as a single object and measured. Thus, in this case the labels array is treated as a mask defined by the elements that are larger than zero. If index is a number or a sequence of numbers it gives the labels of the objects that are measured. If index is a sequence, a list of the results is returned. Functions that return more than one result return their result as a tuple if index is a single number, or as a tuple of lists if index is a sequence.

The sum function calculates the sum of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The mean function calculates the mean of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The variance function calculates the variance of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The standard_deviation function calculates the standard deviation of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The minimum function calculates the minimum of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The maximum function calculates the maximum of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The minimum_position function calculates the position of the minimum of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The maximum_position function calculates the position of the maximum of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The extrema function calculates the minimum, the maximum, and their positions, of the elements of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation. The result is a tuple giving the minimum, the maximum, the position of the minimum and the position of the maximum. The result is the same as a tuple formed by the results of the functions minimum, maximum, minimum_position, and maximum_position that are described above.
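As a concrete sketch of the index parameter (not from the original guide; the image and label arrays below are arbitrary examples), a single label returns a scalar while a sequence returns one result per label:

```python
import numpy as np
from scipy import ndimage

image = np.arange(16.).reshape(4, 4)
labels = np.array([[1, 1, 0, 0],
                   [1, 1, 0, 0],
                   [0, 0, 2, 2],
                   [0, 0, 2, 2]])

# Single index: one scalar result (0 + 1 + 4 + 5 = 10).
total = ndimage.sum(image, labels, 1)

# Sequence of indices: one result per requested label.
totals = ndimage.sum(image, labels, [1, 2])

# Position functions return coordinates within the full input array:
pos = ndimage.maximum_position(image, labels, 2)
print(total, list(totals), pos)
```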

The center_of_mass function calculates the center of mass of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation.

The histogram function calculates a histogram of the object with label(s) given by index, using the labels array for the object labels. If index is None, all elements with a non-zero label value are treated as a single object. If labels is None, all elements of input are used in the calculation. Histograms are defined by their minimum (min), maximum (max) and the number of bins (bins). They are returned as one-dimensional arrays of type Int32.

1.12.9 Extending ndimage in C

A few functions in the scipy.ndimage take a call-back argument. This can be a python function, but also a PyCObject containing a pointer to a C function. To use this feature, you must write your own C extension that defines the function, and define a Python function that returns a PyCObject containing a pointer to this function.

An example of a function that supports this is geometric_transform (see Interpolation functions). You can pass it a python callable object that defines a mapping from all output coordinates to corresponding coordinates in the input array. This mapping function can also be a C function, which generally will be much more efficient, since the overhead of calling a python function at each element is avoided.

For example to implement a simple shift function we define the following function:

static int
_shift_function(int *output_coordinates, double* input_coordinates,
                int output_rank, int input_rank, void *callback_data)
{
  int ii;
  /* get the shift from the callback data pointer: */
  double shift = *(double*)callback_data;
  /* calculate the coordinates: */
  for(ii = 0; ii < input_rank; ii++)
    input_coordinates[ii] = output_coordinates[ii] - shift;
  /* return OK status: */
  return 1;
}

This function is called at every element of the output array, passing the current coordinates in the output_coordinates array. On return, the input_coordinates array must contain the coordinates at which the input is interpolated. The ranks of the input and output array are passed through output_rank and input_rank. The value of the shift is passed through the callback_data argument, which is a pointer to void. The function returns an error status, in this case always 1, since no error can occur.

A pointer to this function and a pointer to the shift value must be passed to geometric_transform. Both are passed by a single PyCObject which is created by the following python extension function:

static PyObject *
py_shift_function(PyObject *obj, PyObject *args)
{
  double shift = 0.0;
  if (!PyArg_ParseTuple(args, "d", &shift)) {
    PyErr_SetString(PyExc_RuntimeError, "invalid parameters");
    return NULL;
  } else {
    /* assign the shift to a dynamically allocated location: */
    double *cdata = (double*)malloc(sizeof(double));
    *cdata = shift;
    /* wrap function and callback_data in a CObject: */
    return PyCObject_FromVoidPtrAndDesc(_shift_function, cdata,
                                        _destructor);
  }
}

The value of the shift is obtained and then assigned to a dynamically allocated memory location. Both this data pointer and the function pointer are then wrapped in a PyCObject, which is returned. Additionally, a pointer to a destructor function is given, that will free the memory we allocated for the shift value when the PyCObject is destroyed. This destructor is very simple:

static void
_destructor(void* cobject, void *cdata)
{
  if (cdata)
    free(cdata);
}

To use these functions, an extension module is built:

static PyMethodDef methods[] = {
  {"shift_function", (PyCFunction)py_shift_function, METH_VARARGS, ""},
  {NULL, NULL, 0, NULL}
};

void
initexample(void)
{
  Py_InitModule("example", methods);
}

This extension can then be used in Python, for example:

>>> import example
>>> array = arange(12).reshape(4, 3).astype(np.float64)
>>> fnc = example.shift_function(0.5)
>>> geometric_transform(array, fnc)
array([[ 0.    ,  0.    ,  0.    ],
       [ 0.    ,  1.3625,  2.7375],
       [ 0.    ,  4.8125,  6.1875],
       [ 0.    ,  8.2625,  9.6375]])

C callback functions for use with ndimage functions must all be written according to this scheme.

1.12.10 Functions that support C callback functions

The ndimage functions that support C callback functions are described here. Obviously, the prototype of the function that is provided to these functions must match exactly that what they expect. Therefore we give here the prototypes of the callback functions. All these callback functions accept a void callback_data pointer that must be wrapped in a PyCObject using the Python PyCObject_FromVoidPtrAndDesc function, which can also accept a pointer to a destructor function to free any memory allocated for callback_data. If callback_data is not needed, PyCObject_FromVoidPtr may be used instead. The callback functions must return an integer error status that is equal to zero if something went wrong, or one otherwise. If an error occurs, you should normally set the python error status with an informative message before returning, otherwise a default error message is set by the calling function.

The function generic_filter (see Generic filter functions) accepts a callback function with the following prototype:

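Before writing a C callback, the same logic can be prototyped with plain Python callables, which these functions also accept (slower, but convenient for checking the behaviour). The following sketch is not part of the original guide; it mirrors the C shift mapping for geometric_transform and shows a simple range filter for generic_filter:

```python
import numpy as np
from scipy import ndimage

a = np.arange(12.).reshape(4, 3)

# geometric_transform: map each output coordinate to the input
# coordinate it samples from (the Python analogue of _shift_function):
def shift_function(output_coordinates, shift=0.5):
    return tuple(c - shift for c in output_coordinates)

shifted = ndimage.geometric_transform(a, shift_function)
print(shifted[0])  # first row is all zeros: it maps outside the input

# generic_filter: the callback receives the elements within the
# footprint as a one-dimensional buffer and returns a single value:
def range_filter(buffer):
    return buffer.max() - buffer.min()

ranged = ndimage.generic_filter(a, range_filter, size=3)
```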
The calling function iterates over the elements of the input and output arrays, calling the callback function at each element. The elements within the footprint of the filter at the current element are passed through the buffer parameter, and the number of elements within the footprint through filter_size. The calculated value should be returned in the return_value argument.

The function generic_filter1d (see Generic filter functions) accepts a callback function with the following prototype:

The calling function iterates over the lines of the input and output arrays, calling the callback function at each line. The current line is extended according to the border conditions set by the calling function, and the result is copied into the array that is passed through the input_line array. The length of the input line (after extension) is passed through input_length. The callback function should apply the 1D filter and store the result in the array passed through output_line. The length of the output line is passed through output_length.

The function geometric_transform (see Interpolation functions) expects a function with the following prototype:

The calling function iterates over the elements of the output array, calling the callback function at each element. The coordinates of the current output element are passed through output_coordinates. The callback function must return the coordinates at which the input must be interpolated in input_coordinates. The rank of the input and output arrays are given by input_rank and output_rank respectively.

1.13 File IO (scipy.io)

See Also: numpy-reference.routines.io (in numpy)

1.13.1 MATLAB files

loadmat(file_name[, mdict, appendmat, **kwargs])  Load MATLAB file
savemat(file_name, mdict[, appendmat, ...])       Save a dictionary of names and arrays into a MATLAB-style .mat file.

Getting started:

>>> import scipy.io as sio

If you are using IPython, try tab completing on sio. You'll find:

sio.loadmat
sio.savemat

These are the high-level functions you will most likely use. You'll also find:

sio.matlab

This is the package from which loadmat and savemat are imported. Within sio.matlab, you will find the mio module, containing the machinery that loadmat and savemat use. From time to time you may find yourself re-using this machinery.

:.] [ 2. Octave has MATLAB-compatible save / load functions.].mat Now. 11. 9. let’s start in Octave. 5. 7. 4.]]]). 10.:.. 8..10. to Python: >>> mat_contents = sio. [1 3 4]) a = ans(:. 11. 3.3.mat’) >>> print mat_contents {’a’: array([[[ 1. File IO (scipy.mat a % MATLAB 6 compatible octave:4> ls octave_a. 5. ’__version__’: ’1. you want to pass some variables from Scipy / Numpy into MATLAB. 12. 4) Now let’s try the other way round: 1.... 7.:.0 MAT-file. Or. ’__globals__’: []} >>> oct_a = mat_contents[’a’] >>> print oct_a [[[ 1. ’__header__’: ’MATLAB 5..13.SciPy Reference Guide. Release 0. [ 3.mat ﬁle that you want to read into Scipy. 6..]. 10.. 8..0rc1 How do I start? You may have a .loadmat(’octave_a. [ 2.1) = 1 2 3 ans(:. 6.:. 2010-05-30 02:13:40 UTC’.3) = 7 8 9 ans(:.]]] >>> print oct_a. Start Octave (octave at the command line for me): octave:1> a = 1:12 a = 1 2 3 4 5 6 7 8 9 10 11 12 octave:2> a = reshape(a.shape (1. To save us using a MATLAB license.4) = 10 11 12 octave:3> save -6 octave_a.io) 95 .] [ 3. written by Octave 3.mat octave_a.0’. 12.2) = 4 5 6 ans(:.2. 4. 9.

’field2’. 1. where a single struct is an array of shape (1. 1). SciPy Tutorial .mat’.savemat(’np_vector. 2) my_struct = { 96 Chapter 1.py:196: FutureWarning oned_as=oned_as) Then back to Octave: octave:5> load np_vector. oned_as=’row’) We can load this in Octave or MATLAB: octave:8> load np_vector. except the ﬁeld names must be strings.SciPy Reference Guide. structs are in fact arrays of structs. In the future.0rc1 >>> import numpy as np >>> vect = np.mat octave:6> vect vect = 0 1 2 3 4 5 6 7 8 9 octave:7> size(vect) ans = 10 1 Note the deprecation warning.mat octave:9> vect vect = 0 1 2 3 4 5 6 7 8 9 octave:10> size(vect) ans = 1 10 MATLAB structs MATLAB structs are a little bit like Python dicts. As for all objects in MATLAB. Release 0.mat’. octave:11> my_struct = struct(’field1’.shape (10.) >>> sio. {’vect’:vect}. Any MATLAB object can be a value of a ﬁeld. this will default to row instead of column: >>> sio.savemat(’np_vector. {’vect’:vect}) /Users/mb312/usr/local/lib/python2.arange(10) >>> print vect.6/site-packages/scipy/io/matlab/mio.10. The oned_as keyword determines the way in which one-dimensional vectors are stored.

[[2. If you want all length 1 dimensions squeezed out.0]]. Release 0. ’|O8’)]).mat my_struct We can load this in Python: >>> mat_contents = sio.8. struct_as_record=False) >>> oct_struct = mat_contents[’my_struct’] >>> oct_struct[0. struct_as_record=False. in MATLAB.mat’.0]])]].io) 97 .13.it’s a scalar Traceback (most recent call last): 1.loadmat(’octave_struct. with ﬁelds named for the struct ﬁelds.0 >>> oct_struct = mat_contents[’my_struct’] >>> print oct_struct. it’s more convenient to load the MATLAB structs as python objects rather than numpy structured arrarys .0] >>> print val ([[1.0’. ’__version__’: ’1.]] >>> print val. MATLAB structs come back as numpy structured arrays.0]. squeeze_me=True) >>> oct_struct = mat_contents[’my_struct’] >>> oct_struct.0]]. try this: >>> mat_contents = sio.shape # but no . [[2. and we replicate that when we read into Scipy.0). ’|O8’). You can see the ﬁeld names in the dtype output above. ’__header__’: ’MATLAB 5.field1 array([[ 1. (’field2’.0] and: octave:13> size(my_struct) ans = 1 1 So.SciPy Reference Guide. 1) >>> val = oct_struct[0.mat’) >>> print mat_contents {’my_struct’: array([[([[1.]] >>> print val[’field2’] [[ 2. Note also: >>> val = oct_struct[0.it can make the access syntax in python a bit more similar to that in MATLAB. dtype=[(’field1’. ’|O8’)] In this version of Scipy (0.loadmat(’octave_struct.10.loadmat(’octave_struct. >>> mat_contents = sio.]]) struct_as_record=False works nicely with squeeze_me: >>> mat_contents = sio.mat’.0]]) >>> print val[’field1’] [[ 1. In order to do this.0rc1 field1 = field2 = } 1 2 octave:12> save -6 octave_struct.loadmat(’octave_struct.shape () Sometimes. ’|O8’). use the struct_as_record=False parameter to loadmat.dtype [(’field1’. squeeze_me=True) >>> oct_struct = mat_contents[’my_struct’] >>> oct_struct. File IO (scipy. the struct array must be at least 2D.mat’. (’field2’.shape (1.

matlab. 3]} my_cells = { [1. ’f8’). dtype=dt) >>> print arr [(0. One simple method is to use dicts: >>> a_dict = {’field1’: 0.mat’) >>> oct_cells = mat_contents[’my_cells’] >>> print oct_cells. and that is how we load them into numpy.10.mat my_cells 3 1 Back to Python: >>> mat_contents = sio.2] = 2 } octave:15> save -6 octave_cells. {’a_dict’: a_dict}) loaded as: octave:21> load saved_struct octave:22> a_dict a_dict = { field2 = a string field1 = 0.mio5_params. In fact they are most similar to numpy object arrays.0.0.1] = [1. ’’)] >>> arr[0][’f1’] = 0.loadmat(’octave_cells. Release 0. ’field2’: ’a string’} >>> sio.0rc1 File "<stdin>".savemat(’saved_struct.mat’.0 Saving struct arrays can be done in various ways. (’f2’.mat’. line 1. octave:14> my_cells = {1. ’’) (0.field1 1.50000 } You can also save structs back again to MATLAB (or Octave in our case) like this: >>> dt = [(’f1’.zeros((2. in <module> AttributeError: ’mat_struct’ object has no attribute ’shape’ >>> print type(oct_struct) <class ’scipy. ’S10’)] >>> arr = np.SciPy Reference Guide. SciPy Tutorial .io. {’arr’: arr}) MATLAB cell arrays Cell arrays in MATLAB are rather like python lists.5 >>> arr[0][’f2’] = ’python’ >>> arr[1][’f1’] = 99 >>> arr[1][’f2’] = ’not perl’ >>> sio.mat_struct’> >>> print oct_struct.). in the sense that the elements in the arrays can contain any type of MATLAB object.dtype object 98 Chapter 1.5. [2.savemat(’np_struct_arr.

>>> val = oct_cells[0,0]
>>> print val
[[ 1.]]
>>> print val.dtype
float64

Saving to a MATLAB cell array just involves making a numpy object array:

>>> obj_arr = np.zeros((2,), dtype=np.object)
>>> obj_arr[0] = 1
>>> obj_arr[1] = 'a string'
>>> print obj_arr
[1 a string]
>>> sio.savemat('np_cells.mat', {'obj_arr': obj_arr})

octave:16> load np_cells.mat
octave:17> obj_arr
obj_arr =
{
  [1,1] = 1
  [2,1] = a string
}

1.13.2 IDL files

readsav(file_name[, idict, python_dict, ...])     Read an IDL .sav file

1.13.3 Matrix Market files

mminfo(source)                                    Queries the contents of the Matrix Market file 'filename'
mmread(source)                                    Reads the contents of a Matrix Market file 'filename' into a matrix
mmwrite(target, a[, comment, field, precision])   Writes the sparse or dense matrix A to a Matrix Market formatted file

1.13.4 Other

save_as_module([file_name, data])                 Save the dictionary "data" into a module and shelf named save

1.13.5 Wav sound files (scipy.io.wavfile)

read(file)                                        Return the sample rate (in samples/sec) and data from a WAV file
write(filename, rate, data)                       Write a numpy array as a WAV file

1.13.6 Arff files (scipy.io.arff)

Module to read ARFF files, which are the standard data format for WEKA. ARFF is a text file format which supports numerical, string and data values. The format can also represent missing data and sparse data.
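The ARFF layout just described (a header of @attribute declarations followed by comma-separated @data rows) is simple enough to sketch by hand. The reader below is a toy illustration only - scipy.io.arff.loadarff is the real, far more complete parser - but it shows how the declared attributes map onto a numpy structured array. (Modern Python 3/numpy syntax is used here, unlike the Python 2 sessions in this guide.)

```python
import numpy as np
from io import StringIO

content = """\
@relation foo
@attribute width numeric
@attribute height numeric
@attribute color {red,green,blue,yellow,black}
@data
5.0,3.25,blue
4.5,3.75,green
3.0,4.00,red
"""

# Toy parse: skip header/blank lines, split each @data row into fields.
rows = []
for line in StringIO(content):
    line = line.strip()
    if line and not line.startswith('@'):
        w, h, c = line.split(',')
        rows.append((float(w), float(h), c))

# The same kind of structured dtype that loadarff reports for this file.
data = np.array(rows, dtype=[('width', '<f8'), ('height', '<f8'), ('color', 'S6')])
print(data['width'])
```

Field access then works exactly as with the array `loadarff` returns: `data['color']` is a column of byte strings, `data['width']` a float column.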

See the WEKA website for more details about arff format and available datasets.

Examples

>>> from scipy.io import arff
>>> from cStringIO import StringIO
>>> content = """
... @relation foo
... @attribute width  numeric
... @attribute height numeric
... @attribute color  {red,green,blue,yellow,black}
... @data
... 5.0,3.25,blue
... 4.5,3.75,green
... 3.0,4.00,red
... """
>>> f = StringIO(content)
>>> data, meta = arff.loadarff(f)
>>> data
array([(5.0, 3.25, 'blue'), (4.5, 3.75, 'green'), (3.0, 4.0, 'red')],
      dtype=[('width', '<f8'), ('height', '<f8'), ('color', '|S6')])
>>> meta
Dataset: foo
        width's type is numeric
        height's type is numeric
        color's type is nominal, range is ('red', 'green', 'blue', 'yellow', 'black')

loadarff(f)    Read an arff file

1.13.7 Netcdf (scipy.io.netcdf)

netcdf_file(filename[, mode, mmap, version])    A file object for NetCDF data; allows reading of NetCDF files (version of pupynere package)

1.14 Weave (scipy.weave)

1.14.1 Outline

Contents

• Weave (scipy.weave)
  – Outline
  – Introduction
  – Requirements
  – Installation
  – Testing
    * Testing Notes:
  – Benchmarks
  – Inline
    * More with printf
    * More examples
      · Binary search
      · Dictionary Sort
      · NumPy – cast/copy/transpose
      · wxPython
    * Keyword Option
    * Inline Arguments
    * Distutils keywords
      · Keyword Option Examples
      · Returning Values
      · The issue with locals()
      · A quick look at the code
    * Technical Details
    * Passing Variables in/out of the C/C++ code
    * Type Conversions
      · NumPy Argument Conversion
      · String, List, Tuple, and Dictionary Conversion
      · File Conversion
      · Callable, Instance, and Module Conversion
      · Customizing Conversions
    * The Catalog
      · Function Storage
      · Catalog search paths and the PYTHONCOMPILED variable
  – Blitz
    * Requirements
    * Limitations
    * NumPy efficiency issues: What compilation buys you
    * The Tools
      · Parser
      · Blitz and NumPy
    * Type definitions and coercion
    * Cataloging Compiled Functions
    * Checking Array Sizes
    * Creating the Extension Module
  – Extension Modules
    * A Simple Example
    * Fibonacci Example
  – Customizing Type Conversions
  – Type Factories
  – Things I wish weave did

1.14.2 Introduction

The scipy.weave (below just weave) package provides tools for including C/C++ code within Python code. This offers both another level of optimization to those who need it, and an easy way to modify and extend any supported extension libraries such as wxPython and hopefully VTK soon. Inlining C/C++ code within Python generally results in speed-ups of 1.5x to 30x over algorithms written in pure Python (however, it is also possible to slow things down). Generally, algorithms that require a large number of calls to the Python API don't benefit as much from the conversion to C/C++ as algorithms that have inner loops completely convertible to C.

There are three basic ways to use weave. The weave.inline() function executes C code directly within Python, and weave.blitz() translates Python NumPy expressions to C++ for fast execution. blitz() was the original reason weave was built. For those interested in building extension libraries, the ext_tools module provides classes for building extension modules within Python.

Most of weave's functionality should work on Windows and Unix, although some of its functionality requires gcc or a similarly modern C++ compiler that handles templates well. Up to now, most testing has been done on Windows 2000 with Microsoft's C++ compiler (MSVC) and with gcc (mingw32 2.95.2 and 2.95.3-6). All tests also pass on Linux (RH 7.1 with gcc 2.96), and I've had reports that it works on Debian also (thanks Pearu).

The inline and blitz provide new functionality to Python (although I've recently learned about the PyInline project which may offer similar functionality to inline). On the other hand, tools for building Python extension modules already exist (SWIG, SIP, pycpp, CXX, and others). As of yet, I'm not sure where weave fits in this spectrum. It is closest in flavor to CXX in that it makes creating new C/C++ extension modules pretty easy. However, if you're wrapping a gaggle of legacy functions or classes, SWIG and friends are definitely the better choice. weave is set up so that you can customize how Python types are converted to C types in weave. This is great for inline(), but, for wrapping legacy code, it is more flexible to specify things the other way around - that is, how C types map to Python types. This weave does not do. I guess it would be possible to build such a tool on top of weave, but with good tools like SWIG around, I'm not sure the effort produces any new capabilities. Things like function overloading are probably easily implemented in weave and it might be easier to mix Python/C code in function calls, but nothing beyond this comes to mind. So, if you're wrapping legacy code, stick with SWIG.

The next several sections give the basics of how to use weave. We'll discuss what's happening under the covers in more detail later on. Serious users will need to at least look at the type conversion section to understand how Python variables map to C/C++ types and how to customize this behavior. One other note: if you don't know C or C++, then these docs are probably of very little help to you. Further, it'd be helpful if you know something about writing Python extensions. weave does quite a bit for you, but for anything complex, you'll need to do some conversions, reference counting, etc.

Note: weave is actually part of the SciPy package. However, it also works fine as a standalone package (you can install from scipy/weave with python setup.py install). The examples here are given as if it is used as a stand-alone package. If you are using it from within scipy, you can use from scipy import weave and the examples will work identically.

1.14.3 Requirements

• Python
  I use 2.1.1. Probably 2.0 or higher should work.

• C++ compiler
  weave uses distutils to actually build extension modules, so it uses whatever compiler was originally used to build Python. weave itself requires a C++ compiler. If you used a C++ compiler to build Python, you're probably fine.

  On Unix gcc is the preferred choice because I've done a little testing with it. All testing has been done with gcc, but I expect the majority of compilers should work for inline and ext_tools. The one issue I'm not sure

about is that I've hard coded things so that compilations are linked with the stdc++ library. Is this standard across Unix compilers, or is this a gcc-ism?

For blitz(), you'll need gcc as the MSVC compiler doesn't handle templates well, and you'll need a reasonably recent version of gcc: 2.95.2 works on windows and 2.96 looks fine on Linux. Other versions are likely to work. My advice is to use gcc for now unless you're willing to tinker with the code some. It's likely that KAI's C++ compiler and maybe some others will work, but I haven't tried. On Windows, either MSVC or gcc (mingw32) should work. I have not tried Cygwin, so please report success if it works for you.

• NumPy
  The Python NumPy module is required for blitz() to work and for numpy.distutils, which is used by weave.

1.14.4 Installation

There are currently two ways to get weave. First, weave is part of SciPy and installed automatically (as a subpackage) whenever SciPy is installed. Second, since weave is useful outside of the scientific community, it has been set up so that it can be used as a stand-alone module. The stand-alone version can be downloaded from here; instructions for installing should be found there as well, along with a setup.py file to simplify installation.

1.14.5 Testing

Once weave is installed, fire up python and run its unit tests.

>>> import weave
>>> weave.test()
runs long time... spews tons of output and a few warnings
.
.
.
----------------------------------------------------------------------
Ran 184 tests in 158.418s
OK
>>>

This takes a while, usually several minutes. On Unix with remote file systems, I've had it take 15 or so minutes. In the end, it should run about 180 tests and spew some speed results along the way. If you get errors, they'll be reported at the end of the output. Please report errors that you find. Some tests are known to fail at this point.

If you only want to test a single module of the package, you can do this by running test() for that specific module.

>>> import weave.scalar_spec
>>> weave.scalar_spec.test()
.......
----------------------------------------------------------------------
Ran 7 tests in 23.284s

Testing Notes:

• Windows 1
  I've had some tests fail on windows machines where I have msvc, gcc-2.95.2 (in c:\gcc-2.95.2), and gcc-2.95.3-6 (in c:\gcc) all installed. My environment has c:\gcc in the path and does not have c:\gcc-2.95.2 in the path. The test process runs very smoothly until the end where several tests using gcc fail with cpp0 not found by g++. If I check os.environ['PATH'], it still has c:\gcc first in it and is not corrupted (msvc/distutils messes with the environment variables, so we have to undo its work in some places). If I check os.system('gcc -v') before running tests, I get gcc-2.95.3-6. If I check after running tests (and after failure), I get gcc-2.95.2. ??huh??. Testing with the gcc-2.95.2 installation always works. If anyone else sees this, let me know - it may just be a quirk on my machine (unlikely). Anyone know how to fix this?

• Windows 2
  If you run the tests from PythonWin or some other GUI tool, you'll get a ton of DOS windows popping up periodically as weave spawns the compiler multiple times. Very annoying. Anyone know how to fix this?

• wxPython
  wxPython tests are not enabled by default because importing wxPython on a Unix machine without access to a X-term will cause the program to exit. Anyone know of a safe way to detect whether wxPython can be imported and whether a display exists on a machine?

1.14.6 Benchmarks

This section has not been updated from old scipy weave and Numeric.

This section has a few benchmarks - that's all people want to see anyway, right? These are mostly taken from running files in the weave/example directory and also from the test scripts. Without more information about what the tests actually do, their value is limited. Still, they are here for the curious. Look at the example scripts for more specifics about what problem was actually solved by each run. These examples are run under windows 2000 using Microsoft Visual C++ and python2.1 on a 850 MHz PIII laptop with 320 MB of RAM. Speed up is the improvement (degradation) factor of weave compared to conventional Python functions. The blitz() comparisons are shown compared to NumPy.

Table 1.1: inline and ext_tools

Algorithm                Speed up
binary search            1.50
fibonacci (recursive)    82.10
fibonacci (loop)         9.17
return None              0.14
map                      1.20
dictionary sort          2.54
vector quantization      37.40

Table 1.2: blitz – double precision

Algorithm                               Speed up
a = b + c  512x512                      3.05
a = b + c + d  512x512                  4.59
5 pt avg. filter, 2D Image  512x512     9.01
Electromagnetics (FDTD)  100x100x100    8.61

The benchmarks shown blitz in the best possible light. NumPy (at least on my machine) is significantly worse for double precision than it is for single precision calculations. If you're interested in single precision results, you can pretty much divide the double precision speed up by 3 and you'll be close.
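Timings like the tables above are easy to reproduce for the pure-Python side with the standard timeit module (the absolute numbers, of course, depend entirely on your machine and interpreter; the list size and statement here are arbitrary illustrations):

```python
import timeit

# Time a simple elementwise operation in pure Python -- the kind of
# inner loop the weave examples compile to C.  The minimum over the
# repeats is the usual statistic to report.
setup = "a = list(range(10000))"
stmt = "[x + 1 for x in a]"
best = min(timeit.repeat(stmt, setup=setup, repeat=3, number=100))
print("best of 3: %f seconds for 100 runs" % best)
```

Running the same measurement over a compiled version of the loop and dividing the two times gives the "Speed up" column of a table like Table 1.1.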

1.14.7 Inline

inline() compiles and executes C/C++ code on the fly. Variables in the local and global Python scope are also available in the C/C++ code. Values are passed to the C/C++ code by assignment much like variables are passed into a standard Python function. Values are returned from the C/C++ code through a special argument called return_val. Also, the contents of mutable objects can be changed within the C/C++ code and the changes remain after the C code exits and returns to Python. (more on this later)

Here's a trivial printf example using inline():

>>> import weave
>>> a = 1
>>> weave.inline('printf("%d\\n",a);', ['a'])
1

In this, its most basic form, inline(c_code, var_list) requires two arguments. c_code is a string of valid C/C++ code. var_list is a list of Python variable names that are passed from Python into C/C++. Here we have a simple printf statement that writes the Python variable a to the screen. The first time you run this, there will be a pause while the code is written to a .cpp file, compiled into an extension module, loaded into Python, cataloged for future use, and executed. On windows (850 MHz PIII), this takes about 1.5 seconds when using Microsoft's C++ compiler (MSVC) and 6-12 seconds using gcc (mingw32 2.95.2). All subsequent executions of the code will happen very quickly because the code only needs to be compiled once. If you kill and restart the interpreter and then execute the same code fragment again, there will be a much shorter delay in the fractions of seconds range. This is because weave stores a catalog of all previously compiled functions in an on-disk cache. When it sees a string that has been compiled, it loads the already compiled module and executes the appropriate function.

Note: If you try the printf example in a GUI shell such as IDLE, PythonWin, PyShell, etc., you're unlikely to see the output. This is because the C code is writing to stdout, instead of to the GUI window. This doesn't mean that inline doesn't work in these environments - it only means that standard out in C is not the same as the standard out for Python in these cases. Non input/output functions will work as expected.

Although effort has been made to reduce the overhead associated with calling inline, it is still less efficient for simple code snippets than using equivalent Python code. The simple printf example is actually slower by 30% or so than using the Python print statement. And it is not difficult to create code fragments that are 8-10 times slower using inline than equivalent Python. However, for more complicated algorithms, the speed-up can be worthwhile - anywhere from 1.5-30 times faster. Algorithms that have to manipulate Python objects (sorting a list) usually only see a factor of 2 or so improvement. Algorithms that are highly computational or manipulate NumPy arrays can see much larger improvements. The examples/vq.py file shows a factor of 30 or more improvement on the vector quantization algorithm that is used heavily in information theory and classification problems.

More with printf

MSVC users will actually see a bit of compiler output that distutils does not suppress the first time the code executes:

>>> weave.inline(r'printf("%d\n",a);',['a'])
sc_e013937dbc8c647ac62438874e5795131.cpp
Creating library C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp
\Release\sc_e013937dbc8c647ac62438874e5795131.lib and
object C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_e013937dbc8c647ac62438874e
1

Nothing bad is happening, it's just a bit annoying. *Anyone know how to turn this off?*

This example also demonstrates using 'raw strings'. The r preceding the code string in the last example denotes that this is a 'raw string'. In raw strings, the backslash character is not interpreted as an escape character, and so it isn't necessary to use a double backslash to indicate that the '\n' is meant to be interpreted in the C printf statement

instead of by Python. If your C code contains a lot of strings and control characters, raw strings might make things easier. Most of the time, however, standard strings work just as well.

The printf statement in these examples is formatted to print out integers. What happens if a is a string? inline will happily compile a new version of the code to accept strings as input, and execute the code. The result?

>>> a = 'string'
>>> weave.inline(r'printf("%d\n",a);',['a'])
32956972

In this case, the result is non-sensical, but also non-fatal. In other situations it might produce a compile time error because a is required to be an integer at some point in the code, or it could produce a segmentation fault. It's possible to protect against passing inline arguments of the wrong data type by using asserts in Python.

>>> a = 'string'
>>> def protected_printf(a):
...     # do a little type checking in Python
...     assert(type(a) == type(1))
...     weave.inline(r'printf("%d\n",a);', ['a'])
>>> protected_printf(1)
1
>>> protected_printf('string')
AssertError...

For printing strings, the format statement needs to be changed. Also, weave doesn't convert strings to char*. Instead it uses the CXX Py::String type, so you have to do a little more work. Here we convert it to a C++ std::string and then ask for the char* version.

>>> a = 'string'
>>> weave.inline(r'printf("%s\n",std::string(a).c_str());',['a'])
string

XXX This is a little convoluted. Perhaps strings should convert to std::string objects instead of CXX objects. Or maybe to char*.

As in this case, C/C++ code fragments often have to change to accept different types. For the given printing task, however, C++ streams provide a way of writing a single statement that works for both integers and strings. By default, the stream objects live in the std (standard) namespace and thus require the use of std::.

>>> weave.inline('std::cout << a << std::endl;',['a'])
1
>>> a = 'string'
>>> weave.inline('std::cout << a << std::endl;',['a'])
string

Examples using printf and cout are included in examples/print_example.py.

More examples

This section shows several more advanced uses of inline. It includes a few algorithms from the Python Cookbook that have been re-written in inline C to improve speed as well as a couple examples using NumPy and wxPython.

Binary search

Let's look at the example of searching a sorted list of integers for a value. For inspiration, we'll use Kalle Svensson's binary_search() algorithm from the Python Cookbook. His recipe follows:

def binary_search(seq, t):
    min = 0; max = len(seq) - 1
    while 1:
        if max < min:
            return -1
        m = (min + max) / 2
        if seq[m] < t:
            min = m + 1
        elif seq[m] > t:
            max = m - 1
        else:
            return m

This Python version works for arbitrary Python data types. The C version below is specialized to handle integer values. There is a little type checking done in Python to assure that we're working with the correct data types before heading into C. The variables seq and t don't need to be declared because weave handles converting and declaring them in the C code. All other temporary variables such as min, max, etc. must be declared - it is C after all. Here's the new mixed Python/C function:

def c_int_binary_search(seq, t):
    # do a little type checking in Python
    assert(type(t) == type(1))
    assert(type(seq) == type([]))

    # now the C code
    code = """
           #line 29 "binary_search.py"
           int val, m, min = 0;
           int max = seq.length() - 1;
           PyObject *py_val;
           for(;;)
           {
               if (max < min)
               {
                   return_val = Py::new_reference_to(Py::Int(-1));
                   break;
               }
               m = (min + max) / 2;
               val = py_to_int(PyList_GetItem(seq.ptr(), m), "val");
               if (val < t)
                   min = m + 1;
               else if (val > t)
                   max = m - 1;
               else
               {
                   return_val = Py::new_reference_to(Py::Int(m));
                   break;
               }
           }
           """
    return inline(code, ['seq', 't'])

We have two variables seq and t passed in. t is guaranteed (by the assert) to be an integer. seq is a Python list. By default, it is translated to a CXX list object. Full documentation for the CXX library can be found at its website. The basics are that CXX provides C++ class equivalents for Python objects that simplify, or at least object orientify, working with Python objects in C/C++. For example, seq.length() returns the length of the list. A little more about CXX and its class methods, etc. is in the **type conversions** section. Python integers are converted to C int types in the transition from Python to C.
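The Cookbook recipe above has the same return-index-or-minus-one contract as a two-line wrapper over the standard library's bisect module, whose search is itself implemented in C. A quick sketch (Python 3 syntax, unlike the Python 2 sessions in this guide):

```python
from bisect import bisect_left

def bisect_binary_search(seq, t):
    # Index of t in the sorted list seq, or -1 if absent -- the same
    # contract as the binary_search() recipe above.
    i = bisect_left(seq, t)
    if i != len(seq) and seq[i] == t:
        return i
    return -1

print(bisect_binary_search([1, 3, 5, 7, 9], 7))   # 3
print(bisect_binary_search([1, 3, 5, 7, 9], 4))   # -1
```

This is the "bisect" entry that the timing comparison in this section measures against the hand-written Python and C versions.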

There are two main differences between the Python and C versions. The first is the setting of return_val instead of directly returning from the C code with a return statement. return_val is an automatically defined variable of type PyObject* that is returned from the C code back to Python. You'll have to handle reference counting issues when setting this variable. In this example, CXX classes and functions handle the dirty work. All CXX functions and classes live in the namespace Py::. The following code converts the integer m to a CXX Int() object and then to a PyObject* with an incremented reference count using Py::new_reference_to().

return_val = Py::new_reference_to(Py::Int(m));

The second big difference shows up in the retrieval of integer values from the Python list. The simple Python seq[i] call balloons into a C Python API call to grab the value out of the list and then a separate call to py_to_int() that converts the PyObject* to an integer. py_to_int() includes both a NULL check and a PyInt_Check() call as well as the conversion call. If either of the checks fail, an exception is raised. The entire C++ code block is executed within a try/catch block that handles exceptions much like Python does. This removes the need for most error checking code. It is worth noting that CXX lists do have indexing operators that result in code that looks much like Python. However, the overhead in using them appears to be relatively high, so the standard Python API was used on seq.ptr(), which is the underlying PyObject* of the List object.

The #line directive that is the first line of the C code block isn't necessary, but it's nice for debugging. If the compilation fails because of a syntax error in the code, the error will be reported as an error in the Python file "binary_search.py" with an offset from the given line number (29 here).

So what was all our effort worth in terms of efficiency? Well, not a lot in this case. The examples/binary_search.py file runs both Python and C versions of the functions as well as using the standard bisect module. If we run it on a 1 million element list and run the search 3000 times (for 0-2999), here are the results we get:

C:\home\ej\wrk\scipy\weave\examples> python binary_search.py
Binary search for 3000 items in 1000000 length list of integers:
speed in python: 0.159999966621
speed of bisect: 0.121000051498
speed up: 1.32
speed in c: 0.110000014305
speed up: 1.45
speed in c(no asserts): 0.0900000333786
speed up: 1.78

So, we get roughly a 50-75% improvement depending on whether we use the Python asserts in our C version. If we move down to searching a 10000 element list, the advantage evaporates. Even smaller lists might result in the Python version being faster. I'd like to say that moving to NumPy lists (and getting rid of the GetItem() call) offers a substantial speed up, but my preliminary efforts didn't produce one. I think the log(N) algorithm is to blame. Because the algorithm is nice, there just isn't much time spent computing things, so moving to C isn't that big of a win. If there are ways to reduce conversion overhead of values, this may improve the C/Python speed up. Anyone have other explanations or faster code, please let me know.

Note: CXX uses templates and therefore may be a little less portable than another alternative by Gordon McMillan called SCXX, which was inspired by CXX. It doesn't use templates so it should compile faster and be more portable. SCXX has a few less features, but it appears to me that it would mesh with the needs of weave quite well. Hopefully xxx_spec files will be written for SCXX in the future, and we'll be able to compare on a more empirical basis. Both sets of spec files will probably stick around; it's just a question of which becomes the default.

Dictionary Sort

The demo in examples/dict_sort.py is another example from the Python Cookbook. This submission, by Alex Martelli, demonstrates how to return the values from a dictionary sorted by their keys:

def sortedDictValues3(adict):
    keys = adict.keys()
    keys.sort()
    return map(adict.get, keys)

Alex provides 3 algorithms and this is the 3rd and fastest of the set. The C version of this same algorithm follows:

def c_sort(adict):
    assert(type(adict) == type({}))
    code = """
           #line 21 "dict_sort.py"
           Py::List keys = adict.keys();
           Py::List items(keys.length());
           keys.sort();
           PyObject* item = NULL;
           for(int i = 0; i < keys.length(); i++)
           {
               item = PyList_GET_ITEM(keys.ptr(), i);
               item = PyDict_GetItem(adict.ptr(), item);
               Py_XINCREF(item);
               PyList_SetItem(items.ptr(), i, item);
           }
           return_val = Py::new_reference_to(items);
           """
    return inline_tools.inline(code, ['adict'], verbose=1)

Like the original Python function, the C++ version can handle any Python dictionary regardless of the key/value pair types. It uses CXX objects for the most part to declare Python types in C++, but uses Python API calls to manipulate their contents. The C++ version, while more complicated, is about a factor of 2 faster than Python:

C:\home\ej\wrk\scipy\weave\examples> python dict_sort.py
Dict sort of 1000 items for 300 iterations:
speed in python: 0.319999933243
[0, 1, 2, 3, 4]
speed in c: 0.151000022888
speed up: 2.12
[0, 1, 2, 3, 4]

NumPy – cast/copy/transpose

CastCopyTranspose is a function called quite heavily by Linear Algebra routines in the NumPy library. It's needed in part because of the row-major memory layout of multi-dimensional Python (and C) arrays vs. the column-major order of the underlying Fortran algorithms. For small matrices (say 100x100 or less), a significant portion of the common routines such as LU decomposition or singular value decomposition are spent in this setup routine. This shouldn't happen. Here is the Python version of the function using standard NumPy operations.

def _castCopyAndTranspose(type, a):
    if a.typecode() == type:
        cast_array = copy.copy(NumPy.transpose(a))
    else:
        cast_array = copy.copy(NumPy.transpose(a).astype(type))
    return cast_array

And the following is an inline C version of the same function:

from weave.blitz_tools import blitz_type_factories
from weave import scalar_spec
from weave import inline

def _cast_copy_transpose(type, a_2d):

    assert(len(shape(a_2d)) == 2)
    new_array = zeros(shape(a_2d), type)
    NumPy_type = scalar_spec.NumPy_to_blitz_type_mapping[type]
    code = \
    """
    for(int i = 0; i < _Na_2d[0]; i++)
        for(int j = 0; j < _Na_2d[1]; j++)
            new_array(i,j) = (%s) a_2d(j,i);
    """ % NumPy_type
    inline(code, ['new_array', 'a_2d'],
           type_factories=blitz_type_factories, compiler='gcc')
    return new_array

This example uses blitz++ arrays instead of the standard representation of NumPy arrays so that indexing is simpler to write. This is accomplished by passing in the blitz++ "type factories" to override the standard Python to C++ type conversions. Blitz++ arrays allow you to write clean, fast code, but they also are sloooow to compile (20 seconds or more for this snippet). This is why they aren't the default type used for Numeric arrays (and also because most compilers can't compile blitz arrays). inline() is also forced to use 'gcc' as the compiler because the default compiler on Windows (MSVC) will not compile blitz code. ('gcc' I think will use the standard compiler on a Unix machine instead of explicitly forcing gcc (check this))

Comparisons of the Python vs inline C++ code show a factor of 3 speed up. Also shown are the results of an "inplace" transpose routine that can be used if the output of the linear algebra routine can overwrite the original matrix (this is often appropriate). This provides another factor of 2 improvement.

#C:\home\ej\wrk\scipy\weave\examples> python cast_copy_transpose.py
# Cast/Copy/Transposing (150,150) array 1 times
# speed in python: 0.870999932289
# speed in c: 0.25
# speed up: 3.48
# inplace transpose c: 0.129999995232
# speed up: 6.70

wxPython

inline knows how to handle wxPython objects. That's nice in and of itself, but it also demonstrates that the type conversion mechanism is reasonably flexible. Chances are, it won't take a ton of effort to support special types you might have. The examples/wx_example.py borrows the scrolled window example from the wxPython demo, except that it mixes inline C code in the middle of the drawing function.

def DoDrawing(self, dc):
    red = wxNamedColour("RED")
    blue = wxNamedColour("BLUE")
    grey_brush = wxLIGHT_GREY_BRUSH
    code = \
    """
    #line 108 "wx_example.py"
    dc->BeginDrawing();
    dc->SetPen(wxPen(*red, 4, wxSOLID));
    dc->DrawRectangle(5, 5, 50, 50);
    dc->SetPen(wxPen(*blue, 4, wxSOLID));
    dc->SetBrush(*grey_brush);
    dc->DrawRectangle(15, 15, 50, 50);
    """
    inline(code, ['dc', 'red', 'blue', 'grey_brush'])

    dc.SetFont(wxFont(14, wxSWISS, wxNORMAL, wxNORMAL))
    dc.SetTextForeground(wxColour(0xFF, 0x20, 0xFF))
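The operation being timed above is small enough to restate in today's NumPy API (the weave and old Numeric calls shown in this section no longer exist). This is only the plain-Python side of the comparison, not the compiled version:

```python
import numpy as np

def cast_copy_transpose(dtype, a):
    # Transpose, cast to the requested dtype, and force a fresh
    # C-contiguous copy -- the row-major vs. column-major fix-up
    # described above.
    return np.ascontiguousarray(a.T.astype(dtype))

a = np.arange(6.0).reshape(2, 3)
b = cast_copy_transpose(np.float32, a)
print(b.shape)   # (3, 2)
```

np.ascontiguousarray guarantees the result owns contiguous row-major memory, which is what the Fortran-backed linear algebra setup step needs a fresh copy of.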

verbose = 0. The following is a formatted cut/paste of the argument section of inline’s doc-string. force = 0. 4)) dc. 1. the keyword arguments for distutils extension modules are accepted to specify extra information needed for compiling. It should not specify a return statement. are not binary compatible for C++. A string of valid C++ code.14.. On windows. This is really only useful for debugging. A list of Python variable names that should be transferred from Python into the C/C++ code. That should sufﬁciently scare people into not even looking at this. it is a dictionary of values that should be used as the global scope for the C/C++ code. Also. to specify that inline use the same compiler that was used to build wxPython to be on the safe side. If global_dict is not speciﬁed the global dictionary of the calling function is used. Weave (scipy. support_code = None.SciPy Reference Guide. There isn’t any beneﬁt. dictionary.SetPen(wxPen(wxNamedColour(’VIOLET’). Also. You might want to use this if you have a computationally intensive loop in your drawing code that you want to speed up. Instead it should assign results that need to be returned to Python in the return_val. local_dict optional. On windows. Inline Arguments code string. the C++ code is compiled every time inline is called. It also takes keyword arguments that are passed to distutils as compiler options. I think there is a way around this. There isn’t currently a way to learn this info from the library – you just have to know. dictionary. it’ll only understand the values understoof by distutils. If this isn’t available. **kw): inline has quite a few options as listed below. 65) dc. In fact. If speciﬁed. There. auto_downcast=1. it understands ‘msvc’ and ‘gcc’ as well as all the compiler names understood by distutils. If speciﬁed. while binary compatible in C. The name of compiler to use when compiling. 65+te[1]) . it is a dictionary of values that should be used as the local scope for the C/C++ code. 
It explains all of the variables. If 1.py for your machine’s conﬁguration to point at the correct directories etc. 65+te[1]. and probably only useful if you’re editing support_code a lot. you’ll have to use the MSVC compiler if you use the standard wxPython DLLs distributed by Robin Dunn.. 60. string. 60+te[0]. :) Keyword Option The basic deﬁnition of the inline() function has a slew of optional variables. On Unix. Release 0.. some of the Python calls to wx objects were just converted to C++ calls. arg_names list of strings. it just demonstrates the capabilities. customize=None. On windows.local_dict = None. at least on the windows platform. global_dict optional. 0 or 1. Thats because MSVC and gcc.10. If local_dict is not speciﬁed the local dictionary of the calling function is used. its probably best. no matter what platform you’re on.DrawText("Hello World".0rc1 te = dc. force optional. it looks for mingw32 (the gcc compiler). type_factories = None. but I haven’t found it yet – I get some linking errors dealing with wxString. (I should add ‘gcc’ though to this). the compiler defaults to the Microsoft C++ compiler. One ﬁnal note.GetTextExtent("Hello World") dc. default 0. You’ll probably have to tweak weave/wx_spec.weave) 111 . global_dict = None.py or weave/wx_info. you’ll need to install the wxWindows libraries and link to them.arg_names. compiler=’’. Here.. def inline(code. Some examples using various options will follow. compiler optional.DrawLine(5.
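The local_dict/global_dict defaulting described above -- falling back to the calling function's own scopes -- can be sketched in pure Python. This is only an illustration of the idea, not weave's actual implementation; the helper name grab_variables is made up:

```python
import sys

def grab_variables(arg_names, local_dict=None, global_dict=None):
    # Mimic inline()'s default: when no dictionaries are given,
    # fall back to the caller's local and global scopes.
    frame = sys._getframe(1)          # the function that called us
    if local_dict is None:
        local_dict = frame.f_locals
    if global_dict is None:
        global_dict = frame.f_globals
    values = {}
    for name in arg_names:
        if name in local_dict:
            values[name] = local_dict[name]
        elif name in global_dict:
            values[name] = global_dict[name]
        else:
            raise NameError("variable '%s' not found in either scope" % name)
    return values

def caller():
    a = 'string'
    return grab_variables(['a'])      # finds a in caller's locals

print(caller())                       # {'a': 'string'}
```

Passing an explicit dictionary overrides the frame lookup, just as passing local_dict to inline() does.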

support_code  optional. string. A string of valid C++ code declaring extra code that might be needed by your compiled function. This could be declarations of functions, classes, or structures.

customize  optional. base_info.custom_info object. An alternative way to specify support_code, headers, etc. needed by the function; see the weave.base_info module for more details. (Not sure this'll be used much.)

type_factories  optional. list of type specification factories. These guys are what convert Python data types to C/C++ data types. If you'd like to use a different set of type conversions than the default, specify them here. Look in the type conversions section of the main documentation for examples.

auto_downcast  optional. 0 or 1. default 1. This only affects functions that have NumPy arrays as input variables. Setting this to 1 will cause all floating point values to be cast as float instead of double if all the NumPy arrays are of type float. If even one of the arrays has type double or double complex, all variables maintain their standard types.

verbose  optional. 0, 1, or 2. default 0. Specifies how much information is printed during the compile phase of inlining code. 0 is silent (except on windows with msvc where it still prints some garbage). 1 informs you when compiling starts, finishes, and how long it took. 2 prints out the command lines for the compilation process and can be useful if you're having problems getting code to work. It's handy for finding the name of the .cpp file if you need to examine it. verbose has no effect if the compilation isn't necessary.

Distutils keywords

inline() also accepts a number of distutils keywords for controlling how the code is compiled. The following descriptions have been copied from Greg Ward's distutils.extension.Extension class doc-strings for convenience:

sources  [string] list of source filenames, relative to the distribution root (where the setup script lives), in Unix form (slash-separated) for portability. Source files may be C, C++, SWIG (.i), platform-specific resource files, or whatever else is recognized by the "build_ext" command as source for a Python extension. Note: The module_path file is always appended to the front of this list.

include_dirs  [string] list of directories to search for C/C++ header files (in Unix form for portability).

define_macros  [(name : string, value : string|None)] list of macros to define; each macro is defined using a 2-tuple, where 'value' is either the string to define it to or None to define it without a particular value (equivalent of "#define FOO" in source or -DFOO on Unix C compiler command line).

undef_macros  [string] list of macros to undefine explicitly.

library_dirs  [string] list of directories to search for C/C++ libraries at link time.

libraries  [string] list of library names (not filenames or paths) to link against.

runtime_library_dirs  [string] list of directories to search for C/C++ libraries at run time (for shared extensions, this is when the extension is loaded).

extra_objects  [string] list of extra files to link with (eg. object files not implied by 'sources', static library that must be explicitly specified, binary resource files, etc.).

extra_compile_args  [string] any extra platform- and compiler-specific information to use when compiling the source files in 'sources'. For platforms and compilers where "command line" makes sense, this is typically a list of command-line arguments, but for other platforms it could be anything.

extra_link_args  [string] any extra platform- and compiler-specific information to use when linking object files together to create the extension (or to create a new static Python interpreter). Similar interpretation as for 'extra_compile_args'.

export_symbols  [string] list of symbols to be exported from a shared extension. Not used on all platforms, and not generally necessary for Python extensions, which typically export exactly one symbol: "init" + extension_name.

Keyword Option Examples

We'll walk through several examples here to demonstrate the behavior of inline and also how the various arguments are used. In the simplest (most common) cases, code and arg_names are the only arguments that need to be specified. Here's a simple example run on a Windows machine that has Microsoft VC++ installed:

    >>> from weave import inline
    >>> a = 'string'
    >>> code = """
    ...        int l = a.length();

    ...        return_val = Py::new_reference_to(Py::Int(l));
    ...        """
    >>> inline(code, ['a'])
    Creating library C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f12.lib
    and object C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f12.exp
    6
    >>> inline(code, ['a'])
    6

When inline is first run, you'll notice that pause and some trash printed to the screen. The "trash" is actually part of the compiler's output that distutils does not suppress. The name of the extension file, sc_bighonkingnumber.cpp, is generated from the md5 check sum of the C/C++ code fragment. On the second call, inline() did not recompile the code because it found the compiled function in the persistent catalog of functions. There is a short pause as it looks up and loads the function, but it is much shorter than compiling would require.

Now kill the interpreter and restart, and run the same code with a different string:

    >>> from weave import inline
    >>> a = 'a longer string'
    >>> code = """
    ...        int l = a.length();
    ...        return_val = Py::new_reference_to(Py::Int(l));
    ...        """
    >>> inline(code, ['a'])
    15

Notice this time, the code fragment is not compiled since it already exists, and only the answer is returned. Because the example has already been compiled, the trash will not appear.

You can specify the local and global dictionaries if you'd like (much like exec or eval() in Python), but if they aren't specified, the "expected" ones are used -- i.e. the ones from the function that called inline(). This is accomplished through a little call frame trickery. Here is an example where the local_dict is specified using the same code example from above:

    >>> a = 'a longer string'
    >>> b = 'an even longer string'
    >>> my_dict = {'a': b}
    >>> inline(code, ['a'])
    15
    >>> inline(code, ['a'], my_dict)
    21

Every time the code is changed, inline does a recompile. However, changing any of the other options in inline does not force a recompile. The force option was added so that one could force a recompile when tinkering with other variables. In practice, it is just as easy to change the code by a single character (like adding a space some place) to force the recompile.

Note: It also might be nice to add some methods for purging the cache and on disk catalogs.

The following example demonstrates using gcc instead of the standard msvc compiler on windows using the same code fragment as above. (On Unix or windows machines with only gcc installed, gcc is used from the start -- on Unix, it'll probably use the same compiler that was used when compiling Python.) Because the example has already been compiled, the force=1 flag is needed to make inline() ignore the previously compiled version and recompile using gcc. The verbose flag is added to show what is printed out. I use verbose sometimes for debugging: when set to 2, it'll output all the information (including the name of the .cpp file) that you'd expect from running a make file. This is nice if you need to examine the generated code to see where things are going haywire. Note that error messages from failed compiles are printed to the screen even if verbose is set to 0.

    >>> inline(code, ['a'], compiler='gcc', verbose=2, force=1)
    running build_ext
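The md5-based naming scheme mentioned above can be sketched in a few lines. This is an illustration of the idea only (the exact name weave builds may differ); it shows why adding a single space to the code string is enough to force a recompile:

```python
import hashlib

def module_name(code):
    # Hypothetical reconstruction: derive a module name from the md5
    # check sum of the code fragment, so identical fragments map to
    # the same compiled extension file.
    return 'sc_' + hashlib.md5(code.encode('utf-8')).hexdigest()

code = "int l = a.length(); return_val = Py::new_reference_to(Py::Int(l));"
print(module_name(code))
# A one-character change yields a completely different name:
print(module_name(code + ' ') == module_name(code))   # False
```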

    building 'sc_86e98826b65b047ffd2cd5f479c627f13' extension
    c:\gcc-2.95.2\bin\g++.exe -mno-cygwin -mdll -O2 -w -Wstrict-prototypes
      -IC:\home\ej\wrk\scipy\weave -IC:\Python21\Include
      -c C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\sc_86e98826b65b047ffd2cd5f479c627f13.cpp
      -o C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.o
    skipping C:\home\ej\wrk\scipy\weave\CXX\cxxextensions.c
      (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxextensions.o up-to-date)
    skipping C:\home\ej\wrk\scipy\weave\CXX\cxxsupport.cxx
      (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxsupport.o up-to-date)
    skipping C:\home\ej\wrk\scipy\weave\CXX\IndirectPythonInterface.cxx
      (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\indirectpythoninterface.o up-to-date)
    skipping C:\home\ej\wrk\scipy\weave\CXX\cxx_extensions.cxx
      (C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxx_extensions.o up-to-date)
    writing C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.def
    c:\gcc-2.95.2\bin\dllwrap.exe --driver-name g++ -mno-cygwin -mdll -static
      --output-lib C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\libsc_86e98826b65b047ffd2cd5f479c627f13.a
      --def C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.def
      -s C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\sc_86e98826b65b047ffd2cd5f479c627f13.o
      C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxextensions.o
      C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxxsupport.o
      C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\indirectpythoninterface.o
      C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\temp\Release\cxx_extensions.o
      -LC:\Python21\libs -lpython21
      -o C:\DOCUME~1\eric\LOCALS~1\Temp\python21_compiled\sc_86e98826b65b047ffd2cd5f479c627f13.pyd
    15

That's quite a bit of output. verbose=1 just prints the compile time:

    >>> inline(code, ['a'], compiler='gcc', verbose=1, force=1)
    Compiling code...
    finished compiling (sec):  6.00800001621
    15

Note: I've only used the compiler option for switching between 'msvc' and 'gcc' on windows. It may have use on Unix also, but I don't know yet.

The support_code argument is likely to be used a lot. It allows you to specify extra code fragments such as function, structure or class definitions that you want to use in the code string. Note that changes to support_code do not force a recompile. The catalog only relies on code (for performance reasons) to determine whether recompiling is necessary. So, if you make a change to support_code, you'll need to alter code in some way or use the force argument to get the code to recompile. I usually just add some innocuous whitespace to the end of one of the lines in code somewhere. Here's an example of defining a separate method for calculating the string length:

    >>> from weave import inline
    >>> a = 'a longer string'
    >>> support_code = """
    ...                PyObject* length(Py::String a)
    ...                {
    ...                    int l = a.length();
    ...                    return Py::new_reference_to(Py::Int(l));
    ...                }
    ...                """
    >>> inline("return_val = length(a);", ['a'], support_code=support_code)
    15

customize is a left over from a previous way of specifying compiler options. It is a custom_info object that can
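The support_code gotcha described above -- the catalog keys only on code, so a changed support_code can silently serve a stale build -- can be modeled with a toy cache. Everything here is an assumed, simplified stand-in for weave's real machinery, not its actual code:

```python
# Toy catalog keyed on the code string alone (as the docs describe).
catalog = {}

def fake_compile(code, support_code):
    # Stand-in for the real compile step; the result depends on both strings.
    return 'built from %d chars' % (len(support_code) + len(code))

def fake_inline(code, support_code='', force=0):
    # Recompile only when forced or when `code` itself is unseen.
    if force or code not in catalog:
        catalog[code] = fake_compile(code, support_code)
    return catalog[code]

r1 = fake_inline('return_val = length(a);', support_code='int f();')
r2 = fake_inline('return_val = length(a);', support_code='int f(); int g();')
r3 = fake_inline('return_val = length(a);', support_code='int f(); int g();',
                 force=1)
print(r1 == r2)   # True: the support_code change went unnoticed
print(r1 == r3)   # False: force=1 rebuilt with the new support_code
```

This is exactly why the text recommends adding innocuous whitespace to code (changing the key) or passing force=1 after editing support_code.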

specify quite a bit of information about how a file is compiled. These info objects are the standard way of defining compile information for type conversion classes. However, I don't think they are as handy here, especially since we've exposed all the keyword arguments that distutils can handle. Between these keywords and the support_code option, I think customize may be obsolete. We'll see if anyone cares to use it. If not, it'll get axed in the next version.

The type_factories variable is important to people who want to customize the way arguments are converted from Python to C. We'll talk about this in the next chapter xx of this document when we discuss type conversions.

auto_downcast handles one of the big type conversion issues that is common when using NumPy arrays in conjunction with Python scalar values. If you have an array of single precision values and multiply that array by a Python scalar, the result is upcast to a double precision array because the scalar value is double precision. This is not usually the desired behavior because it can double your memory usage. auto_downcast goes some distance towards changing the casting precedence of arrays and scalars. If you're only using single precision arrays, it will automatically downcast all scalar values from double to single precision when they are passed into the C++ code. This is the default behavior. If you want all values to keep their default type, set auto_downcast to 0.

Returning Values

Python variables in the local and global scope transfer seamlessly from Python into the C++ snippets. And, if inline were to completely live up to its name, any modifications to variables in the C++ code would be reflected in the Python variables when control was passed back to Python. For example, the desired behavior would be something like:

    # THIS DOES NOT WORK
    >>> a = 1
    >>> weave.inline("a++;", ['a'])
    >>> a
    2

Instead you get:

    >>> a = 1
    >>> weave.inline("a++;", ['a'])
    >>> a
    1

Variables are passed into C++ as if you are calling a Python function. Python's calling convention is sometimes called "pass by assignment". This means it's as if a c_a = a assignment is made right before the inline call is made and the c_a variable is used within the C++ code. Thus, any changes made to c_a are not reflected in Python's a variable. Things do get a little more confusing, however, when looking at variables with mutable types. Changes made in C++ to the contents of mutable types are reflected in the Python variables:

    >>> a = [1, 2]
    >>> weave.inline("PyList_SetItem(a.ptr(), 0, PyInt_FromLong(3));", ['a'])
    >>> print a
    [3, 2]

So modifications to the contents of mutable types in C++ are seen when control is returned to Python. Modifications to immutable types such as tuples, strings, and numbers do not alter the Python variables. If you need to make changes to an immutable variable, you'll need to assign the new value to the "magic" variable return_val in C++. This value is returned by the inline() function:

    >>> a = 1
    >>> a = weave.inline("return_val = Py::new_reference_to(Py::Int(a+1));", ['a'])
    >>> a
    2

The return_val variable can also be used to return newly created values. This is possible by returning a tuple. The following trivial example illustrates how this can be done:
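The "pass by assignment" rule at work here is ordinary Python semantics, so it can be demonstrated without weave at all. The function calls below play the role of the C++ snippets:

```python
# Pure-Python illustration of "pass by assignment": the same rule that
# makes C++ changes to an int invisible but changes to a list visible.
def increment(c_a):
    c_a = c_a + 1          # rebinds the local name only; caller unaffected

def set_first(c_a):
    c_a[0] = 3             # mutates the shared object in place

a = 1
increment(a)
assert a == 1              # unchanged, like weave.inline("a++;", ['a'])

a = [1, 2]
set_first(a)
assert a == [3, 2]         # visible, like the PyList_SetItem example
```

To "change" an immutable value you must return the new value and rebind the name in the caller, which is exactly what assigning to return_val and writing a = weave.inline(...) accomplishes.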

    # python version
    def multi_return():
        return 1, '2nd'

    # C version.
    def c_multi_return():
        code = """
               py::tuple results(2);
               results[0] = 1;
               results[1] = "2nd";
               return_val = results;
               """
        return inline_tools.inline(code)

The example is available in examples/tuple_return.py. It also has the dubious honor of demonstrating how much inline() can slow things down. The C version here is about 7-10 times slower than the Python version. Of course, something so trivial has no reason to be written in C anyway.

The issue with locals()

inline passes the locals() and globals() dictionaries from Python into the C++ function from the calling function. It extracts the variables that are used in the C++ code from these dictionaries, converts them to C++ variables, and then calculates using them. It would be nice, after the calculations were finished, to then insert the new values back into the locals() and globals() dictionaries so that the modified values were reflected in Python. Unfortunately, as pointed out by the Python manual, the locals() dictionary is not writable. I suspect locals() is not writable because there are some optimizations done to speed lookups of the local namespace -- I'm guessing local lookups don't always look at a dictionary to find values. Can someone "in the know" confirm or correct this? Another thing I'd like to know is whether there is a way to write to the local namespace of another stack frame from C/C++. If so, it would be possible to have some clean up code in compiled functions that wrote final values of variables in C++ back to the correct Python stack frame. I think this goes a long way toward making inline truly live up to its name. I don't think we'll get to the point of creating variables in Python for variables created in C -- although I suppose with a C/C++ parser you could do that also.

A quick look at the code

weave generates a C++ file holding an extension function for each inline code snippet. These file names are generated from the md5 signature of the code snippet and saved to a location specified by the PYTHONCOMPILED environment variable (discussed later). The cpp files are generally about 200-400 lines long and include quite a few functions to support type conversions. However, the actual compiled function is pretty simple. Below is the familiar printf example:

    >>> import weave
    >>> a = 1
    >>> weave.inline('printf("%d\\n",a);', ['a'])
    1

And here is the extension function generated by inline:

    static PyObject* compiled_func(PyObject* self, PyObject* args)
    {
        py::object return_val;
        int exception_occured = 0;
        PyObject *py__locals = NULL;
        PyObject *py__globals = NULL;
        PyObject *py_a;
        py_a = NULL;

        if(!PyArg_ParseTuple(args,"OO:compiled_func",&py__locals,&py__globals))
            return NULL;
        try
        {
            PyObject* raw_locals = py_to_raw_dict(py__locals,"_locals");
            PyObject* raw_globals = py_to_raw_dict(py__globals,"_globals");
            /* argument conversion code */
            py_a = get_variable("a",raw_locals,raw_globals);
            int a = convert_to_int(py_a,"a");
            /* inline code */
            /* NDARRAY API VERSION 90907 */
            printf("%d\n",a);
            /*I would like to fill in changed locals and globals here.*/
        }
        catch(...)
        {
            return_val = py::object();
            exception_occured = 1;
        }
        /* cleanup code */
        if(!(PyObject*)return_val && !exception_occured)
        {
            return_val = Py_None;
        }
        return return_val.disown();
    }

Every inline function takes exactly two arguments -- the local and global dictionaries for the current scope. All variable values are looked up out of these dictionaries. The lookups, along with all inline code execution, are done within a C++ try block. If the variables aren't found, or there is an error converting a Python variable to the appropriate type in C++, an exception is raised. The C++ exception is automatically converted to a Python exception by SCXX and returned to Python.

The py_to_int() function illustrates how the conversions and exception handling work. py_to_int first checks that the given PyObject* pointer is not NULL and is a Python integer. If all is well, it calls the Python API to convert the value to an int. Otherwise, it calls handle_bad_type(), which gathers information about what went wrong and then raises a SCXX TypeError which returns to Python as a TypeError.

    int py_to_int(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyInt_Check(py_obj))
            handle_bad_type(py_obj, "int", name);
        return (int) PyInt_AsLong(py_obj);
    }

    void handle_bad_type(PyObject* py_obj, char* good_type, char* var_name)
    {
        char msg[500];
        sprintf(msg, "received '%s' type instead of '%s' for variable '%s'",
                find_type(py_obj), good_type, var_name);
        throw Py::TypeError(msg);
    }

    char* find_type(PyObject* py_obj)
    {
        if(py_obj == NULL) return "C NULL value";
        if(PyCallable_Check(py_obj)) return "callable";
        if(PyString_Check(py_obj)) return "string";
        if(PyInt_Check(py_obj)) return "int";
        if(PyFloat_Check(py_obj)) return "float";
        if(PyDict_Check(py_obj)) return "dict";
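The check-then-convert-or-complain pattern of py_to_int/handle_bad_type translates directly into Python. This sketch mirrors the shape of the C code above (the function names are reused for illustration; this is not weave code):

```python
def find_type(obj):
    # Report a readable type name for the error message.
    if obj is None:
        return 'C NULL value'
    return type(obj).__name__

def py_to_int(obj, name):
    # Check the type first; raise a descriptive TypeError otherwise,
    # mirroring handle_bad_type()'s message format.
    if not isinstance(obj, int):
        raise TypeError("received '%s' type instead of 'int' for variable '%s'"
                        % (find_type(obj), name))
    return int(obj)

print(py_to_int(7, 'a'))           # 7
try:
    py_to_int('seven', 'a')
except TypeError as e:
    print(e)   # received 'str' type instead of 'int' for variable 'a'
```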

        if(PyList_Check(py_obj)) return "list";
        if(PyTuple_Check(py_obj)) return "tuple";
        if(PyFile_Check(py_obj)) return "file";
        if(PyModule_Check(py_obj)) return "module";
        //should probably do more interrogation (and thinking) on these.
        if(PyCallable_Check(py_obj) && PyInstance_Check(py_obj)) return "callable";
        if(PyInstance_Check(py_obj)) return "instance";
        if(PyCallable_Check(py_obj)) return "callable";
        return "unknown type";
    }

Since the inline code is also executed within the try/catch block, you can use CXX exceptions within your code. It is usually a bad idea to directly return from your code, even if an error occurs, because this skips the clean up section of the extension function. In this simple example, there isn't any clean up code, but in more complicated examples, there may be some reference counting that needs to be taken care of here on converted variables. To avoid this, either use exceptions or set return_val to NULL and use if/then's to skip code after errors.

Technical Details

There are several main steps to using C/C++ code within Python:

1. Type conversion
2. Generating C/C++ code
3. Compile the code to an extension module
4. Catalog (and cache) the function for future use

Items 1 and 2 above are related, but most easily discussed separately. Type conversions are customizable by the user if needed. Understanding them is pretty important for anything beyond trivial uses of inline. Generating the C/C++ code is handled by the ext_function and ext_module classes, and item 2 is covered later in the chapter covering the ext_tools module. Compiling the code is handled by distutils; some customizations were needed, but they were relatively minor and do not require changes to distutils itself, and distutils is covered by a completely separate document xxx. Cataloging is pretty simple in concept, but surprisingly required the most code to implement (and still likely needs some work). So, this section covers items 1 and 4 from the list.

Passing Variables in/out of the C/C++ code

Note: Passing variables into the C code is pretty straightforward, but there are subtleties to how variable modifications in C are returned to Python; see Returning Values for a more thorough discussion of this issue.

Type Conversions

Note: Maybe xxx_converter instead of xxx_specification is a more descriptive name. Might change in future version?

By default, inline() makes the following type conversions between Python and C++ types.

Table 1.3: Default Data Type Conversions

    Python           C++
    int              int
    float            double
    complex          std::complex
    string           py::string
    list             py::list
    dict             py::dict
    tuple            py::tuple
    file             FILE*
    callable         py::object
    instance         py::object
    numpy.ndarray    PyArrayObject*
    wxXXX            wxXXX*

The Py:: namespace is defined by the SCXX library, which has C++ class equivalents for many Python types. std:: is the namespace of the standard library in C++.

Note:

* I haven't figured out how to handle long int yet (I think they are currently converted to int -- check this).
* I'm not sure I have reference counting correct on a few of these. The only thing I increase/decrease the ref count on is NumPy arrays. If you see an issue, please let me know.
* Hopefully VTK will be added to the list soon.

Python to C++ conversions fill in code in several locations in the generated inline extension function. Below is the basic template for the function. This is actually the exact code that is generated by calling weave.inline(""). The /* inline code */ section is filled with the code passed to the inline() function call. The /* argument conversion code */ and /* cleanup code */ sections are filled with code that handles conversion from Python to C++ types and code that deallocates memory or manipulates reference counts before the function returns. The following sections demonstrate how these two areas are filled in by the default conversion methods.

NumPy Argument Conversion

Integer, floating point, and complex arguments are handled in a very similar fashion. Consider the following inline function that has a single integer variable passed in:

    >>> a = 1
    >>> inline("", ['a'])

The argument conversion code inserted for a is:

    /* argument conversion code */
    int a = py_to_int(get_variable("a", raw_locals, raw_globals), "a");

get_variable() reads the variable a from the local and global namespaces. py_to_int() has the following form:

    static int py_to_int(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyInt_Check(py_obj))
            handle_bad_type(py_obj, "int", name);
        return (int) PyInt_AsLong(py_obj);
    }

Similarly, the float and complex conversion routines look like:
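The conversion table can itself be expressed as a simple first-match lookup. This is an illustrative sketch only -- the real dispatch lives in weave's *_spec classes, not in a dictionary like this:

```python
# Default Python -> C++ type names from Table 1.3 (subset; the real
# table also covers files, arrays, and wx types).
default_conversions = {
    int: 'int',
    float: 'double',
    complex: 'std::complex',
    str: 'py::string',
    list: 'py::list',
    dict: 'py::dict',
    tuple: 'py::tuple',
}

def cxx_type_for(value):
    # First matching entry wins; anything unmatched falls back to a
    # generic py::object, like callables and instances in the table.
    for py_type, cxx_name in default_conversions.items():
        if isinstance(value, py_type):
            return cxx_name
    return 'py::object'

print(cxx_type_for(1))             # int
print(cxx_type_for([1]))           # py::list
print(cxx_type_for(lambda: None))  # py::object
```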

    static double py_to_float(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyFloat_Check(py_obj))
            handle_bad_type(py_obj, "float", name);
        return PyFloat_AsDouble(py_obj);
    }

    static std::complex py_to_complex(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyComplex_Check(py_obj))
            handle_bad_type(py_obj, "complex", name);
        return std::complex(PyComplex_RealAsDouble(py_obj),
                            PyComplex_ImagAsDouble(py_obj));
    }

NumPy conversions do not require any clean up code.

String, List, Tuple, and Dictionary Conversion

Strings, Lists, Tuples and Dictionaries are all converted to SCXX types by default. For the following code,

    >>> a = [1]
    >>> inline("", ['a'])

the argument conversion code inserted for a is:

    /* argument conversion code */
    Py::List a = py_to_list(get_variable("a", raw_locals, raw_globals), "a");

get_variable() reads the variable a from the local and global namespaces. py_to_list() and its friends have the following form:

    static Py::List py_to_list(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyList_Check(py_obj))
            handle_bad_type(py_obj, "list", name);
        return Py::List(py_obj);
    }

    static Py::String py_to_string(PyObject* py_obj, char* name)
    {
        if (!PyString_Check(py_obj))
            handle_bad_type(py_obj, "string", name);
        return Py::String(py_obj);
    }

    static Py::Dict py_to_dict(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyDict_Check(py_obj))
            handle_bad_type(py_obj, "dict", name);
        return Py::Dict(py_obj);
    }

    static Py::Tuple py_to_tuple(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyTuple_Check(py_obj))
            handle_bad_type(py_obj, "tuple", name);
        return Py::Tuple(py_obj);
    }

SCXX handles reference counts for strings, lists, tuples, and dictionaries, so clean up code isn't necessary.

File Conversion

For the following code,

    >>> a = open("bob", 'w')
    >>> inline("", ['a'])

the argument conversion code is:

    /* argument conversion code */
    PyObject* py_a = get_variable("a", raw_locals, raw_globals);
    FILE* a = py_to_file(py_a, "a");

get_variable() reads the variable a from the local and global namespaces. py_to_file() converts PyObject* to a FILE* and increments the reference count of the PyObject*:

    FILE* py_to_file(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyFile_Check(py_obj))
            handle_bad_type(py_obj, "file", name);
        Py_INCREF(py_obj);
        return PyFile_AsFile(py_obj);
    }

Because the PyObject* was incremented, the clean up code needs to decrement the counter:

    /* cleanup code */
    Py_XDECREF(py_a);

It's important to understand that file conversion only works on actual files -- i.e. ones created using the open() command in Python. It does not support converting arbitrary objects that support the file interface into C FILE* pointers. This can affect many things. For example, thinking back to the initial printf() examples, one might be tempted to solve the problem of C and Python IDEs (PythonWin, PyCrust, etc.) writing to different stdout and stderr by using fprintf() and passing in sys.stdout and sys.stderr. That is, instead of

    >>> weave.inline('printf("hello\\n");')

you might try:

    >>> buf = sys.stdout
    >>> weave.inline('fprintf(buf,"hello\\n");', ['buf'])

This will work as expected from a standard python interpreter, but in PythonWin, the same call fails. The traceback tells us that inline() was unable to convert 'buf' to a C++ type (if instance conversion was implemented, the error would have occurred at runtime instead). Why is this? Let's look at what the buf object really is:

    >>> buf
    pywin.framework.interact.InteractiveView instance at 00EAD014

PythonWin has reassigned sys.stdout to a special object that implements the Python file interface. This works great in Python, but since the special object doesn't have a FILE* pointer underlying it, fprintf doesn't know what to do with it (well, this will be the problem once instance conversion is implemented).
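The real-file-versus-file-interface distinction survives in modern Python and is easy to probe: only OS-level files carry a file descriptor that C's FILE* machinery could use. A small sketch (the helper name is made up):

```python
import io
import tempfile

def has_real_descriptor(obj):
    # A real OS file can hand back a file descriptor; objects that
    # merely implement the file interface (like PythonWin's stdout
    # replacement, or io.StringIO here) cannot.
    try:
        obj.fileno()
        return True
    except (AttributeError, io.UnsupportedOperation):
        return False

with tempfile.TemporaryFile() as f:
    print(has_real_descriptor(f))          # True, like open('bob', 'w')

buf = io.StringIO()                         # file interface, no FILE*
print(has_real_descriptor(buf))             # False
```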

Callable, Instance, and Module Conversion

Note: Need to look into how ref counts should be handled. Also, Instance and Module conversion are not currently implemented.

    >>> def a(): pass
    >>> inline("", ['a'])

Callable and instance variables are converted to PyObject*. Nothing is done to their reference counts.

    /* argument conversion code */
    PyObject* a = py_to_callable(get_variable("a", raw_locals, raw_globals), "a");

get_variable() reads the variable a from the local and global namespaces. The py_to_callable() and py_to_instance() functions don't currently increment the ref count:

    PyObject* py_to_callable(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyCallable_Check(py_obj))
            handle_bad_type(py_obj, "callable", name);
        return py_obj;
    }

    PyObject* py_to_instance(PyObject* py_obj, char* name)
    {
        if (!py_obj || !PyFile_Check(py_obj))
            handle_bad_type(py_obj, "instance", name);
        return py_obj;
    }

There is no cleanup code for callables, instances, or modules.

Customizing Conversions

Converting from Python to C++ types is handled by xxx_specification classes. A type specification class actually serves in two related but different roles. The first is in determining whether a Python variable that needs to be converted should be represented by the given class. The second is as a code generator that generates the C++ code needed to convert from Python to C++ types for a specific variable. When

    >>> a = 1
    >>> weave.inline('printf("%d",a);', ['a'])

is called for the first time, the variable 'a' is tested against a list of type specifications (the default list is stored in weave/ext_tools.py). The first specification in the list that matches is used to represent the variable.

Examples of xxx_specification are scattered throughout numerous "xxx_spec.py" files in the weave package. Closely related to the xxx_specification classes are yyy_info classes. These classes contain compiler, header, and support code information necessary for including a certain set of capabilities (such as blitz++ or CXX support) in a compiled module. xxx_specification classes have one or more yyy_info classes associated with them. If you'd like to define your own set of type specifications, the current best route is to examine some of the existing spec and info files. Maybe looking over sequence_spec.py and cxx_info.py is a good place to start. After defining specification classes, you'll need to pass them into inline using the type_factories argument. A lot of times you may just want to change how a specific variable type is represented, but would like all other type conversions to have default behavior. Say you'd rather have Python strings converted to std::string, or maybe char*, instead of using the CXX string object. This requires that a new specification class that handles strings is written
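The first-match search over a list of specifications can be sketched in a few lines. The class and function names here are hypothetical, chosen only to mirror the xxx_specification idea; prepending a custom spec is what makes it win:

```python
# Default spec: Python strings become CXX py::string objects.
class StringSpec:
    cxx_type = 'py::string'
    def matches(self, value):
        return isinstance(value, str)

# Custom spec: prefer std::string for Python strings instead.
class StdStringSpec:
    cxx_type = 'std::string'
    def matches(self, value):
        return isinstance(value, str)

default_specs = [StringSpec()]
custom_specs = [StdStringSpec()] + default_specs   # prepend to override

def spec_for(value, specs):
    # The first specification in the list that matches is used.
    for spec in specs:
        if spec.matches(value):
            return spec.cxx_type
    raise TypeError('no specification matches %r' % (value,))

print(spec_for('abc', default_specs))   # py::string
print(spec_for('abc', custom_specs))    # std::string
```

Because the search stops at the first match, a prepended specification effectively overrides the default without touching any other conversion.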

a new function is compiled with the given argument types. If the cache is present. then the catalog attempts to create a directory based on your user id in the /tmp directory.14.SciPy Reference Guide.. This function is written to the on-disk catalog as well as into the in-memory cache. 3. By saving information about compiled functions to disk.weave) 123 .py has a class called catalog that helps keep track of previously compiled functions. Weave (scipy. Release 0. then the function is compiled. When inline(cpp_code) is called the following things happen: 1. On disk catalogs are stored in the same manor using standard Python shelves.10. Catalog search paths and the PYTHONCOMPILED variable The default location for catalog ﬁles on Unix is is ~/. I think you’re out of luck. the catalog looks through the on-disk function catalogs for the entries. it is created the ﬁrst time a catalog is used. If it hasn’t. 2. The catalog class also keeps an in-memory cache with a list of all the functions compiled for cpp_code. When a lookup for cpp_code fails. The following code demonstrates how this is done: . each function in the cache is called until one is found that was compiled for the correct argument types. The PYTHONCOMPILED variable determines where to search for these catalogs and in what order. This prevents inline() and related functions from having to compile functions everytime they are called. A fast local cache of functions is checked for the last function called for cpp_code. The Catalog catalog. a directory called pythonXX_compiled is created in the user’s temporary directory. catalog will check an in memory cache to see if the function has already been loaded into python. Early on. If none of the functions work. it effectively overrides the default string speciﬁcation. written to the ﬁrst writable directory in the PYTHONCOMPILED path.. On Windows. If PYTHONCOMPILED is not present several platform dependent locations are searched. 
If the function isn’t found in the on-disk catalog. Along 1. there was a question as to whether md5 check sums of the C++ code strings should be used instead of the actual code strings. Since it is closer to the front of the list. All functions found for cpp_code in the path are loaded into the in-memory cache with functions found earlier in the search path closer to the front of the call list. then it is loaded from disk. Function Storage Function caches are stored as dictionaries where the key is the entire C++ code string and the value is either a single function (as in the “level 1” cache) or a list of functions (as in the main catalog cache). for any reason it isn’t. If this directory doesn’t exist. then it starts searching through persisent catalogs on disk to see if it ﬁnds an entry for the given function. If cpp_code has ever been called. I think this is because it is more time consuming to compute the md5 value than it is to do look-ups of long strings in the dictionary. then this cache will be present (loaded from disk). The actual catalog ﬁle that lives in this directory is a Python shelve with a platform speciﬁc name such as “nt21compiled_catalog” so that multiple OSes can share the same ﬁle systems without trampling on each other.py ﬁle for the test run. Instead.0rc1 and then prepended to a list of the default type speciﬁcations. and also loaded into the in-memory cache. it isn’t necessary to re-compile functions everytime you stop and restart the interpreter. I don’t think either of these should ever fail though. The directory must be writable.pythonXX_compiled where XX is version of Python being used. I think this is the route inline Perl took. the entire string showed that using the entire string was at least a factor of 3 or 4 faster for Python. Look at the examples/md5_speed. The directory permissions are set so that only you have access to the directory. Functions are compiled once and stored for future use. If the cache isn’t present. 
If this fails. If an entry for cpp_code doesn’t exist in the cache or the cached function call fails (perhaps because the function doesn’t have compatible types) then the next step is to check the catalog. If. Some (admittedly quick) tests of the md5 vs.

a: from scipy import * # or from NumPy import * a = ones((512.1:-1] = (b[1:-1. # now average a[1:-1.1:-1] + b[2:. see a possible problem here. If you are using python21 on linux.py in site-packages has a compiled function in it. The in-module cache (in weave. but the beneﬁt is very small (less than 5%) and the utility is quite a bit less. Relative directory paths (‘.512). but compiled for different input variables.cpp and .1:-1] + b[:-2. This is useful if an admin wants to build a lot of compiled functions during the build of a package and then install them in site-packages along with the package. Float64) # . and the module bob. but there may be a few discrepancies. the new function will be stored in their default directory (or some other writable directory in the PYTHONCOMPILED path) because the user will not have write access to the site-packages directory. For most applications.8 Blitz Note: most of this section is lifted from old documentation.python21_compiled/linuxpython_compiled The default location is always included in the search path. So.. compiled expressions should provide a factor of 2-10 speed-up over NumPy arrays. then the catalog search order when calling that function for the ﬁrst time in a python session would be: /usr/lib/python21/site-packages/linuxpython_compiled /some/path/linuxpython_compiled ~/. Each function in the lists executes the same C++ code string.export PYTHONCOMPILED.14.directory such as /usr/lib/python21/sitepackages/python21_compiled/linuxpython_compiled so that library ﬁles compiled with python21 are tried to link with python22 ﬁles in some strange scenarios. b..’) separated list of directories.so or .so or . It speciﬁes that the compiled catalog should reside in the same directory as the module that called it. The catalog ﬁle simply contains keys which are the C++ code strings with values that are lists of functions. . 124 Chapter 1. it is a (‘.10.2:] + b[1:-1. 
These directories will be searched prior to the default directory for a compiled function catalog. Need to check this.pyd ﬁles created by inline will live in this directory.. Note. the following code fragment takes a 5 point average of the 512x512 2d image.512). 1. As an example. The function lists point at functions within these compiled modules.0rc1 with the catalog ﬁle.. weave. User’s who specify MODULE in their PYTHONCOMPILED variable will have access to these compiled functions. You can use the PYTHONCOMPILED environment variable to specify alternative locations for compiled functions. and stores it in array.1:-1] \ + b[1:-1.blitz() compiles NumPy Python expressions for fast execution. we’ll stick with a dictionary as the cache. the . Note: hmmm.pyd ﬁles are written. Release 0.SciPy Reference Guide.do some stuff to fill in b.inline_tools reduces the overhead of calling inline functions by about a factor of 2. however. Also. that if they call the function with a set of argument types that it hasn’t previously been built for. On windows.:-2]) / 5. SciPy Tutorial .’ and ‘. I should probably make a sub. There is a “special” path variable called MODULE that can be placed in the PYTHONCOMPILED variable. It should be pretty accurate. An example of using the PYTHONCOMPILED path on bash follows: PYTHONCOMPILED=MODULE:/some/path. the ﬁrst writable directory in the list is where all new compiled function catalogs.’) should work ﬁne in the PYTHONCOMPILED variable as should environement variables. It can be reduced a little more for type loop calls where the same function is called over and over again if the cache was a single value instead of a dictionary.. Using compiled expressions is meant to be as unobtrusive as possible and works much like pythons exec statement. Float64) b = ones((512. On Unix this is a colon (‘:’) separated list of directories.cpp and .
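The snippet above uses old Numeric-era names (Float64, ones, star imports). For orientation only, here is the same 5 point average written against modern NumPy; the translation (np.float64, explicit import) is mine, not part of the original document:

```python
import numpy as np

# stand-in image; the original fills b with real data
b = np.ones((512, 512), dtype=np.float64)
a = np.ones((512, 512), dtype=np.float64)

# 5 point average of the interior points, as in the expression above
a[1:-1, 1:-1] = (b[1:-1, 1:-1] + b[2:, 1:-1] + b[:-2, 1:-1]
                 + b[1:-1, 2:] + b[1:-1, :-2]) / 5.0
```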

SciPy Reference Guide, Release 0.10.0rc1

To compile the expression, convert the expression to a string by putting quotes around it and then use weave.blitz:

import weave
expr = "a[1:-1,1:-1] = (b[1:-1,1:-1] + b[2:,1:-1] + b[:-2,1:-1]" \
       "+ b[1:-1,2:] + b[1:-1,:-2]) / 5."
weave.blitz(expr)

The first time weave.blitz is run for a given expression and set of arguments, C++ code that accomplishes the exact same task as the Python expression is generated and compiled to an extension module. This can take up to a couple of minutes depending on the complexity of the function. Subsequent calls to the function are very fast. Further, the generated module is saved between program executions so that the compilation is only done once for a given expression and associated set of array types. If the given expression is executed with a new set of array types, the code must be compiled again. This does not overwrite the previously compiled function; both of them are saved and available for execution. The following table compares the run times for standard NumPy code and compiled code for the 5 point averaging:

Method                       Run Time (seconds)
Standard NumPy               0.46349
blitz (1st time compiling)   78.95526
blitz (subsequent calls)     0.05843 (factor of 8 speedup)

These numbers are for a 512x512 double precision image run on a 400 MHz Celeron processor under RedHat Linux 6.2. Because of the slow compile times, it's probably most effective to develop algorithms as you usually do using the capabilities of scipy or the NumPy module. Once the algorithm is perfected, put quotes around it and execute it using weave.blitz. This provides the standard rapid prototyping strengths of Python and results in algorithms that run close to that of hand coded C or Fortran.

Requirements

Currently, weave.blitz has only been tested under Linux with gcc-2.95-3 and on Windows with Mingw32 (2.95.2). Its compiler requirements are pretty heavy duty (see the blitz++ home page), so it won't work with just any compiler. Particularly, MSVC++ isn't up to snuff. A number of other compilers such as KAI++ will also work, but my suspicions are that gcc will get the most use.

Limitations

1. Currently, weave.blitz handles all standard mathematical operators except for the ** power operator.
The built-in trigonometric, log, floor/ceil, and fabs functions might work (but haven't been tested). It also handles all types of array indexing supported by the NumPy module. numarray's NumPy compatible array indexing modes are likewise supported, but numarray's enhanced (array based) indexing modes are not supported. weave.blitz does not currently support operations that use array broadcasting, nor have any of the special purpose functions in NumPy such as take, compress, etc. been implemented. Note that there are no obvious reasons why most of this functionality cannot be added to scipy.weave, so it will likely trickle into future versions. Using slice() objects directly instead of start:stop:step is also not supported.

2. Currently weave.blitz only works on expressions that include assignment, such as

>>> result = b + c + d

This means that the result array must exist before calling weave.blitz. Future versions will allow the following:

>>> result = weave.blitz_eval("b + c + d")

3. weave.blitz works best when algorithms can be expressed in a "vectorized" form. Algorithms that have a large number of if/thens and other conditions are better hand written in C or Fortran. Further, the restrictions imposed by requiring vectorized expressions sometimes preclude the use of more efficient data structures or algorithms. For maximum speed in these cases, hand-coded C or Fortran code is the only way to go.

4. weave.blitz can produce different results than NumPy in certain situations. It can happen when the array receiving the results of a calculation is also used during the calculation. The NumPy behavior is to carry out the entire calculation on the right hand side of an equation and store it in a temporary array. This temporary array is assigned to the array on the left hand side of the equation. blitz, on the other hand, does a "running" calculation of the array elements, assigning values from the right hand side to the elements on the left hand side immediately after they are calculated. Here is an example, provided by Prabhu Ramachandran, where this happens:

# 4 point average.
>>> expr = "u[1:-1, 1:-1] = (u[0:-2, 1:-1] + u[2:, 1:-1] + " \
...        "u[1:-1,0:-2] + u[1:-1, 2:])*0.25"
>>> u = zeros((5, 5), 'd'); u[0,:] = 100
>>> exec (expr)
>>> u
array([[ 100.,  100.,  100.,  100.,  100.],
       [   0.,   25.,   25.,   25.,    0.],
       [   0.,    0.,    0.,    0.,    0.],
       [   0.,    0.,    0.,    0.,    0.],
       [   0.,    0.,    0.,    0.,    0.]])
>>> u = zeros((5, 5), 'd'); u[0,:] = 100
>>> weave.blitz (expr)
>>> u
array([[ 100.       ,  100.       ,  100.       ,  100.       ,  100.       ],
       [   0.       ,   25.       ,   31.25     ,   32.8125   ,    0.       ],
       [   0.       ,    6.25     ,    9.375    ,   10.546875 ,    0.       ],
       [   0.       ,    1.5625   ,    2.734375 ,    3.3203125,    0.       ],
       [   0.       ,    0.       ,    0.       ,    0.       ,    0.       ]])

You can prevent this behavior by using a temporary array.

>>> u = zeros((5, 5), 'd'); u[0,:] = 100
>>> temp = zeros((4, 4), 'd');
>>> expr = "temp = (u[0:-2, 1:-1] + u[2:, 1:-1] + " \
...        "u[1:-1,0:-2] + u[1:-1, 2:])*0.25;" \
...        "u[1:-1,1:-1] = temp"
>>> weave.blitz (expr)
>>> u
array([[ 100.,  100.,  100.,  100.,  100.],
       [   0.,   25.,   25.,   25.,    0.],
       [   0.,    0.,    0.,    0.,    0.],
       [   0.,    0.,    0.,    0.,    0.],
       [   0.,    0.,    0.,    0.,    0.]])
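The "running" calculation that blitz performs can be imitated in plain Python with an explicit element-by-element loop; this sketch (mine, not weave code) reproduces the blitz-style values shown above:

```python
import numpy as np

u = np.zeros((5, 5))
u[0, :] = 100.0

# update in place, one element at a time, so freshly written
# values are reused by later elements -- the blitz behavior
for i in range(1, 4):
    for j in range(1, 4):
        u[i, j] = (u[i-1, j] + u[i+1, j] + u[i, j-1] + u[i, j+1]) * 0.25

print(u[1, 2])  # 31.25, not the 25.0 that NumPy's temporary-array semantics give
```

NumPy's vectorized assignment yields 25.0 here because the entire right hand side is evaluated before any element of u is overwritten.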

5. One other point deserves mention lest people be confused. weave.blitz is not a general purpose Python->C compiler. It only works for expressions that contain NumPy arrays and/or Python scalar values. This focused scope concentrates effort on the computationally intensive regions of the program and sidesteps the difficult issues associated with a general purpose Python->C compiler.

NumPy efficiency issues: What compilation buys you

Some might wonder why compiling NumPy expressions to C++ is beneficial since operations on NumPy arrays are already executed within C loops. The problem is that anything other than the simplest expressions is executed in a less than optimal fashion. Consider the following NumPy expression:


a = 1.2 * b + c * d

When NumPy calculates the value for the 2d array, a, it does the following steps:

temp1 = 1.2 * b
temp2 = c * d
a = temp1 + temp2

Two things to note. Since b is a (perhaps large) array, a large temporary array must be created to store the results of 1.2 * b. The same is true for temp2. Allocation is slow. The second thing is that we have 3 loops executing, one to calculate temp1, one for temp2, and one for adding them up. A C loop for the same problem might look like:

for(int i = 0; i < M; i++)
    for(int j = 0; j < N; j++)
        a[i][j] = 1.2 * b[i][j] + c[i][j] * d[i][j];
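The two evaluation strategies give identical values; here is a small pure-Python check of a fused loop against the NumPy expression (an illustrative sketch, not weave output):

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 5
b = rng.random((M, N))
c = rng.random((M, N))
d = rng.random((M, N))

# NumPy-style evaluation: temporaries for 1.2*b and c*d, three passes
a_numpy = 1.2 * b + c * d

# fused-loop style: one pass, no temporary arrays
a_fused = np.empty((M, N))
for i in range(M):
    for j in range(N):
        a_fused[i, j] = 1.2 * b[i, j] + c[i, j] * d[i, j]
```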

Here, the 3 loops have been fused into a single loop and there is no longer a need for a temporary array. This provides a significant speed improvement over the above example (write me and tell me what you get). So, converting NumPy expressions into C/C++ loops that fuse the loops and eliminate temporary arrays can provide big gains. The goal, then, is to convert NumPy expressions to C/C++ loops, compile them in an extension module, and then call the compiled extension function. The good news is that there is an obvious correspondence between the NumPy expression above and the C loop. The bad news is that NumPy is generally much more powerful than this simple example illustrates and handling all possible indexing possibilities results in loops that are less than straightforward to write. (Take a peek in NumPy for confirmation.) Luckily, there are several available tools that simplify the process.

The Tools

weave.blitz relies heavily on several remarkable tools. On the Python side, the main facilitators are Jeremy Hylton's parser module and Travis Oliphant's NumPy module. On the compiled language side, Todd Veldhuizen's blitz++ array library, written in C++ (shhhh. don't tell David Beazley), does the heavy lifting. Don't assume that, because it's C++, it's much slower than C or Fortran. Blitz++ uses a jaw dropping array of template techniques (metaprogramming, expression templates, etc.) to convert innocent looking and readable C++ expressions into code that usually executes within a few percentage points of Fortran code for the same problem. This is good. Unfortunately all the template raz-ma-taz is very expensive to compile, so the 200 line extension modules often take 2 or more minutes to compile. This isn't so good. weave.blitz works to minimize this issue by remembering where compiled modules live and reusing them instead of re-compiling every time a program is re-run.
Parser

Tearing NumPy expressions apart, examining the pieces, and then rebuilding them as C++ (blitz) expressions requires a parser of some sort. I can imagine someone attacking this problem with regular expressions, but it'd likely be ugly and fragile. Amazingly, Python solves this problem for us. It actually exposes its parsing engine to the world through the parser module. The following fragment creates an Abstract Syntax Tree (AST) object for the expression and then converts it to a (rather unpleasant looking) deeply nested list representation of the tree.

>>> import parser
>>> import scipy.weave.misc
>>> ast = parser.suite("a = b * c + d")
>>> ast_list = ast.tolist()
>>> sym_list = scipy.weave.misc.translate_symbols(ast_list)
>>> pprint.pprint(sym_list)
['file_input',
 ['stmt',
  ['simple_stmt',
   ['small_stmt',
    ['expr_stmt',
     ['testlist',
      ['test', ['and_test', ['not_test', ['comparison', ['expr', ['xor_expr',
       ['and_expr', ['shift_expr', ['arith_expr', ['term', ['factor',
        ['power', ['atom', ['NAME', 'a']]]]]]]]]]]]]]],
     ['EQUAL', '='],
     ['testlist',
      ['test', ['and_test', ['not_test', ['comparison', ['expr', ['xor_expr',
       ['and_expr', ['shift_expr', ['arith_expr',
        ['term',
         ['factor', ['power', ['atom', ['NAME', 'b']]]],
         ['STAR', '*'],
         ['factor', ['power', ['atom', ['NAME', 'c']]]]],
        ['PLUS', '+'],
        ['term', ['factor', ['power',
         ['atom', ['NAME', 'd']]]]]]]]]]]]]]]]],
    ['NEWLINE', '']]],
 ['ENDMARKER', '']]
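The parser module shown here was removed in Python 3.10; for readers following along today, the same kind of tree inspection can be sketched with the modern ast module (this stand-in is mine and is not what weave used):

```python
import ast

tree = ast.parse("a = b * c + d")

# collect every variable name in the statement -- the sort of
# pattern search weave performed on parser's nested lists
names = sorted({node.id for node in ast.walk(tree)
                if isinstance(node, ast.Name)})
print(names)  # ['a', 'b', 'c', 'd']
```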

Despite its looks, with some tools developed by Jeremy H., it's possible to search these trees for specific patterns (sub-trees), extract the sub-tree, manipulate it, converting Python specific code fragments to blitz code fragments, and then re-insert it in the parse tree. The parser module documentation has some details on how to do this. Traversing the new blitzified tree, writing out the terminal symbols as you go, creates our new blitz++ expression string.

Blitz and NumPy

The other nice discovery in the project is that the data structure used for NumPy arrays and blitz arrays is nearly identical. NumPy stores "strides" as byte offsets and blitz stores them as element offsets, but other than that, they are the same. Further, most of the concepts and capabilities of the two libraries are remarkably similar. It is satisfying that two completely different implementations solved the problem with similar basic architectures. It is also fortuitous. The work involved in converting NumPy expressions to blitz expressions was greatly diminished. As an example, consider the code for slicing an array in Python with a stride:

>>> a = b[0:4:2] + c
>>> a
[0,2,4]

In Blitz it is as follows:

Array<int,1> b(10);
Array<int,1> c(3);
// ...
Array<int,1> a = b(Range(0,3,2)) + c;


Here the Range object works exactly like Python slice objects with the exception that the top index (3) is inclusive whereas Python's (4) is exclusive. Other differences include the type declarations in C++ and parentheses instead of brackets for indexing arrays. Currently, weave.blitz handles the inclusive/exclusive issue by subtracting one from upper indices during the translation. An alternative that is likely more robust/maintainable in the long run is to write a PyRange class that behaves like Python's range. This is likely very easy.

The stock blitz also doesn't handle negative indices in ranges. The current implementation of blitz() has a partial solution to this problem. It calculates an index that starts with a '-' sign by subtracting it from the maximum index in the array so that:

                     upper index limit
                         /------\
b[:-1] -> b(Range(0, Nb[0]-1-1))

This approach fails, however, when the top index is calculated from other values. In the following scenario, if i+j evaluates to a negative value, the compiled code will produce incorrect results and could even core-dump. Right now, all calculated indices are assumed to be positive.

b[:i+j] -> b(Range(0,i+j-1))
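The translation rule described above (negative upper indices are offset from the array length, then one is subtracted for blitz's inclusive ranges) can be written as a tiny helper. This is a hypothetical sketch of the rule, not weave's actual code:

```python
def blitz_upper(stop, n):
    """Exclusive Python upper slice index -> inclusive blitz upper bound.

    Assumes stop is a known integer; as the text notes, calculated
    indices are simply assumed to be non-negative by weave.blitz.
    """
    if stop < 0:
        stop += n       # b[:-1] on length n becomes n-1, exclusive
    return stop - 1     # exclusive Python bound -> inclusive Range bound

print(blitz_upper(-1, 10))  # 8, i.e. b[:-1] -> b(Range(0, 8))
```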

A solution is to calculate all indices up front using if/then to handle the +/- cases. This is a little work and results in more code, so it hasn't been done. I'm holding out to see if blitz++ can be modified to handle negative indexing, but haven't looked into how much effort is involved yet. While it needs fixin', I don't think there is a ton of code where this is an issue.

The actual translation of the Python expressions to blitz expressions is currently a two part process. First, all x:y:z slicing expressions are removed from the AST, converted to slice(x,y,z) and re-inserted into the tree. Any math needed on these expressions (subtracting from the maximum index, etc.) is also performed here. _beg and _end are used as special variables that are defined as blitz::fromBegin and blitz::toEnd.

a[i+j:i+j+1,:] = b[2:3,:]

becomes a more verbose:

a[slice(i+j,i+j+1),slice(_beg,_end)] = b[slice(2,3),slice(_beg,_end)]

The second part does a simple string search/replace to convert to a blitz expression with the following translations:

slice(_beg,_end) -> _all    # not strictly needed, but cuts down on code.
slice            -> blitz::Range
[                -> (
]                -> )
_stp             -> 1
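Applied in order to the example expression above, these substitutions amount to a few string replacements. The snippet below is an illustrative sketch of the pass, not weave's actual implementation:

```python
expr = "a[slice(i+j,i+j+1),slice(_beg,_end)] = b[slice(2,3),slice(_beg,_end)]"

# order matters: collapse the _beg/_end form before renaming "slice"
for old, new in [("slice(_beg,_end)", "_all"),
                 ("slice", "blitz::Range"),
                 ("[", "("),
                 ("]", ")"),
                 ("_stp", "1")]:
    expr = expr.replace(old, new)

print(expr)
# a(blitz::Range(i+j,i+j+1),_all) = b(blitz::Range(2,3),_all)
```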

_all is defined in the compiled function as blitz::Range.all(). These translations could of course happen directly in the syntax tree. But the string replacement is slightly easier. Note that name spaces are maintained in the C++ code to lessen the likelihood of name clashes. Currently no effort is made to detect name clashes. A good rule of thumb is don't use values that start with '_' or 'py_' in compiled expressions and you'll be fine.

Type definitions and coercion

So far we've glossed over the dynamic vs. static typing issue between Python and C++. In Python, the type of value that a variable holds can change through the course of program execution. C/C++, on the other hand, forces you to declare the type of value a variable will hold at compile time. weave.blitz handles this issue by examining the types of the variables in the expression being executed, and compiling a function for those explicit types. For example:


a = ones((5,5), Float32)
b = ones((5,5), Float32)
weave.blitz("a = a + b")

When compiling this expression to C++, weave.blitz sees that the values for a and b in the local scope have type Float32, or ‘ﬂoat’ on a 32 bit architecture. As a result, it compiles the function using the ﬂoat type (no attempt has been made to deal with 64 bit issues). What happens if you call a compiled function with array types that are different than the ones for which it was originally compiled? No biggie, you’ll just have to wait on it to compile a new version for your new types. This doesn’t overwrite the old functions, as they are still accessible. See the catalog section in the inline() documentation to see how this is handled. Sufﬁce to say, the mechanism is transparent to the user and behaves like dynamic typing with the occasional wait for compiling newly typed functions. When working with combined scalar/array operations, the type of the array is always used. This is similar to the savespace ﬂag that was recently added to NumPy. This prevents issues with the following expression perhaps unexpectedly being calculated at a higher (more expensive) precision that can occur in Python:

>>> a = array((1,2,3), typecode=Float32)
>>> b = a * 2.1  # results in b being a Float64 array.

In this example,

>>> a = ones((5,5), Float32)
>>> b = ones((5,5), Float32)
>>> weave.blitz("b = a * 2.1")

the 2.1 is cast down to a float before carrying out the operation. If you really want to force the calculation to be a double, define a and b as double arrays.

One other point of note. Currently, you must include both the right hand side and left hand side (assignment side) of your equation in the compiled expression. Also, the array being assigned to must be created prior to calling weave.blitz. I'm pretty sure this is easily changed so that a compiled_eval expression can be defined, but no effort has been made to allocate new arrays (and discern their type) on the fly.

Cataloging Compiled Functions

See The Catalog section in the weave.inline() documentation.

Checking Array Sizes

Surprisingly, one of the big initial problems with compiled code was making sure all the arrays in an operation were of compatible type. The following case is trivially easy:

a = b + c

It only requires that arrays a, b, and c have the same shape. However, expressions like:

a[i+j:i+j+1,:] = b[2:3,:] + c

are not so trivial. Since slicing is involved, the size of the slices, not the input arrays, must be checked. Broadcasting complicates things further because arrays and slices with different dimensions and shapes may be compatible for math operations (broadcasting isn't yet supported by weave.blitz). Reductions have a similar effect as their results are different shapes than their input operands. The binary operators in NumPy compare the shapes of their two operands just before they operate on them. This is possible because NumPy treats each operation independently. The intermediate (temporary) arrays created during sub-operations in an expression are tested for the correct shape before they are combined by another operation. Because weave.blitz fuses all operations into a single loop, this isn't possible. The shape comparisons must be done and guaranteed compatible before evaluating the expression.

The solution chosen converts input arrays to "dummy arrays" that only represent the dimensions of the arrays, not the data. Binary operations on dummy arrays check that input array sizes are compatible and return a dummy array with the correct size. Evaluating an expression of dummy arrays traces the changing array sizes through all operations and fails if incompatible array sizes are ever found.

The machinery for this is housed in weave.size_check. It basically involves writing a new class (dummy array) and overloading its math operators to calculate the new sizes correctly. All the code is in Python and there is a fair amount of logic (mainly to handle indexing and slicing), so the operation does impose some overhead. For large arrays (e.g. 50x50x50), the overhead is negligible compared to evaluating the actual expression. For small arrays (e.g. 16x16), the overhead imposed for checking the shapes with this method can cause weave.blitz to be slower than evaluating the expression in Python.

What can be done to reduce the overhead? (1) The size checking code could be moved into C. This would likely remove most of the overhead penalty compared to NumPy (although there is also some calling overhead), but no effort has been made to do this. (2) You can also call weave.blitz with check_size=0 and the size checking isn't done. However, if the sizes aren't compatible, it can cause a core-dump. So, foregoing size checking isn't advisable until your code is well debugged.

Creating the Extension Module

weave.blitz uses the same machinery as weave.inline to build the extension module. The only difference is that the code included in the function is automatically generated from the NumPy array expression instead of supplied by the user.
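The dummy-array idea described under Checking Array Sizes can be sketched in a few lines. This is a hypothetical illustration of the approach, not the real weave.size_check code; shapes are compared with strict equality since weave.blitz does not support broadcasting:

```python
class DummyArray:
    """Carries only a shape; arithmetic just propagates size checks."""

    def __init__(self, shape):
        self.shape = tuple(shape)

    def __add__(self, other):
        # binary ops require identical shapes (no broadcasting)
        if self.shape != other.shape:
            raise ValueError("incompatible shapes: %s vs %s"
                             % (self.shape, other.shape))
        return DummyArray(self.shape)

    __sub__ = __mul__ = __truediv__ = __add__  # same rule for all binary ops

# tracing "a = b + c" with two 5x5 operands succeeds
result = DummyArray((5, 5)) + DummyArray((5, 5))
print(result.shape)  # (5, 5)
```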

1.14.9 Extension Modules

weave.inline and weave.blitz are high level tools that generate extension modules automatically. Under the covers, they use several classes from weave.ext_tools to help generate the extension module. The two main classes are ext_module and ext_function (I'd like to add ext_class and ext_method also). These classes simplify the process of generating extension modules by handling most of the "boiler plate" code automatically.

Note: inline actually sub-classes weave.ext_tools.ext_function to generate slightly different code than the standard ext_function. The main difference is that the standard class converts function arguments to C types, while inline always has two arguments, the local and global dicts, and grabs the variables that need to be converted to C from these.

A Simple Example

The following simple example demonstrates how to build an extension module within a Python function:

# examples/increment_example.py
from weave import ext_tools

def build_increment_ext():
    """ Build a simple extension with functions that increment numbers.
        The extension will be built in the local directory.
    """
    mod = ext_tools.ext_module('increment_ext')

    a = 1  # effectively a type declaration for 'a' in the
           # following functions.

    ext_code = "return_val = Py::new_reference_to(Py::Int(a+1));"
    func = ext_tools.ext_function('increment', ext_code, ['a'])
    mod.add_function(func)

    ext_code = "return_val = Py::new_reference_to(Py::Int(a+2));"
    func = ext_tools.ext_function('increment_by_2', ext_code, ['a'])
    mod.add_function(func)

    mod.compile()

The function build_increment_ext() creates an extension module named increment_ext and compiles it to a shared library (.so or .pyd) that can be loaded into Python. increment_ext contains two functions, increment and increment_by_2. The first line of build_increment_ext(),

mod = ext_tools.ext_module('increment_ext')

creates an ext_module instance that is ready to have ext_function instances added to it. ext_function instances are created with a calling convention similar to weave.inline(). The most common call includes a C/C++ code snippet and a list of the arguments for the function. The following

ext_code = "return_val = Py::new_reference_to(Py::Int(a+1));"
func = ext_tools.ext_function('increment', ext_code, ['a'])

creates a C/C++ extension function that is equivalent to the following Python function:

def increment(a):
    return a + 1
A second method is also added to the module and then,

mod.compile()

is called to build the extension module. By default, the module is created in the current working directory. This example is available in the examples/increment_example.py file found in the weave directory. At the bottom of the file in the module's "main" program, an attempt to import increment_ext without building it is made. If this fails (the module doesn't exist in the PYTHONPATH), the module is built by calling build_increment_ext(). This approach only takes the time consuming (a few seconds for this example) process of building the module if it hasn't been built before.

if __name__ == "__main__":
    try:
        import increment_ext
    except ImportError:
        build_increment_ext()
        import increment_ext
    a = 1
    print 'a, a+1:', a, increment_ext.increment(a)
    print 'a, a+2:', a, increment_ext.increment_by_2(a)

Note: If we were willing to always pay the penalty of building the C++ code for a module, we could store the md5 checksum of the C++ code along with some information about the compiler, platform, etc. Then, ext_module.compile() could try importing the module before it actually compiles it, check the md5 checksum and other meta-data in the imported module with the meta-data of the code it just produced and only compile the code if the module didn’t exist or the meta-data didn’t match. This would reduce the above code to:

    if __name__ == "__main__":
        build_increment_ext()

        a = 1
        print 'a, a+1:', a, increment_ext.increment(a)
        print 'a, a+2:', a, increment_ext.increment_by_2(a)
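The checksum idea sketched in the note above can be illustrated in plain Python with the standard hashlib module. This is only a sketch of the caching logic, not weave's actual API; the helper names source_checksum and needs_rebuild are hypothetical:

```python
import hashlib

def source_checksum(code, compiler="gcc", platform="linux"):
    # Hash the C/C++ source together with build meta-data, so a change
    # in either the code or the toolchain would trigger a rebuild.
    meta = "%s|%s|%s" % (code, compiler, platform)
    return hashlib.md5(meta.encode("utf-8")).hexdigest()

def needs_rebuild(code, stored_checksum, compiler="gcc", platform="linux"):
    # Rebuild only if no checksum was stored, or it no longer matches.
    return stored_checksum != source_checksum(code, compiler, platform)
```

In this scheme, ext_module.compile() would store the checksum as an attribute of the generated module and consult needs_rebuild() before invoking the compiler.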

Note: There would always be the overhead of building the C++ code, but it would only actually compile the code once. You pay a little in overhead and get cleaner “import” code. Needs some thought. If you run increment_example.py from the command line, you get the following:

    [eric@n0]$ python increment_example.py
    a, a+1: 1 2
    a, a+2: 1 3

If the module didn't exist before it was run, the module is created. If it did exist, it is just imported and used.

Fibonacci Example

examples/fibonacci.py provides a slightly more complex example of how to use ext_tools. Fibonacci numbers are a series of numbers where each number in the series is the sum of the previous two: 1, 1, 2, 3, 5, 8, etc. Here, the first two numbers in the series are taken to be 1. One approach to calculating Fibonacci numbers uses recursive function calls. In Python, it might be written as:

    def fib(a):
        if a <= 2:
            return 1
        else:
            return fib(a-2) + fib(a-1)

In C, the same function would look something like this:

    int fib(int a)
    {
        if(a <= 2)
            return 1;
        else
            return fib(a-2) + fib(a-1);
    }

Recursion is much faster in C than in Python, so it would be beneficial to use the C version for Fibonacci number calculations instead of the Python version. To do this, we need an extension function that calls the C function. This is possible by including the above code snippet as "support code" and then calling it from the extension function. Support code snippets (usually structure definitions, helper functions and the like) are inserted into the extension module C/C++ file before the extension function code. Here is how to build the C version of the Fibonacci number generator:

    def build_fibonacci():
        """ Builds an extension module with fibonacci calculators.
        """
        mod = ext_tools.ext_module('fibonacci_ext')
        a = 1  # this is effectively a type declaration

        # recursive fibonacci in C
        fib_code = """
                   int fib1(int a)
                   {
                       if(a <= 2)
                           return 1;
                       else
                           return fib1(a-2) + fib1(a-1);
                   }
                   """
        ext_code = """
                   int val = fib1(a);
                   return_val = Py::new_reference_to(Py::Int(val));
                   """
        fib = ext_tools.ext_function('fib', ext_code, ['a'])
        fib.customize.add_support_code(fib_code)
        mod.add_function(fib)
        mod.compile()

XXX More about custom_info, and what xxx_info instances are good for.

Note: recursion is not the fastest way to calculate Fibonacci numbers, but this approach serves nicely for this example.
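As the note says, recursion is not the fastest approach. For comparison, an iterative version (a sketch, using the same convention as above that the first two numbers in the series are 1) runs in linear time and avoids the exponential blow-up of recursive calls:

```python
def fib_iter(a):
    # Iterative Fibonacci with fib(1) == fib(2) == 1.
    prev, cur = 1, 1
    for _ in range(a - 2):
        prev, cur = cur, prev + cur
    return cur
```

This is the kind of speedup available without leaving Python; the weave version above instead keeps the recursive algorithm but moves it to C.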

**1.14.10 Customizing Type Conversions – Type Factories**

not written

**1.14.11 Things I wish weave did**

It is possible to get name clashes if you use a variable name that is already defined in a header automatically included (such as stdio.h). For instance, if you try to pass in a variable named stdout, you'll get a cryptic error report because stdio.h also defines the name. weave should probably try to handle this in some way. Other things...


CHAPTER TWO

**API - IMPORTING FROM SCIPY**

In Python the distinction between what is the public API of a library and what are private implementation details is not always clear. Unlike in other languages such as Java, it is possible in Python to access "private" functions or objects. Occasionally this may be convenient, but be aware that if you do so your code may break without warning in future releases. Some widely understood rules for what is and isn't public in Python are:

• Methods / functions / classes and module attributes whose names begin with a leading underscore are private.
• If a class name begins with a leading underscore, none of its members are public, whether or not they begin with a leading underscore.
• If a module name in a package begins with a leading underscore, none of its members are public, whether or not they begin with a leading underscore.
• If a module or package defines __all__, that authoritatively defines the public interface.
• If a module or package doesn't define __all__, then all names that don't start with a leading underscore are public.

Note: Reading the above guidelines one could draw the conclusion that every private module or object starts with an underscore. This is not the case; the presence of underscores does mark something as private, but the absence of underscores does not mark something as public. In Scipy there are modules whose names don't start with an underscore, but that should be considered private. To clarify which modules these are, we define below what the public API is for Scipy, and give some recommendations for how to import modules/functions/objects from Scipy.
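The underscore and __all__ conventions described above can be expressed as a small helper. This is an illustrative sketch of the rules, not part of Python's or Scipy's actual import machinery:

```python
def public_names(namespace):
    # Return the public names of a module-like namespace dict, following
    # the conventions above: honor __all__ if present, otherwise keep
    # every name that does not start with a leading underscore.
    if "__all__" in namespace:
        return list(namespace["__all__"])
    return [name for name in namespace if not name.startswith("_")]
```

Note that `from module import *` applies essentially these rules when deciding which names to import.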

**2.1 Guidelines for importing functions from Scipy**

The scipy namespace itself only contains functions imported from numpy. These functions still exist for backwards compatibility, but should be imported from numpy directly. Everything in the namespaces of scipy submodules is public. In general, it is recommended to import functions from submodule namespaces. For example, the function curve_fit (deﬁned in scipy/optimize/minpack.py) should be imported like this:

    from scipy import optimize
    result = optimize.curve_fit(...)

This form of importing submodules is preferred for all submodules except scipy.io (because io is also the name of a module in the Python stdlib):


    from scipy import interpolate
    from scipy import integrate
    import scipy.io as spio

In some cases, the public API is one level deeper. For example the scipy.sparse.linalg module is public, and the functions it contains are not available in the scipy.sparse namespace. Sometimes it may result in more easily understandable code if functions are imported from one level deeper. For example, in the following it is immediately clear that lomax is a distribution if the second form is chosen:

    # first form
    from scipy import stats
    stats.lomax(...)

    # second form
    from scipy.stats import distributions
    distributions.lomax(...)

In that case the second form can be chosen, but only if the next section documents the submodule in question as public.

2.2 API deﬁnition

Every submodule listed below is public. That means that these submodules are unlikely to be renamed or changed in an incompatible way, and if that is necessary a deprecation warning will be raised for one Scipy release before the change is made.

• scipy.cluster
  – vq
  – hierarchy
• scipy.constants
• scipy.fftpack
• scipy.integrate
• scipy.interpolate
• scipy.io
  – arff
  – harwell_boeing
  – idl
  – matlab
  – netcdf
  – wavfile
• scipy.linalg
• scipy.maxentropy
• scipy.misc
• scipy.ndimage
• scipy.odr


• scipy.optimize
• scipy.signal
• scipy.sparse
  – linalg
• scipy.spatial
  – distance
• scipy.special
• scipy.stats
  – distributions
  – mstats
• scipy.weave


CHAPTER THREE

RELEASE NOTES

3.1 SciPy 0.10.0 Release Notes

Note: Scipy 0.10.0 is not released yet!

Contents
• SciPy 0.10.0 Release Notes
  – New features
    * Bento: new optional build system
    * Generalized and shift-invert eigenvalue problems in scipy.sparse.linalg
    * Discrete-Time Linear Systems (scipy.signal)
    * Enhancements to scipy.signal
    * Additional decomposition options (scipy.linalg)
    * Additional special matrices (scipy.linalg)
    * Enhancements to scipy.stats
    * Basic support for Harwell-Boeing file format for sparse matrices
  – Deprecated features
    * scipy.maxentropy
    * scipy.lib.blas
    * Numscons build system
  – Backwards-incompatible changes
  – Other changes
  – Authors

SciPy 0.10.0 is the culmination of 8 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a limited number of deprecations and backwards-incompatible changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.10.x branch, and on adding new features on the development master branch.

Release highlights:

• Support for Bento as optional build system.
• Support for generalized eigenvalue problems, and all shift-invert modes available in ARPACK.

This release requires Python 2.4-2.7 or 3.1- and NumPy 1.5 or greater.


3.1.1 New features

Bento: new optional build system

Scipy can now be built with Bento. Bento has some nice features like parallel builds and partial rebuilds that are not possible with the default build system (distutils). For usage instructions see BENTO_BUILD.txt in the scipy top-level directory.

Currently Scipy has three build systems: distutils, numscons and bento. Numscons is deprecated and will likely be removed in the next release.

Generalized and shift-invert eigenvalue problems in scipy.sparse.linalg

The sparse eigenvalue problem solver functions scipy.sparse.eigs/eigh now support generalized eigenvalue problems, and all shift-invert modes available in ARPACK.

Discrete-Time Linear Systems (scipy.signal)

Support for simulating discrete-time linear systems, including scipy.signal.dlsim, scipy.signal.dimpulse, and scipy.signal.dstep, has been added to SciPy. Conversion of linear systems from continuous-time to discrete-time representations is also present via the scipy.signal.cont2discrete function.

Enhancements to scipy.signal

A Lomb-Scargle periodogram can now be computed with the new function scipy.signal.lombscargle.

The forward-backward filter function scipy.signal.filtfilt can now filter the data in a given axis of an n-dimensional numpy array. (Previously it only handled a 1-dimensional array.) Options have been added to allow more control over how the data is extended before filtering.

FIR filter design with scipy.signal.firwin2 now has options to create filters of type III (zero at zero and Nyquist frequencies) and IV (zero at zero frequency).

Additional decomposition options (scipy.linalg)

A sort keyword has been added to the Schur decomposition routine (scipy.linalg.schur) to allow the sorting of eigenvalues in the resultant Schur form.

Additional special matrices (scipy.linalg)

The functions hilbert and invhilbert were added to scipy.linalg.
Enhancements to scipy.stats

• The one-sided form of Fisher's exact test is now also implemented in stats.fisher_exact.
• The function stats.chi2_contingency for computing the chi-square test of independence of factors in a contingency table has been added, along with the related utility functions stats.contingency.margins and stats.contingency.expected_freq.
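The expected frequencies used by the chi-square test of independence follow directly from the table margins: each expected cell count is the row total times the column total divided by the grand total. A pure-Python sketch of that computation (illustrative only, not the scipy implementation):

```python
def expected_freq(table):
    # Expected cell counts under independence for a 2D contingency
    # table: E[i][j] = row_total[i] * col_total[j] / grand_total.
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = float(sum(row_totals))
    return [[r * c / grand for c in col_totals] for r in row_totals]
```

The chi-square statistic is then the sum over cells of (observed - expected)^2 / expected.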


Basic support for Harwell-Boeing file format for sparse matrices

Both read and write are supported through a simple function-based API, as well as a more complete API to control the number format. The functions may be found in scipy.sparse.io.

The following features are supported:

• Read and write sparse matrices in the CSC format
• Only real, symmetric, assembled matrices are supported (RUA format)

3.1.2 Deprecated features

scipy.maxentropy

The maxentropy module is unmaintained, rarely used and has not been functioning well for several releases. Therefore it has been deprecated for this release, and will be removed for scipy 0.11. Logistic regression in scikits.learn is a good alternative for this functionality. The scipy.maxentropy.logsumexp function has been moved to scipy.misc.

scipy.lib.blas

There are similar BLAS wrappers in scipy.linalg and scipy.lib. These have now been consolidated as scipy.linalg.blas, and scipy.lib.blas is deprecated.

Numscons build system

The numscons build system is being replaced by Bento, and will be removed in one of the next scipy releases.
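For reference, the logsumexp function mentioned above computes log(sum(exp(x))) while avoiding overflow by factoring out the maximum element. A minimal pure-Python sketch of the idea (the scipy version operates on ndarrays and is vectorized):

```python
import math

def logsumexp(xs):
    # Numerically stable log(sum(exp(x) for x in xs)): exp() of the
    # shifted values never exceeds 1, so large inputs cannot overflow.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))
```

The naive formulation math.log(sum(math.exp(x) for x in xs)) would overflow for inputs as small as x = 1000.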

**3.1.3 Backwards-incompatible changes**

The deprecated name invnorm was removed from scipy.stats.distributions; this distribution is now available as invgauss. The following deprecated nonlinear solvers from scipy.optimize have been removed:

• broyden_modified (bad performance)
• broyden1_modified (bad performance)
• broyden_generalized (equivalent to anderson)
• anderson2 (equivalent to anderson)
• broyden3 (obsoleted by new limited-memory broyden methods)
• vackar (renamed to diagbroyden)

3.1.4 Other changes

scipy.constants has been updated with the CODATA 2010 constants. __all__ dicts have been added to all modules, which has cleaned up the namespaces (particularly useful for interactive work). An API section has been added to the documentation, giving recommended import guidelines and specifying which submodules are public and which aren’t.


3.1.5 Authors

This release contains work by the following people (contributed at least one patch to this release, names in alphabetical order):

• Jeff Armstrong +
• Matthew Brett
• Lars Buitinck +
• David Cournapeau
• FI$H 2000 +
• Michael McNeil Forbes +
• Matty G +
• Christoph Gohlke
• Ralf Gommers
• Yaroslav Halchenko
• Charles Harris
• Thouis (Ray) Jones +
• Chris Jordan-Squire +
• Robert Kern
• Chris Lasher +
• Wes McKinney +
• Travis Oliphant
• Fabian Pedregosa
• Josef Perktold
• Thomas Robitaille +
• Pim Schellart +
• Anthony Scopatz +
• Skipper Seabold +
• Fazlul Shahriar +
• David Simcha +
• Scott Sinclair +
• Andrey Smirnov +
• Collin RM Stocks +
• Martin Teichmann +
• Jake Vanderplas +
• Gaël Varoquaux +
• Pauli Virtanen
• Stefan van der Walt


• Warren Weckesser
• Mark Wiebe +

A total of 35 people contributed to this release. People with a "+" by their names contributed a patch for the first time.

3.2 SciPy 0.9.0 Release Notes

Contents
• SciPy 0.9.0 Release Notes
  – Python 3
  – Scipy source code location to be changed
  – New features
    * Delaunay tesselations (scipy.spatial)
    * N-dimensional interpolation (scipy.interpolate)
    * Nonlinear equation solvers (scipy.optimize)
    * New linear algebra routines (scipy.linalg)
    * Improved FIR filter design functions (scipy.signal)
    * Improved statistical tests (scipy.stats)
  – Deprecated features
    * Obsolete nonlinear solvers (in scipy.optimize)
  – Removed features
    * Old correlate/convolve behavior (in scipy.signal)
    * scipy.stats
    * scipy.sparse
    * scipy.sparse.linalg.arpack.speigs
  – Other changes
    * ARPACK interface changes

SciPy 0.9.0 is the culmination of 6 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.9.x branch, and on adding new features on the development trunk.

This release requires Python 2.4 - 2.7 or 3.1 - and NumPy 1.5 or greater.

Please note that SciPy is still considered to have "Beta" status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have "Beta" status, we are committed to making them as bug-free as possible.

However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function's call signatures.

3.2.1 Python 3

Scipy 0.9.0 is the first SciPy release to support Python 3. The only module that is not yet ported is scipy.weave.

3.2.2 Scipy source code location to be changed

Soon after this release, Scipy will stop using SVN as the version control system, and move to Git. The development source code for Scipy can from then on be found at http://github.com/scipy/scipy

3.2.3 New features

Delaunay tesselations (scipy.spatial)

Scipy now includes routines for computing Delaunay tesselations in N dimensions, powered by the Qhull computational geometry library. Such calculations can now make use of the new scipy.spatial.Delaunay interface.

N-dimensional interpolation (scipy.interpolate)

Support for scattered data interpolation is now significantly improved. This version includes a scipy.interpolate.griddata function that can perform linear and nearest-neighbour interpolation for N-dimensional scattered data, in addition to cubic spline (C1-smooth) interpolation in 2D and 1D. An object-oriented interface to each interpolator type is also available.

Nonlinear equation solvers (scipy.optimize)

Scipy includes new routines for large-scale nonlinear equation solving in scipy.optimize. The following methods are implemented:

• Newton-Krylov (scipy.optimize.newton_krylov)
• (Generalized) secant methods:
  – Limited-memory Broyden methods (scipy.optimize.broyden1, scipy.optimize.broyden2)
  – Anderson method (scipy.optimize.anderson)
• Simple iterations (scipy.optimize.diagbroyden, scipy.optimize.excitingmixing, scipy.optimize.linearmixing)

The scipy.optimize.nonlin module was completely rewritten, and some of the functions were deprecated (see above).

New linear algebra routines (scipy.linalg)

Scipy now contains routines for effectively solving triangular equation systems (scipy.linalg.solve_triangular).

Improved FIR filter design functions (scipy.signal)

The function scipy.signal.firwin was enhanced to allow the design of highpass, bandpass, bandstop and multi-band FIR filters.

The function scipy.signal.firwin2 was added. This function uses the window method to create a linear phase FIR filter with an arbitrary frequency response.

The functions scipy.signal.kaiser_atten and scipy.signal.kaiser_beta were added.

Improved statistical tests (scipy.stats)

A new function scipy.stats.fisher_exact was added, that provides Fisher's exact test for 2x2 contingency tables.

The function scipy.stats.kendalltau was rewritten to make it much faster (O(n log(n)) vs O(n^2)).

3.2.4 Deprecated features

Obsolete nonlinear solvers (in scipy.optimize)

The following nonlinear solvers from scipy.optimize are deprecated:

• broyden_modified (bad performance)
• broyden1_modified (bad performance)
• broyden_generalized (equivalent to anderson)
• anderson2 (equivalent to anderson)
• broyden3 (obsoleted by new limited-memory broyden methods)
• vackar (renamed to diagbroyden)

3.2.5 Removed features

The deprecated modules helpmod, pexec and ppimport were removed from scipy.misc.

The output_type keyword in many scipy.ndimage interpolation functions has been removed.

The econ keyword in scipy.linalg.qr has been removed. The same functionality is still available by specifying mode='economic'.

Old correlate/convolve behavior (in scipy.signal)

The old behavior for scipy.signal.convolve, scipy.signal.convolve2d, scipy.signal.correlate and scipy.signal.correlate2d was deprecated in 0.8.0 and has now been removed. Convolve and correlate used to swap their arguments if the second argument has dimensions larger than the first one, and the mode was relative to the input with the largest dimension. The current behavior is to never swap the inputs, which is what most people expect, and is how correlation is usually defined.

scipy.stats

Many functions in scipy.stats that are either available from numpy or have been superseded, and have been deprecated since version 0.7, were removed: std, var, mean, median, cov, corrcoef, z, zs, stderr, samplestd, samplevar, pdfapprox, pdf_moments and erfc. These changes are mirrored in scipy.stats.mstats.

scipy.sparse

Several methods of the sparse matrix classes in scipy.sparse which had been deprecated since version 0.7 were removed: save, rowcol, getdata, listprint, ensure_sorted_indices, matvec, matmat and rmatvec.

The functions spkron, speye, spidentity, lil_eye and lil_diags were removed from scipy.sparse. The first three functions are still available as scipy.sparse.kron, scipy.sparse.eye and scipy.sparse.identity.

The dims and nzmax keywords were removed from the sparse matrix constructor. The colind and rowind attributes were removed from CSR and CSC matrices respectively.

scipy.sparse.linalg.arpack.speigs

A duplicated interface to the ARPACK library was removed.

3.2.6 Other changes

ARPACK interface changes

The interface to the ARPACK eigenvalue routines in scipy.sparse.linalg was changed for more robustness.

The eigenvalue and SVD routines now raise ArpackNoConvergence if the eigenvalue iteration fails to converge. If partially converged results are desired, they can be accessed as follows:

    import numpy as np
    from scipy.sparse.linalg import eigs, ArpackNoConvergence

    m = np.random.randn(30, 30)
    try:
        w, v = eigs(m, 6)
    except ArpackNoConvergence, err:
        partially_converged_w = err.eigenvalues
        partially_converged_v = err.eigenvectors

Several bugs were also fixed.

The routines were moreover renamed as follows:

• eigen –> eigs
• eigen_symmetric –> eigsh
• svd –> svds

3.3 SciPy 0.8.0 Release Notes

Contents
• SciPy 0.8.0 Release Notes
  – Python 3
  – Major documentation improvements
  – Deprecated features
    * Swapping inputs for correlation functions (scipy.signal)
    * Obsolete code deprecated (scipy.misc)
    * Additional deprecations
  – New features
    * DCT support (scipy.fftpack)
    * Single precision support for fft functions (scipy.fftpack)
    * Correlation functions now implement the usual definition (scipy.signal)
    * Additions and modification to LTI functions (scipy.signal)
    * Improved waveform generators (scipy.signal)
    * New functions and other changes in scipy.linalg
    * New function and changes in scipy.optimize
    * New sparse least squares solver
    * ARPACK-based sparse SVD
    * Alternative behavior available for scipy.constants.find
    * Incomplete sparse LU decompositions
    * Faster matlab file reader and default behavior change
    * Faster evaluation of orthogonal polynomials
    * Lambert W function
    * Improved hypergeometric 2F1 function
    * More flexible interface for Radial basis function interpolation
  – Removed features
    * scipy.io

SciPy 0.8.0 is the culmination of 17 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.8.x branch, and on adding new features on the development trunk. This release requires Python 2.4 - 2.6 and NumPy 1.4.1 or greater.

Please note that SciPy is still considered to have "Beta" status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have "Beta" status, we are committed to making them as bug-free as possible.

However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function's call signatures.

3.3.1 Python 3

Python 3 compatibility is planned and is currently technically feasible, since Numpy has been ported. However, since the Python 3 compatible Numpy 1.5 has not been released yet, support for Python 3 in Scipy is not yet included in Scipy 0.8.0. SciPy 0.9, planned for fall 2010, will very likely include experimental support for Python 3.

3.3.2 Major documentation improvements

SciPy documentation is greatly improved.

3.3.3 Deprecated features

Swapping inputs for correlation functions (scipy.signal)

Concern correlate, correlate2d, convolve and convolve2d. If the second input is larger than the first input, the inputs are swapped before calling the underlying computation routine. This behavior is deprecated, and will be removed in scipy 0.9.0.

Obsolete code deprecated (scipy.misc)

The modules helpmod, ppimport and pexec from scipy.misc are deprecated. They will be removed from SciPy in version 0.9.

Additional deprecations

• linalg: The function solveh_banded currently returns a tuple containing the Cholesky factorization and the solution to the linear system. In SciPy 0.9, the return value will be just the solution.
• The function constants.codata.find will generate a DeprecationWarning. In Scipy version 0.8.0, the keyword argument 'disp' was added to the function, with the default value 'True'. In 0.9.0, the default will be 'False'.
• The qshape keyword argument of signal.chirp is deprecated. Use the argument vertex_zero instead.
• Passing the coefficients of a polynomial as the argument f0 to signal.chirp is deprecated. Use the function signal.sweep_poly instead.
• The io.recaster module has been deprecated and will be removed in 0.9.0.

3.3.4 New features

DCT support (scipy.fftpack)

New realtransforms have been added, namely dct and idct for Discrete Cosine Transform; type I, II and III are available.

Single precision support for fft functions (scipy.fftpack)

fft functions can now handle single precision inputs as well: fft(x) will return a single precision array if x is single precision. At the moment, for FFT sizes that are not composites of 2, 3, and 5, the transform is computed internally in double precision to avoid rounding error in FFTPACK.

Correlation functions now implement the usual definition (scipy.signal)

The outputs should now correspond to their matlab and R counterparts, and do what most people expect if the old_behavior=False argument is passed:

• correlate, convolve and their 2d counterparts do not swap their inputs depending on their relative shape anymore;
• correlation functions now conjugate their second argument while computing the slided sum-products, which correspond to the usual definition of correlation.

Additions and modification to LTI functions (scipy.signal)

• The functions impulse2 and step2 were added to scipy.signal. They use the function scipy.signal.lsim2 to compute the impulse and step response of a system, respectively.
• The function scipy.signal.lsim2 was changed to pass any additional keyword arguments to the ODE solver.

Improved waveform generators (scipy.signal)

Several improvements to the chirp function in scipy.signal were made:

• The waveform generated when method="logarithmic" was corrected; it now generates a waveform that is also known as an "exponential" or "geometric" chirp. (See http://en.wikipedia.org/wiki/Chirp.)
• A new chirp method, "hyperbolic", was added.
• Instead of the keyword qshape, chirp now uses the keyword vertex_zero, a boolean.
• chirp no longer handles an arbitrary polynomial. This functionality has been moved to a new function, sweep_poly.

A new function, sweep_poly, was added.

New functions and other changes in scipy.linalg

The functions cho_solve_banded, circulant, companion, hadamard and leslie were added to scipy.linalg.

The function block_diag was enhanced to accept scalar and 1D arguments, along with the usual 2D arguments.

New function and changes in scipy.optimize

The curve_fit function has been added; it takes a function and uses non-linear least squares to fit that to the provided data.

The leastsq and fsolve functions now return an array of size one instead of a scalar when solving for a single parameter.

New sparse least squares solver

The lsqr function was added to scipy.sparse.linalg. This routine finds a least-squares solution to a large, sparse, linear system of equations.

ARPACK-based sparse SVD

A naive implementation of SVD for sparse matrices is available in scipy.sparse.linalg.eigen.arpack. It is based on using a symmetric solver on <A, A>, and as such may not be very precise.

Alternative behavior available for scipy.constants.find

The keyword argument disp was added to the function scipy.constants.find, with the default value True. When disp is True, the behavior is the same as in Scipy version 0.7. When False, the function returns the list of keys instead of printing them. (In SciPy version 0.9, the default will be reversed.)

Incomplete sparse LU decompositions

Scipy now wraps SuperLU version 4.0, which supports incomplete sparse LU decompositions. These can be accessed via scipy.sparse.linalg.spilu. Upgrade to SuperLU 4.0 also fixes some known bugs.

Faster matlab file reader and default behavior change

We've rewritten the matlab file reader in Cython and it should now read matlab files at around the same speed that Matlab does.

The reader reads matlab named and anonymous functions, but it can't write them.

Until scipy 0.8.0 we have returned arrays of matlab structs as numpy object arrays, where the objects have attributes named for the struct fields. As of 0.8.0, we return matlab structs as numpy structured arrays. You can get the older behavior by using the optional struct_as_record=False keyword argument to scipy.io.loadmat and friends.

There is an inconsistency in the matlab file writer, in that it writes numpy 1D arrays as column vectors in matlab 5 files, and row vectors in matlab 4 files. We will change this in the next version, so that both write row vectors. There is a FutureWarning when calling the writer to warn of this change; for now we suggest using the oned_as='row' keyword argument to scipy.io.savemat and friends.

Faster evaluation of orthogonal polynomials

Values of orthogonal polynomials can be evaluated with new vectorized functions in scipy.special: eval_legendre, eval_chebyt, eval_chebyu, eval_chebyc, eval_chebys, eval_jacobi, eval_laguerre, eval_genlaguerre, eval_hermite, eval_hermitenorm, eval_gegenbauer, eval_sh_legendre, eval_sh_chebyt, eval_sh_chebyu, eval_sh_jacobi. This is faster than constructing the full coefficient representation of the polynomials, which was previously the only available way.

Note that the previous orthogonal polynomial routines will now also invoke this feature, when possible.

Lambert W function

scipy.special.lambertw can now be used for evaluating the Lambert W function.

Improved hypergeometric 2F1 function

Implementation of scipy.special.hyp2f1 for real parameters was revised. The new version should produce accurate values for all real parameters.

More flexible interface for Radial basis function interpolation

The scipy.interpolate.Rbf class now accepts a callable as input for the "function" argument, in addition to the built-in radial basis functions which can be selected with a string argument.

3. The only change is that all C sources from Cython code have been regenerated with Cython 0.1 is a bug-ﬁx release with no new features compared to 0.1 Release Notes Contents • SciPy 0. load.7.7. etc.stsci: the package was removed The module scipy. 3. save. objload.4 SciPy 0.).1 Release Notes – scipy.0rc1 3. Bugs ﬁxed: • Several ﬁxes in Matlab ﬁle IO Bugs ﬁxed: • Work around a failure with Python 2. read_array. since basic array reading and writing capability is now handled by NumPy. create_module.misc. NumPy will be where basic code for reading and writing NumPy arrays is located. write_array.special – scipy. as well as support for array object Bugs ﬁxed: 3.7.SciPy Reference Guide.1. This ﬁxes the incompatibility between binaries of SciPy 0. unpackbits.2 Release Notes Contents • SciPy 0. fopen. bswap.10.2 Release Notes 151 .2 is a bug-ﬁx release with no new features compared to 0.7. create_shelf. video. Release 0.6 – Universal build for scipy SciPy 0.5 SciPy 0. Others have been moved from SciPy to NumPy.1 and NumPy 1. memory-mapping capabilities. audio. fread. Several functions in scipy. Some of these functions have been replaced by NumPy’s raw reading and writing capabilities. SciPy 0.7.7.odr – scipy. fwrite.7.7. matlab.signal – scipy. packbits.io – scipy.0 release including: npﬁle. objsave.limits was removed.sparse – scipy. while SciPy will house ﬁle readers and writers for various data formats (data.12.io are removed in the 0.3. images. or array methods.0.2 Release Notes SciPy 0.6 Memory leak in lﬁlter have been ﬁxed.1.7. and convert_objectarray.stats – Windows binaries for python 2.8.5 Removed features scipy.7.4. The IO code in both NumPy and SciPy is being extensively reworked.4.

Bugs fixed:
• #880, #925: lfilter fixes
• #871: bicgstab fails on Win32

scipy.special

Several bugs of varying severity were fixed in the special functions:
• #503, #640: iv: problems at large arguments fixed by new implementation
• #623: jv: fix errors at large arguments
• #679: struve: fix wrong output for v < 0
• #803: pbdv produces invalid output
• #804: lqmn: fix crashes on some input
• #823: betainc: fix documentation
• #834: exp1 strange behavior near negative integer values
• #852: jn_zeros: more accurate results for large s, also in jnp/yn/ynp_zeros
• #853: jv, yv, iv: invalid results for non-integer v < 0, complex x
• #854: jv, yv, iv, kv, kve: with real-valued input, return nan for out-of-domain instead of returning only the real part of the result
• ive, jve, yve, kv, kve: return nan more consistently when out-of-domain
• #927: ellipj: fix segfault on Windows
• #946: ellpj: fix segfault on Mac OS X/python 2.6

Also, when scipy.special.errprint(1) has been enabled, warning messages are now issued as Python warnings instead of printing them to stderr.

scipy.sparse

Bugs fixed:
• #883: scipy.io.mmread with scipy.sparse.lil_matrix broken
• lil_matrix and csc_matrix now reject unexpected sequences, cf. http://thread.gmane.org/gmane.comp.python.scientific.user/19996

scipy.stats

• linregress, mannwhitneyu, describe: errors fixed
• kstwobign, norm, expon, exponweib, exponpow, frechet, genexpon, rdist, truncexpon, planck: improvements to numerical accuracy in distributions

Windows binaries for python 2.6

python 2.6 binaries for windows are now included. The binary for python 2.5 requires numpy 1.2.0 or above, and the one for python 2.6 requires numpy 1.3.0 or above.

Universal build for scipy

Mac OS X binary installer is now a proper universal build, and does not depend on gfortran anymore (libgfortran is statically linked). The python 2.5 version of scipy requires numpy 1.2.0 or above, the python 2.6 version requires numpy 1.3.0 or above.

3.6 SciPy 0.7.0 Release Notes

Contents

• SciPy 0.7.0 Release Notes
  – Python 2.6 and 3.0
  – Major documentation improvements
  – Running Tests
  – Building SciPy
  – Sandbox Removed
  – Sparse Matrices
  – Statistics package
  – Reworking of IO package
  – New Hierarchical Clustering module
  – New Spatial package
  – Reworked fftpack package
  – New Constants package
  – New Radial Basis Function module
  – New complex ODE integrator
  – New generalized symmetric and hermitian eigenvalue problem solver
  – Bug fixes in the interpolation package
  – Weave clean up
  – Known problems

SciPy 0.7.0 is the culmination of 16 months of hard work. It contains many new features, numerous bug-fixes, improved test coverage and better documentation. There have been a number of deprecations and API changes in this release, which are documented below. All users are encouraged to upgrade to this release, as there are a large number of bug-fixes and optimizations. Moreover, our development attention will now shift to bug-fix releases on the 0.7.x branch, and on adding new features on the development trunk. This release requires Python 2.4 or 2.5 and NumPy 1.2 or greater.

Please note that SciPy is still considered to have "Beta" status, as we work toward a SciPy 1.0.0 release. The 1.0.0 release will mark a major milestone in the development of SciPy, after which changing the package structure or API will be much more difficult. Whilst these pre-1.0 releases are considered to have "Beta" status, we are committed to making them as bug-free as possible. For example, in addition to fixing numerous bugs in this release, we have also doubled the number of unit tests since the last release.

However, until the 1.0 release, we are aggressively reviewing and refining the functionality, organization, and interface. This is being done in an effort to make the package as coherent, intuitive, and useful as possible. To achieve this, we need help from the community of users. Specifically, we need feedback regarding all aspects of the project - everything - from which algorithms we implement, to details about our function's call signatures.

Over the last year, we have seen a rapid increase in community involvement, and numerous infrastructure improvements to lower the barrier to contributions (e.g., more explicit coding standards, improved testing infrastructure, better documentation tools). Over the next year, we hope to see this trend continue and invite everyone to become more involved.

3.6.1 Python 2.6 and 3.0

A significant amount of work has gone into making SciPy compatible with Python 2.6; however, there are still some issues in this regard. The main issue with 2.6 support is NumPy. On UNIX (including Mac OS X), NumPy 1.2.1 mostly works, with a few caveats. On Windows, there are problems related to the compilation process. The upcoming NumPy 1.3 release will fix these problems.

Any remaining issues with 2.6 support for SciPy 0.7 will be addressed in a bug-fix release.

Python 3.0 is not supported at all; it requires NumPy to be ported to Python 3.0. This requires immense effort, since a lot of C code has to be ported. The transition to 3.0 is still under consideration; currently, we don't have any timeline or roadmap for this transition.

3.6.2 Major documentation improvements

SciPy documentation is greatly improved; you can view a HTML reference manual online or download it as a PDF file. The new reference guide was built using the popular Sphinx tool.

This release also includes an updated tutorial, which hadn't been available since SciPy was ported to NumPy in 2005. Though not comprehensive, the tutorial shows how to use several essential parts of Scipy. It also includes the ndimage documentation from the numarray manual.

Nevertheless, more effort is needed on the documentation front. Luckily, contributing to Scipy documentation is now easier than before: if you find that a part of it requires improvements, and want to help us out, please register a user name in our web-based documentation editor at http://docs.scipy.org/ and correct the issues.

3.6.3 Running Tests

NumPy 1.2 introduced a new testing framework based on nose. Starting with this release, SciPy now uses the new NumPy test framework as well. Taking advantage of the new testing framework requires nose version 0.10, or later. One major advantage of the new framework is that it greatly simplifies writing unit tests - which has already paid off, given the rapid increase in tests. To run the full test suite:

>>> import scipy
>>> scipy.test('full')

For more information, please see The NumPy/SciPy Testing Guide.

We have also greatly improved our test coverage. There were just over 2,000 unit tests in the 0.6.0 release; this release nearly doubles that number, with just over 4,000 unit tests.

3.6.4 Building SciPy

Support for NumScons has been added. NumScons is a tentative new build system for NumPy/SciPy, using SCons at its core.

SCons is a next-generation build system, intended to replace the venerable Make with the integrated functionality of autoconf/automake and ccache. Scons is written in Python and its configuration files are Python scripts. NumScons is meant to replace NumPy's custom version of distutils providing more advanced functionality, such as autoconf, improved fortran support, more tools, and support for numpy.distutils/scons cooperation.

3.6.5 Sandbox Removed

While porting SciPy to NumPy in 2005, several packages and modules were moved into scipy.sandbox. The sandbox was a staging ground for packages that were undergoing rapid development and whose APIs were in flux. It was also a place where broken code could live. The sandbox has served its purpose well, but was starting to create confusion. Thus scipy.sandbox was removed. Most of the code was moved into scipy, some code was made into a scikit, and the remaining code was just deleted, as the functionality had been replaced by other code.

3.6.6 Sparse Matrices

Sparse matrices have seen extensive improvements. There is now support for integer dtypes such int8, uint32, etc.

Two new sparse formats were added:
• new class dia_matrix : the sparse DIAgonal format
• new class bsr_matrix : the Block CSR format

Several new sparse matrix construction functions were added:
• sparse.kron : sparse Kronecker product
• sparse.bmat : sparse version of numpy.bmat
• sparse.vstack : sparse version of numpy.vstack
• sparse.hstack : sparse version of numpy.hstack

Extraction of submatrices and nonzero values have been added:
• sparse.tril : extract lower triangle
• sparse.triu : extract upper triangle
• sparse.find : nonzero values and their indices

csr_matrix and csc_matrix now support slicing and fancy indexing (e.g. A[1:3, 4:7] and A[[3,2,6,8],:]).

Conversions among all sparse formats are now possible:
• using member functions such as .tocsr() and .tolil()
• using the .asformat() member function, e.g. A.asformat('csr')
• using constructors A = lil_matrix([[1,2]]); B = csr_matrix(A)

All sparse constructors now accept dense matrices and lists of lists. For example:
• A = csr_matrix( rand(3,3) ) and B = lil_matrix( [[1,2],[3,4]] )

The handling of diagonals in the spdiags function has been changed. It now agrees with the MATLAB(TM) function of the same name.

Numerous efficiency improvements to format conversions and sparse matrix arithmetic have been made. Finally, this release contains numerous bugfixes.

3.6.7 Statistics package

Statistical functions for masked arrays have been added, and are accessible through scipy.stats.mstats. The functions are similar to their counterparts in scipy.stats but they have not yet been verified for identical interfaces and algorithms.

Several bugs were fixed for statistical functions; of those, kstest and percentileofscore gained new keyword arguments.

Added deprecation warning for mean, median, var, std, cov, and corrcoef. These functions should be replaced by their numpy counterparts. Note, however, that some of the default options differ between the scipy.stats and numpy versions of these functions.

Numerous bug fixes to stats.distributions: all generic methods now work correctly, and several methods in individual distributions were corrected. However, a few issues remain with higher moments (skew, kurtosis) and entropy. The maximum likelihood estimator, fit, does not work out-of-the-box for some distributions - in some cases, starting values have to be carefully chosen; in other cases, the generic implementation of the maximum likelihood method might not be the numerically appropriate estimation method.
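The fitting caveat above can be made concrete with a small sketch. The gamma distribution, the synthetic sample, and the starting shape guess of 2.0 are illustrative assumptions, not part of the release notes:

```python
import numpy as np
from scipy import stats

# Synthetic sample from a gamma distribution with known shape 2.0.
np.random.seed(0)
data = stats.gamma.rvs(2.0, scale=1.5, size=1000)

# fit() returns maximum likelihood estimates.  Supplying a starting
# shape guess (2.0) and fixing the location at 0 (floc=0) keeps the
# optimizer well-behaved, as the notes above recommend.
shape, loc, scale = stats.gamma.fit(data, 2.0, floc=0)
assert 1.0 < shape < 3.0 and loc == 0 and scale > 0
```

Without the fixed location and a sensible starting value, the generic optimizer can wander for distributions whose likelihood surface is flat or multimodal.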

We expect more bugfixes, increases in numerical precision and enhancements in the next release of scipy.

3.6.8 Reworking of IO package

The IO code in both NumPy and SciPy is being extensively reworked. NumPy will be where basic code for reading and writing NumPy arrays is located, while SciPy will house file readers and writers for various data formats (data, audio, video, images, matlab, etc.).

Several functions in scipy.io have been deprecated and will be removed in the 0.8.0 release, including npfile, save, load, create_module, create_shelf, objload, objsave, fopen, read_array, write_array, fread, fwrite, bswap, packbits, unpackbits, and convert_objectarray. Some of these functions have been replaced by NumPy's raw reading and writing capabilities, memory-mapping capabilities, or array methods, since basic array reading and writing capability is now handled by NumPy. Others have been moved from SciPy to NumPy.

The Matlab (TM) file readers/writers have a number of improvements:
• default version 5
• v5 writers for structures, cell arrays, and objects
• v5 readers/writers for function handles and 64-bit integers
• new struct_as_record keyword argument to loadmat, which loads struct arrays in matlab as record arrays in numpy
• string arrays have dtype='U...' instead of dtype=object
• loadmat no longer squeezes singleton dimensions, i.e. squeeze_me=False by default

3.6.9 New Hierarchical Clustering module

This module adds new hierarchical clustering functionality to the scipy.cluster package. The function interfaces are similar to the functions provided by MATLAB(TM)'s Statistics Toolbox to help facilitate easier migration to the NumPy/SciPy framework. Linkage methods implemented include single, complete, average, weighted, centroid, median, and ward.

In addition, several functions are provided for computing inconsistency statistics, cophenetic distance, and maximum distance between descendants. The fcluster and fclusterdata functions transform a hierarchical clustering into a set of flat clusters. Since these flat clusters are generated by cutting the tree into a forest of trees, the leaders function takes a linkage and a flat clustering, and finds the root of each tree in the forest. The ClusterNode class represents a hierarchical clustering as a field-navigable tree object. to_tree converts a matrix-encoded hierarchical clustering to a ClusterNode object. Routines for converting between MATLAB and SciPy linkage encodings are provided. Finally, a dendrogram function plots hierarchical clusterings as a dendrogram, using matplotlib.

3.6.10 New Spatial package

The new spatial package contains a collection of spatial algorithms and data structures, useful for spatial statistics and clustering applications. It includes rapidly compiled code for computing exact and approximate nearest neighbors, as well as a pure-python kd-tree with the same interface, but that supports annotation and a variety of other algorithms. The API for both modules may change somewhat, as user requirements become clearer.

It also includes a distance module, containing a collection of distance and dissimilarity functions for computing distances between vectors, which is useful for spatial statistics, clustering, and kd-trees. Distance and dissimilarity functions provided include Bray-Curtis, Canberra, Chebyshev, City Block, Cosine, Dice, Euclidean, Hamming,

Jaccard, Kulsinski, Mahalanobis, Matching, Minkowski, Rogers-Tanimoto, Russell-Rao, Sokal-Michener, Sokal-Sneath, Squared Euclidean, Standardized Euclidean, and Yule.

The pdist function computes pairwise distance between all unordered pairs of vectors in a set of vectors. The cdist computes the distance on all pairs of vectors in the Cartesian product of two sets of vectors. Pairwise distance matrices are stored in condensed form; only the upper triangular is stored. squareform converts distance matrices between square and condensed forms.

3.6.11 Reworked fftpack package

FFTW2, FFTW3, MKL and DJBFFT wrappers have been removed. Only (NETLIB) fftpack remains. By focusing on one backend, we hope to add new features - like float32 support - more easily.

3.6.12 New Constants package

scipy.constants provides a collection of physical constants and conversion factors. These constants are taken from CODATA Recommended Values of the Fundamental Physical Constants: 2002. They may be found at physics.nist.gov/constants. The values are stored in the dictionary physical_constants as a tuple containing the value, the units, and the relative precision - in that order. All constants are in SI units, unless otherwise stated. Several helper functions are provided.

3.6.13 New Radial Basis Function module

scipy.interpolate now contains a Radial Basis Function module. Radial basis functions can be used for smoothing/interpolating scattered data in n-dimensions, but should be used with caution for extrapolation outside of the observed data range.

3.6.14 New complex ODE integrator

scipy.integrate.ode now contains a wrapper for the ZVODE complex-valued ordinary differential equation solver (by Peter N. Brown, Alan C. Hindmarsh, and George D. Byrne).

3.6.15 New generalized symmetric and hermitian eigenvalue problem solver

scipy.linalg.eigh now contains wrappers for more LAPACK symmetric and hermitian eigenvalue problem solvers. Users can now solve generalized problems, select a range of eigenvalues only, and choose to use a faster algorithm at the expense of increased memory usage. The signature of scipy.linalg.eigh changed accordingly.

3.6.16 Bug fixes in the interpolation package

The shape of return values from scipy.interpolate.interp1d used to be incorrect, if interpolated data had more than 2 dimensions and the axis keyword was set to a non-default value. This has been fixed. Moreover, interp1d now returns a scalar (0D-array) if the input is a scalar. Users of scipy.interpolate.interp1d may need to revise their code if it relies on the previous behavior.

3.6.17 Weave clean up

There were numerous improvements to scipy.weave. blitz++ was relicensed by the author to be compatible with the SciPy license. wx_spec.py was removed.
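A minimal sketch of the generalized symmetric eigenproblem capability described in the eigh section above; the 2-by-2 matrices are made-up examples, not from the release notes:

```python
import numpy as np
from scipy.linalg import eigh

# Generalized symmetric problem A x = w B x, with B symmetric
# positive definite.  eigh accepts B as its second argument.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = np.array([[3.0, 0.0],
              [0.0, 1.0]])
w, v = eigh(A, B)

# Every returned eigenpair satisfies the generalized relation.
for wi, vi in zip(w, v.T):
    assert np.allclose(A @ vi, wi * (B @ vi))
```

The eigenvalues come back in ascending order, and the eigenvectors are normalized with respect to B rather than the identity, which is the LAPACK convention for generalized problems.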

3.6.18 Known problems

Here are known problems with scipy 0.7.0:

• weave test failures on windows: those are known, and are being revised.
• weave test failure with gcc 4.3 (std::labs): this is a gcc 4.3 bug. A workaround is to add #include <cstdlib> in scipy/weave/blitz/blitz/funcs.h (line 27). You can make the change in the installed scipy (in site-packages).
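The physical_constants lookup described in the Constants section of the release notes above can be sketched as follows; the 'speed of light in vacuum' key is a standard CODATA name chosen here for illustration:

```python
from scipy.constants import physical_constants, c

# physical_constants maps a CODATA name to a
# (value, unit, uncertainty) tuple, in that order.
value, unit, precision = physical_constants['speed of light in vacuum']

# The commonly used constants are also exposed as plain attributes
# holding just the SI value.
assert value == c
assert unit == 'm s^-1'
assert precision == 0.0   # exact by definition
```

Keeping the unit and uncertainty alongside the value makes it easy to audit a calculation's dimensions without consulting the NIST tables separately.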

CHAPTER FOUR

REFERENCE

4.1 Clustering package (scipy.cluster)

scipy.cluster.vq

Clustering algorithms are useful in information theory, target detection, communications, compression, and other areas. The vq module only supports vector quantization and the k-means algorithms.

scipy.cluster.hierarchy

The hierarchy module provides functions for hierarchical and agglomerative clustering. Its features include generating hierarchical clusters from distance matrices, computing distance matrices from observation vectors, calculating statistics on clusters, cutting linkages to generate flat clusters, and visualizing clusters with dendrograms.

4.2 K-means clustering and vector quantization (scipy.cluster.vq)

Provides routines for k-means clustering, generating code books from k-means models, and quantizing vectors by comparing them with centroids in a code book.

whiten(obs)                                        Normalize a group of observations on a per feature basis.
vq(obs, code_book)                                 Assign codes from a code book to observations.
kmeans(obs, k_or_guess[, iter, thresh])            Performs k-means on a set of observation vectors forming k clusters.
kmeans2(data, k[, iter, thresh, minit, missing])   Classify a set of observations into k clusters using the k-means algorithm.

scipy.cluster.vq.whiten(obs)
Normalize a group of observations on a per feature basis.

Before running k-means, it is beneficial to rescale each feature dimension of the observation set with whitening. Each feature is divided by its standard deviation across all observations to give it unit variance.

Parameters

obs : ndarray
    Each row of the array is an observation. The columns are the features seen during each observation.

>>> #         f0    f1    f2
>>> obs = [[  1.,   1.,   1.],  #o0
...        [  2.,   2.,   2.],  #o1

...        [  3.,   3.,   3.],  #o2
...        [  4.,   4.,   4.]]  #o3

Returns

result : ndarray
    Contains the values in obs scaled by the standard deviation of each column.

Examples

>>> from numpy import array
>>> from scipy.cluster.vq import whiten
>>> features  = array([[ 1.9, 2.3, 1.7],
...                    [ 1.5, 2.5, 2.2],
...                    [ 0.8, 0.6, 1.7]])
>>> whiten(features)
array([[ 3.41250074,  2.20300046,  5.88897275],
       [ 2.69407953,  2.39456571,  7.62102355],
       [ 1.43684242,  0.57469577,  5.88897275]])

scipy.cluster.vq.vq(obs, code_book)
Assign codes from a code book to observations.

Assigns a code from a code book to each observation. Each observation vector in the 'M' by 'N' obs array is compared with the centroids in the code book and assigned the code of the closest centroid.

The features in obs should have unit variance, which can be achieved by passing them through the whiten function. The code book can be created with the k-means algorithm or a different encoding algorithm.

Parameters

obs : ndarray
    Each row of the 'N' x 'M' array is an observation. The columns are the "features" seen during each observation. The features must be whitened first using the whiten function or something equivalent.
code_book : ndarray
    The code book is usually generated using the k-means algorithm. Each row of the array holds a different code, and the columns are the features of the code.

>>> #              f0    f1    f2   f3
>>> code_book = [
...             [  1.,   2.,   3.,   4.],  #c0
...             [  1.,   2.,   3.,   4.],  #c1
...             [  1.,   2.,   3.,   4.]]  #c2

Returns

code : ndarray
    A length N array holding the code book index for each observation.
dist : ndarray
    The distortion (distance) between the observation and its nearest code.

Notes

This currently forces 32-bit math precision for speed. Anyone know of a situation where this undermines the accuracy of the algorithm?

Examples

>>> from numpy import array
>>> from scipy.cluster.vq import vq
>>> code_book = array([[1.,1.,1.],
...                    [2.,2.,2.]])
>>> features  = array([[ 1.9, 2.3, 1.7],
...                    [ 1.5, 2.5, 2.2],
...                    [ 0.8, 0.6, 1.7]])
>>> vq(features, code_book)
(array([1, 1, 0],'i'), array([ 0.43588989,  0.73484692,  0.83066239]))

scipy.cluster.vq.kmeans(obs, k_or_guess, iter=20, thresh=1.0000000000000001e-05)
Performs k-means on a set of observation vectors forming k clusters.

The k-means algorithm adjusts the centroids until sufficient progress cannot be made, i.e. the change in distortion since the last iteration is less than some threshold. This yields a code book mapping centroids to codes and vice versa.

Distortion is defined as the sum of the squared differences between the observations and the corresponding centroid.

Parameters

obs : ndarray
    Each row of the M by N array is an observation vector. The columns are the features seen during each observation. The features must be whitened first with the whiten function.
k_or_guess : int or ndarray
    The number of centroids to generate. A code is assigned to each centroid, which is also the row index of the centroid in the code_book matrix generated. The initial k centroids are chosen by randomly selecting observations from the observation matrix. Alternatively, passing a k by N array specifies the initial k centroids.
iter : int, optional
    The number of times to run k-means, returning the codebook with the lowest distortion. This argument is ignored if initial centroids are specified with an array for the k_or_guess parameter. This parameter does not represent the number of iterations of the k-means algorithm.
thresh : float, optional
    Terminates the k-means algorithm if the change in distortion since the last k-means iteration is less than or equal to thresh.

Returns

codebook : ndarray
    A k by N array of k centroids. The i'th centroid codebook[i] is represented with the code i. The centroids and codes generated represent the lowest distortion seen, not necessarily the globally minimal distortion.
distortion : float
    The distortion between the observations passed and the centroids generated.

See Also

kmeans2
    a different implementation of k-means clustering with more methods for generating initial centroids but without using a distortion change threshold as a stopping criterion.
whiten
    must be called prior to passing an observation matrix to kmeans.

Examples

>>> from numpy import array
>>> from scipy.cluster.vq import vq, kmeans, whiten
>>> features  = array([[ 1.9, 2.3],
...                    [ 1.5, 2.5],
...                    [ 0.8, 0.6],
...                    [ 0.4, 1.8],
...                    [ 0.1, 0.1],
...                    [ 0.2, 1.8],
...                    [ 2.0, 0.5],
...                    [ 0.3, 1.5],
...                    [ 1.0, 1.0]])
>>> whitened = whiten(features)
>>> book = array((whitened[0], whitened[2]))
>>> kmeans(whitened, book)
(array([[ 2.3110306 ,  2.86287398],
       [ 0.93218041,  1.24398691]]), 0.85684700941625547)

>>> from numpy import random
>>> random.seed((1000, 2000))
>>> codes = 3
>>> kmeans(whitened, codes)
(array([[ 2.3110306 ,  2.86287398],
       [ 1.32544402,  0.65607529],
       [ 0.40782893,  2.02786907]]), 0.5196582527686241)

scipy.cluster.vq.kmeans2(data, k, iter=10, thresh=1.0000000000000001e-05, minit='random', missing='warn')
Classify a set of observations into k clusters using the k-means algorithm.

The algorithm attempts to minimize the Euclidean distance between observations and centroids. Several initialization methods are included.

Parameters

data : ndarray
    A 'M' by 'N' array of 'M' observations in 'N' dimensions or a length 'M' array of 'M' one-dimensional observations.
k : int or ndarray
    The number of clusters to form as well as the number of centroids to generate. If the minit initialization string is 'matrix', or if a ndarray is given instead, it is interpreted as the initial cluster to use instead.
iter : int
    Number of iterations of the k-means algorithm to run. Note that this differs in meaning from the iters parameter to the kmeans function.
thresh : float
    (not used yet)

minit : string
    Method for initialization. Available methods are 'random', 'points', 'uniform', and 'matrix':
    'random': generate k centroids from a Gaussian with mean and variance estimated from the data.
    'points': choose k observations (rows) at random from data for the initial centroids.
    'uniform': generate k observations from the data from a uniform distribution defined by the data set (unsupported).
    'matrix': interpret the k parameter as a k by M (or length k array for one-dimensional data) array of initial centroids.

Returns

centroid : ndarray
    A 'k' by 'N' array of centroids found at the last iteration of k-means.
label : ndarray
    label[i] is the code or index of the centroid the i'th observation is closest to.

4.2.1 Background information

The k-means algorithm takes as input the number of clusters to generate, k, and a set of observation vectors to cluster. It returns a set of centroids, one for each of the k clusters. An observation vector is classified with the cluster number or centroid index of the centroid closest to it.

A vector v belongs to cluster i if it is closer to centroid i than any other centroids. If v belongs to i, we say centroid i is the dominating centroid of v. The k-means algorithm tries to minimize distortion, which is defined as the sum of the squared distances between each observation vector and its dominating centroid. Each step of the k-means algorithm refines the choices of centroids to reduce distortion. The change in distortion is used as a stopping criterion: when the change is lower than a threshold, the k-means algorithm is not making sufficient progress and terminates. One can also define a maximum number of iterations.

Since vector quantization is a natural application for k-means, information theory terminology is often used. The centroid index or cluster index is also referred to as a "code" and the table mapping codes to centroids and vice versa is often referred as a "code book". The result of k-means, a set of centroids, can be used to quantize vectors. Quantization aims to find an encoding of vectors that reduces the expected distortion.

All routines expect obs to be a M by N array where the rows are the observation vectors. The codebook is a k by N array where the i'th row is the centroid of code word i. The observation vectors and centroids have the same feature dimension.

As an example, suppose we wish to compress a 24-bit color image (each pixel is represented by one byte for red, one for blue, and one for green) before sending it over the web. By using a smaller 8-bit encoding, we can reduce the amount of data by two thirds. Ideally, the colors for each of the 256 possible 8-bit encoding values should be chosen to minimize distortion of the color. Running k-means with k=256 generates a code book of 256 codes, which fills up all possible 8-bit sequences. Instead of sending a 3-byte value for each pixel, the 8-bit centroid index (or code word) of the dominating centroid is transmitted. The code book is also sent over the wire so each 8-bit code can be translated back to a 24-bit pixel value representation. If the image of interest was of an ocean, we would expect many 24-bit blues to be represented by 8-bit codes. If it was an image of a human face, more flesh tone colors would be represented in the code book.
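The color-quantization walkthrough above can be sketched in code. The random "pixels" and the reduced code-book size of 8 (instead of 256) are illustrative assumptions to keep the example fast:

```python
import numpy as np
from scipy.cluster.vq import kmeans, vq, whiten

# Stand-in for an image: 500 RGB-like observations with 3 features each.
np.random.seed(0)
pixels = np.random.rand(500, 3)

# Whiten, build a small code book with k-means, then quantize each
# "pixel" to the index of its dominating centroid.
wh = pixels / pixels.std(axis=0)          # equivalent to whiten(pixels)
code_book, distortion = kmeans(wh, 8)
codes, dists = vq(wh, code_book)

# Each observation is now represented by one small integer code.
assert codes.shape == (500,)
assert set(codes) <= set(range(len(code_book)))
```

Transmitting `codes` plus `code_book` instead of the raw rows is exactly the compression scheme sketched in the prose: the per-pixel payload shrinks to a single code-book index.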

4.3 Hierarchical clustering (scipy.cluster.hierarchy)

These functions cut hierarchical clusterings into flat clusterings or find the roots of the forest formed by a cut by providing the flat cluster ids of each observation.

fcluster(Z, t[, criterion, depth, R, monocrit])   Forms flat clusters from the hierarchical clustering defined by the linkage matrix Z.
fclusterdata(X, t[, criterion, metric, ...])      Cluster observation data using a given metric.
leaders(Z, T)                                     (L, M) = leaders(Z, T)

scipy.cluster.hierarchy.fcluster(Z, t, criterion='inconsistent', depth=2, R=None, monocrit=None)
Forms flat clusters from the hierarchical clustering defined by the linkage matrix Z.

Parameters

Z : ndarray
    The hierarchical clustering encoded with the matrix returned by the linkage function.
t : float
    The threshold to apply when forming flat clusters.
criterion : str, optional
    The criterion to use in forming flat clusters. This can be any of the following values:
    'inconsistent': If a cluster node and all its descendants have an inconsistent value less than or equal to t then all its leaf descendants belong to the same flat cluster. When no non-singleton cluster meets this criterion, every node is assigned to its own cluster. (Default)
    'distance': Forms flat clusters so that the original observations in each flat cluster have no greater a cophenetic distance than t.
    'maxclust': Finds a minimum threshold r so that the cophenetic distance between any two original observations in the same flat cluster is no more than r and no more than t flat clusters are formed.
    'monocrit': Forms a flat cluster from a cluster node c with index i when monocrit[j] <= t. For example, to threshold on the maximum mean distance as computed in the inconsistency matrix R with a threshold of 0.8 do:
        MR = maxRstat(Z, R, 3)
        cluster(Z, t=0.8, criterion='monocrit', monocrit=MR)
    'maxclust_monocrit': Forms a flat cluster from a non-singleton cluster node c when monocrit[i] <= r for all cluster indices i below and including c. r is minimized such that no more than t flat clusters are formed. monocrit must be monotonic. For example, to minimize the threshold t on maximum inconsistency values so that no more than 3 flat clusters are formed, do:
        MI = maxinconsts(Z, R)
        cluster(Z, t=3, criterion='maxclust_monocrit', monocrit=MI)
depth : int, optional
    The maximum depth to perform the inconsistency calculation. It has no meaning for the other criteria. Default is 2.
R : ndarray, optional
    The inconsistency matrix to use for the 'inconsistent' criterion. This matrix is computed if not provided.
monocrit : ndarray, optional
    An array of length n-1. monocrit[i] is the statistics upon which non-singleton i is thresholded. The monocrit vector must be monotonic, i.e. given a node c with index i, for all node indices j corresponding to nodes below c,

monocrit[i] >= monocrit[j]. given a node c with index i. criterion : str. depth=2. Parameters X : ndarray n by m data matrix with n observations in m dimensions. T[i] is the ﬂat cluster number to which original observation i belongs. i.0rc1 cluster(Z. optional The inconsistency matrix to use for the ‘inconsistent’ criterion. A one-dimensional array T of length n is returned.hierarchy. T[i] is the index of the ﬂat cluster to which the original observation i belongs. for all node indices j corresponding to nodes below c.3. optional Speciﬁes the criterion for forming ﬂat clusters. This matrix is computed if not provided. weighted.SciPy Reference Guide. optional 4. method : str. t : double.e. t : ﬂoat The threshold to apply when forming ﬂat clusters. Clusters the original observations in the n-by-m data matrix X (n observations in m dimensions). Hierarchical clustering (scipy. criterion=’inconsistent’. method=’single’. See linkage for more information. monocrit=MI) depth : int. It has no meaning for the other criteria. average. t. monocrit : ndarray. See distance. t=3. performs hierarchical clustering using the single linkage algorithm. median centroid. ward). Returns fcluster : ndarray An array of length n.fclusterdata(X. scipy. ‘distance’. complete. optional The distance metric for calculating pairwise distances.pdist for descriptions and linkage to verify compatibility with the linkage method. R : ndarray.hierarchy) 165 . The monocrit vector must be monotonic. Valid values are ‘inconsistent’ (default).10. See fcluster for descriptions. metric : str.cluster. Default is 2. optional An array of length n-1. criterion=’maxclust_monocrit’. monocrit[i] is the statistics upon which non-singleton i is thresholded. or ‘maxclust’ cluster formation algorithms. and forms ﬂat clusters using the inconsistency method with t as the cut-off threshold. optional The maximum depth to perform the inconsistency calculation. Default is “single”. 
using the euclidean distance metric to calculate distances between original observations.cluster. Release 0. metric=’euclidean’. R=None) Cluster observation data using a given metric. optional The linkage method to use (single.
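The two entry points above can be sketched on toy data (the values here are illustrative, not from this guide): two well-separated pairs of points collapse into two flat clusters under the 'distance' criterion.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, fclusterdata

# Four observations in 2-D: two tight pairs, far apart from each other.
X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])

# Two-step route: build the linkage, then cut it at distance t=5.
Z = linkage(X, method='single', metric='euclidean')
T = fcluster(Z, t=5.0, criterion='distance')

# One-step route: fclusterdata wraps pdist + linkage + fcluster.
T2 = fclusterdata(X, t=5.0, criterion='distance', method='single')

# Both routes assign each tight pair to its own flat cluster.
```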

    Returns
        T : ndarray
            A vector of length n. T[i] is the flat cluster number to which original observation i belongs.

    Notes
    This function is similar to the MATLAB function clusterdata.

scipy.cluster.hierarchy.leaders(Z, T)
    Returns the root nodes in a hierarchical clustering corresponding to a cut defined by a flat cluster assignment vector T. See the fcluster function for more information on the format of T.

    For each flat cluster j of the k flat clusters represented in the n-sized flat cluster assignment vector T, this function finds the lowest cluster node i in the linkage tree Z such that:

    • leaf descendents belong only to flat cluster j (i.e. T[p]==j for all p in S(i), where S(i) is the set of leaf ids of leaf nodes descendent with cluster node i)

    • there does not exist a leaf that is not descendent with i that also belongs to cluster j (i.e. T[q]!=j for all q not in S(i)). If this condition is violated, T is not a valid cluster assignment vector, and an exception will be thrown.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as a matrix. See linkage for more information.
        T : ndarray
            The flat cluster assignment vector. See the fcluster function for more information on its format.

    Returns
        A tuple (L, M) = leaders(Z, T) with
        L : ndarray
            The leader linkage node id's stored as a k-element 1D array, where k is the number of flat clusters found in T. L[j]=i is the linkage cluster node id that is the leader of the flat cluster with id M[j]. If i < n, i corresponds to an original observation; otherwise it corresponds to a non-singleton cluster. For example: if L[3]=2 and M[3]=8, the flat cluster with id 8's leader is linkage node 2.
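A short sketch of leaders in action, on hypothetical toy data: with four observations forming two flat clusters, the leaders are the two internal nodes created by the first two merges.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, leaders

X = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
Z = linkage(X, method='single')
T = fcluster(Z, t=5.0, criterion='distance')   # two flat clusters

L, M = leaders(Z, T)
# With n=4 observations, the first merge forms node 4 and the second
# forms node 5; these two internal nodes lead the two flat clusters,
# and M holds the matching flat cluster ids from T.
```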

        M : ndarray
            The flat cluster ids, stored as a k-element 1D array where k is the number of flat clusters found in T. This allows the set of flat cluster ids to be any arbitrary set of k integers.

These are routines for agglomerative clustering:

linkage(y[, method, metric])   Performs hierarchical/agglomerative clustering on the condensed distance matrix y.
single(y)                      Performs single/min/nearest linkage on the condensed distance matrix y.
complete(y)                    Performs complete/max/farthest point linkage on the condensed distance matrix y.
average(y)                     Performs average/UPGMA linkage on the condensed distance matrix y.
weighted(y)                    Performs weighted/WPGMA linkage on the condensed distance matrix y.
centroid(y)                    Performs centroid/UPGMC linkage.
median(y)                      Performs median/WPGMC linkage.
ward(y)                        Performs Ward's linkage on a condensed or redundant distance matrix.

scipy.cluster.hierarchy.linkage(y, method='single', metric='euclidean')
    Performs hierarchical/agglomerative clustering on the condensed distance matrix y. y must be a (n choose 2) sized vector, where n is the number of original observations paired in the distance matrix. The behavior of this function is very similar to the MATLAB linkage function.

    A (n − 1) by 4 matrix Z is returned. At the i-th iteration, clusters with indices Z[i, 0] and Z[i, 1] are combined to form cluster n + i. A cluster with an index less than n corresponds to one of the n original observations. The distance between clusters Z[i, 0] and Z[i, 1] is given by Z[i, 2]. The fourth value Z[i, 3] represents the number of original observations in the newly formed cluster.

    The algorithm begins with a forest of clusters that have yet to be used in the hierarchy being formed. When two clusters s and t from this forest are combined into a single cluster u, s and t are removed from the forest, and u is added to the forest. When only one cluster remains in the forest, the algorithm stops, and this cluster becomes the root.

    A distance matrix is maintained at each iteration. The d[i, j] entry corresponds to the distance between cluster i and j in the original forest. At each iteration, the algorithm must update the distance matrix to reflect the distance of the newly formed cluster u with the remaining clusters in the forest.

    Suppose there are |u| original observations u[0], ..., u[|u| − 1] in cluster u and |v| original objects v[0], ..., v[|v| − 1] in cluster v. Recall s and t are combined to form cluster u. Let v be any remaining cluster in the forest that is not u.

    The following linkage methods are used to compute the distance d(s, t) between two clusters s and t. See the Linkage Methods section below for full descriptions.

    • method='single' assigns

          d(u, v) = min(dist(u[i], v[j]))

      for all points i in cluster u and j in cluster v. This is also known as the Nearest Point Algorithm.

    • method='complete' assigns

          d(u, v) = max(dist(u[i], v[j]))

      for all points i in cluster u and j in cluster v. This is also known by the Farthest Point Algorithm or Voor Hees Algorithm.

    • method='average' assigns

          d(u, v) = Σ_ij dist(u[i], v[j]) / (|u| · |v|)

      for all points i and j, where |u| and |v| are the cardinalities of clusters u and v, respectively. This is also called the UPGMA algorithm.

    • method='weighted' assigns

          d(u, v) = (dist(s, v) + dist(t, v)) / 2

      where cluster u was formed with clusters s and t and v is a remaining cluster in the forest. (also called WPGMA)

    • method='centroid' assigns

          dist(s, t) = ||c_s − c_t||_2

      where c_s and c_t are the centroids of clusters s and t, respectively. When two clusters s and t are combined into a new cluster u, the new centroid is computed over all the original objects in clusters s and t. The distance then becomes the Euclidean distance between the centroid of u and the centroid of a remaining cluster v in the forest. This is also known as the UPGMC algorithm.

    • method='median' assigns d(s, t) like the centroid method. When two clusters s and t are combined into a new cluster u, the average of centroids s and t gives the new centroid u. This is also known as the WPGMC algorithm.

    • method='ward' uses the Ward variance minimization algorithm. The new entry d(u, v) is computed as follows:

          d(u, v) = sqrt( ((|v| + |s|) / T) · d(v, s)²
                        + ((|v| + |t|) / T) · d(v, t)²
                        − (|v| / T) · d(s, t)² )

      where u is the newly joined cluster consisting of clusters s and t, v is an unused cluster in the forest, T = |v| + |s| + |t|, and | ∗ | is the cardinality of its argument. This is also known as the incremental algorithm.

    Warning: When the minimum distance pair in the forest is chosen, there may be two or more pairs with the same minimum distance. This implementation may choose a different minimum than the MATLAB version.

    Parameters
        y : ndarray
            A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdist returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as an m by n array.
        method : str, optional
            The linkage algorithm to use. See the Linkage Methods section above for full descriptions.
        metric : str, optional
            The distance metric to use. See the distance.pdist function for a list of valid distance metrics.
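The linkage matrix layout described above can be sketched with made-up data (three points, so two merges):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0]])
y = pdist(X)                      # condensed form: n*(n-1)/2 = 3 distances
Z = linkage(y, method='complete')

# Z has n-1 = 2 rows and 4 columns.  Row i records the two merged
# cluster indices Z[i,0], Z[i,1], their distance Z[i,2], and the
# size Z[i,3] of the newly formed cluster n+i.
```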

    Returns
        Z : ndarray
            The hierarchical clustering encoded as a linkage matrix.

scipy.cluster.hierarchy.single(y)
    Performs single/min/nearest linkage on the condensed distance matrix y.

    Parameters
        y : ndarray
            The upper triangular of the distance matrix. The result of pdist is returned in this form.

    Returns
        Z : ndarray
            The linkage matrix. See linkage for more information on the return structure and algorithm.

    See Also
        linkage for advanced creation of hierarchical clusterings.

scipy.cluster.hierarchy.complete(y)
    Performs complete/max/farthest point linkage on the condensed distance matrix y.

    Parameters
        y : ndarray
            The upper triangular of the distance matrix. The result of pdist is returned in this form.

    Returns
        Z : ndarray
            A linkage matrix containing the hierarchical clustering. See the linkage function documentation for more information on its structure.

scipy.cluster.hierarchy.average(y)
    Performs average/UPGMA linkage on the condensed distance matrix y.

    Parameters
        y : ndarray
            The upper triangular of the distance matrix. The result of pdist is returned in this form.

    Returns
        Z : ndarray
            A linkage matrix containing the hierarchical clustering. See the linkage function documentation for more information on its structure.

    See Also
        linkage for advanced creation of hierarchical clusterings.

scipy.cluster.hierarchy.weighted(y)
    Performs weighted/WPGMA linkage on the condensed distance matrix y.

    Parameters
        y : ndarray
            The upper triangular of the distance matrix. The result of pdist is returned in this form.

    Returns
        Z : ndarray
            A linkage matrix containing the hierarchical clustering. See the linkage function documentation for more information on its structure.

    See Also
        linkage for advanced creation of hierarchical clusterings.

scipy.cluster.hierarchy.centroid(y)
    Performs centroid/UPGMC linkage.

    The following are common calling conventions:

    1. Z = centroid(y)
       Performs centroid/UPGMC linkage on the condensed distance matrix y. See linkage for more information on the return structure and algorithm.

    2. Z = centroid(X)
       Performs centroid/UPGMC linkage on the observation matrix X using Euclidean distance as the distance metric. See linkage for more information on the return structure and algorithm.

    Parameters
        Q : ndarray
            A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdist returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as a m by n array.

    Returns
        Z : ndarray
            A linkage matrix containing the hierarchical clustering. See the linkage function documentation for more information on its structure.

    See Also
        linkage for advanced creation of hierarchical clusterings.

scipy.cluster.hierarchy.median(y)
    Performs median/WPGMC linkage.

    The following are common calling conventions:

    1. Z = median(y)
       Performs median/WPGMC linkage on the condensed distance matrix y. See linkage for more information on the return structure and algorithm.

    2. Z = median(X)
       Performs median/WPGMC linkage on the observation matrix X using Euclidean distance as the distance metric. See linkage for more information on the return structure and algorithm.

    Parameters
        Q : ndarray
            A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdist returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as a m by n array.

    Returns
        Z : ndarray
            The hierarchical clustering encoded as a linkage matrix. See linkage for more information on the return structure and algorithm.

    See Also
        linkage for advanced creation of hierarchical clusterings.

scipy.cluster.hierarchy.ward(y)
    Performs Ward's linkage on a condensed or redundant distance matrix.

    The following are common calling conventions:

    1. Z = ward(y)
       Performs Ward's linkage on the condensed distance matrix y. See linkage for more information on the return structure and algorithm.

    2. Z = ward(X)
       Performs Ward's linkage on the observation matrix X using Euclidean distance as the distance metric. See linkage for more information on the return structure and algorithm.

    Parameters
        Q : ndarray
            A condensed or redundant distance matrix. A condensed distance matrix is a flat array containing the upper triangular of the distance matrix. This is the form that pdist returns. Alternatively, a collection of m observation vectors in n dimensions may be passed as a m by n array.

    Returns
        Z : ndarray
            The hierarchical clustering encoded as a linkage matrix. See linkage for more information on the return structure and algorithm.

    See Also
        linkage for advanced creation of hierarchical clusterings.
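The two calling conventions are interchangeable; a quick sketch (with made-up data) confirms that ward on an observation matrix matches ward on the corresponding condensed distance matrix:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import ward

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 0.0], [5.0, 1.0]])

Z_from_obs = ward(X)          # observation matrix, Euclidean metric
Z_from_dist = ward(pdist(X))  # equivalent condensed distance matrix

# Both conventions produce the same linkage matrix.
```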

These routines compute statistics on hierarchies:

cophenet(Z[, Y])        Calculates the cophenetic distances between each observation in the hierarchical clustering defined by the linkage Z.
from_mlab_linkage(Z)    Converts a linkage matrix generated by MATLAB(TM) to a new linkage matrix compatible with this module.
inconsistent(Z[, d])    Calculates inconsistency statistics on a linkage.
maxinconsts(Z, R)       Returns the maximum inconsistency coefficient for each non-singleton cluster and its descendents.
maxdists(Z)             Returns the maximum distance between any cluster for each non-singleton cluster.
maxRstat(Z, R, i)       Returns the maximum statistic for each non-singleton cluster and its descendents.
to_mlab_linkage(Z)      Converts a linkage matrix Z generated by the linkage function of this module to a MATLAB(TM) compatible one.

scipy.cluster.hierarchy.cophenet(Z, Y=None)
    Calculates the cophenetic distances between each observation in the hierarchical clustering defined by the linkage Z.

    Suppose p and q are original observations in disjoint clusters s and t, respectively, and s and t are joined by a direct parent cluster u. The cophenetic distance between observations i and j is simply the distance between clusters s and t.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as an array (see linkage function).
        Y : ndarray (optional)
            Calculates the cophenetic correlation coefficient c of a hierarchical clustering defined by the linkage matrix Z of a set of n observations in m dimensions. Y is the condensed distance matrix from which Z was generated.

    Returns
        res : tuple
            A tuple (c, {d}):
            • c [ndarray] The cophenetic correlation distance (if Y is passed).
            • d [ndarray] The cophenetic distance matrix in condensed form. The ij-th entry is the cophenetic distance between original observations i and j.

scipy.cluster.hierarchy.from_mlab_linkage(Z)
    Converts a linkage matrix generated by MATLAB(TM) to a new linkage matrix compatible with this module. The conversion does two things:

    • the indices are converted from 1..N to 0..(N-1) form, and
    • a fourth column Z[:, 3] is added, where Z[i, 3] represents the number of original observations (leaves) in the non-singleton cluster i.

    This function is useful when loading in linkages from legacy data files generated by MATLAB.

    Parameters
        Z : ndarray
            A linkage matrix generated by MATLAB(TM).

    Returns
        ZS : ndarray
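A brief sketch of cophenet on illustrative data: passing the original condensed distances Y alongside Z yields both the cophenetic correlation coefficient and the condensed cophenetic distance matrix.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, cophenet

X = np.array([[0.0, 0.0], [0.0, 1.0], [8.0, 0.0], [8.0, 1.0]])
y = pdist(X)
Z = linkage(y, method='average')

c, d = cophenet(Z, y)
# c is the cophenetic correlation coefficient; d has the same condensed
# shape as y, each entry being the merge height that joins the two
# observations.  Strongly clustered data gives c close to 1.
```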

            A linkage matrix compatible with this library.

scipy.cluster.hierarchy.inconsistent(Z, d=2)
    Calculates inconsistency statistics on a linkage.

    Note: This function behaves similarly to the MATLAB(TM) inconsistent function.

    Parameters
        d : int
            The number of links up to d levels below each non-singleton cluster.
        Z : ndarray
            The (n − 1) by 4 matrix encoding the linkage (hierarchical clustering). See linkage documentation for more information on its form.

    Returns
        R : ndarray
            A (n − 1) by 4 matrix where the i'th row contains the link statistics for the non-singleton cluster i. The link statistics are computed over the link heights for links d levels below the cluster i. R[i, 0] and R[i, 1] are the mean and standard deviation of the link heights, respectively; R[i, 2] is the number of links included in the calculation; and R[i, 3] is the inconsistency coefficient,

                R[i, 3] = (Z[i, 2] − R[i, 0]) / R[i, 1]

scipy.cluster.hierarchy.maxdists(Z)
    Returns the maximum distance between any cluster for each non-singleton cluster.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as a matrix. See linkage for more information.

    Returns
        MD : ndarray
            A (n-1) sized numpy array of doubles. MD[i] represents the maximum distance between any cluster (including singletons) below and including the node with index i. More specifically, MD[i] = Z[Q(i)-n, 2].max(), where Q(i) is the set of all node indices below and including node i.

scipy.cluster.hierarchy.maxinconsts(Z, R)
    Returns the maximum inconsistency coefficient for each non-singleton cluster and its descendents.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as a matrix. See linkage for more information.
        R : ndarray
            The inconsistency matrix. See inconsistent for more information.

    Returns
        MI : ndarray
            A monotonic (n-1)-sized numpy array of doubles.
hierarchy. show_contracted=False.. Release 0. otherwise crossings appear in the dendrogram. Parameters Z : ndarray The hierarchical clustering encoded as a matrix. leaf_font_size=None. show_leaf_counts=True. labels=None. . orientation=’top’. no_plot=False. no_labels=False.N indexing. link_color_func=None) Plots the hierarchical clustering as a dendrogram. color_threshold=None. leaf_label_func=None.hierarchy.SciPy Reference Guide. The height of the top of the U-link is the distance between its children clusters.cluster.to_mlab_linkage(Z) Converts a linkage matrix Z generated by the linkage function of this module to a MATLAB(TM) compatible one.cluster. MR[j] is the maximum over R[Q(j)-n. R. 174 Chapter 4. i : int The column of R to use as the statistic. Returns ZM : ndarray A linkage matrix compatible with MATLAB(TM)’s hierarchical clustering functions. truncate_mode=None. scipy.2] be monotonic. Reference .0rc1 scipy. Routines for visualizing ﬂat clusters. Parameters Z : ndarray A linkage matrix generated by this library. count_sort=False. It is expected that the distances in Z[:.10. color_list=None. i] where Q(j) the set of all node ids corresponding to nodes below and including j. get_leaves=True.cluster. i) Returns the maximum statistic for each non-singleton cluster and its descendents. dendrogram(Z[. p=30. Returns MR : ndarray Calculates the maximum statistic for the i’th column of the inconsistency matrix R for each non-singleton cluster node. truncate_mode.hierarchy.maxRstat(Z. p. distance_sort=False.]) Plots the hierarchical clustering as a dendrogram.dendrogram(Z. Parameters Z : ndarray The linkage matrix encoding the hierarchical clustering to render as a dendrogram. no_leaves=False.. It is also the cophenetic distance between original observations in the two children clusters. The return linkage matrix has the last column removed and the cluster indices are converted to 1. scipy. See the linkage function for more information on the format of Z.. 
scipy.cluster.hierarchy.maxRstat(Z, R, i)
    Returns the maximum statistic for each non-singleton cluster and its descendents.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as a matrix. See linkage for more information.
        R : ndarray
            The inconsistency matrix.
        i : int
            The column of R to use as the statistic.

    Returns
        MR : ndarray
            The maximum statistic for the i'th column of the inconsistency matrix R for each non-singleton cluster node. MR[j] is the maximum over R[Q(j)-n, i], where Q(j) is the set of all node ids corresponding to nodes below and including j.

scipy.cluster.hierarchy.to_mlab_linkage(Z)
    Converts a linkage matrix Z generated by the linkage function of this module to a MATLAB(TM) compatible one. The return linkage matrix has the last column removed and the cluster indices converted to 1..N indexing.

    Parameters
        Z : ndarray
            A linkage matrix generated by this library.

    Returns
        ZM : ndarray
            A linkage matrix compatible with MATLAB(TM)'s hierarchical clustering functions.

Routines for visualizing flat clusters:

dendrogram(Z[, p, truncate_mode, ...])   Plots the hierarchical clustering as a dendrogram.

scipy.cluster.hierarchy.dendrogram(Z, p=30, truncate_mode=None, color_threshold=None, get_leaves=True, orientation='top', labels=None, count_sort=False, distance_sort=False, show_leaf_counts=True, no_plot=False, no_labels=False, color_list=None, leaf_font_size=None, leaf_rotation=None, leaf_label_func=None, no_leaves=False, show_contracted=False, link_color_func=None)
    Plots the hierarchical clustering as a dendrogram.

    The dendrogram illustrates how each cluster is composed by drawing a U-shaped link between a non-singleton cluster and its children. The height of the top of the U-link is the distance between its children clusters. It is also the cophenetic distance between original observations in the two children clusters. It is expected that the distances in Z[:, 2] be monotonic, otherwise crossings appear in the dendrogram.

    Parameters
        Z : ndarray
            The linkage matrix encoding the hierarchical clustering to render as a dendrogram. See the linkage function for more information on the format of Z.

        p : int, optional
            The p parameter for truncate_mode.
        truncate_mode : str, optional
            The dendrogram can be hard to read when the original observation matrix from which the linkage is derived is large. Truncation is used to condense the dendrogram. There are several modes:
            • None/'none': no truncation is performed (Default).
            • 'lastp': the last p non-singleton clusters formed in the linkage are the only non-leaf nodes in the linkage; they correspond to rows Z[n-p-2:end] in Z. All other non-singleton clusters are contracted into leaf nodes.
            • 'mlab': This corresponds to MATLAB(TM) behavior. (not implemented yet)
            • 'level'/'mtica': no more than p levels of the dendrogram tree are displayed. This corresponds to Mathematica(TM) behavior.
        color_threshold : double, optional
            For brevity, let t be the color_threshold. Colors all the descendent links below a cluster node k the same color if k is the first node below the cut threshold t. All links connecting nodes with distances greater than or equal to the threshold are colored blue. If t is less than or equal to zero, all nodes are colored blue. If color_threshold is None or 'default', corresponding with MATLAB(TM) behavior, the threshold is set to 0.7*max(Z[:, 2]).
        get_leaves : bool, optional
            Includes a list R['leaves']=H in the result dictionary. For each i, H[i] == j, cluster node j appears in the i-th position in the left-to-right traversal of the leaves, where j < 2n − 1 and i < n.
        orientation : str, optional
            The direction to plot the dendrogram, which can be any of the following strings:
            • 'top' plots the root at the top, and plots descendent links going downwards. (default)
            • 'bottom' plots the root at the bottom, and plots descendent links going upwards.
            • 'left' plots the root at the left, and plots descendent links going right.
            • 'right' plots the root at the right, and plots descendent links going left.
        labels : ndarray, optional
            By default labels is None, so the index of the original observation is used to label the leaf nodes. Otherwise, this is an n-sized list (or tuple). The labels[i] value is the text to put under the i-th leaf node only if it corresponds to an original observation and not a non-singleton cluster.
        count_sort : str or bool, optional
            For each node n, the order (visually, from left-to-right) in which n's two descendent links are plotted is determined by this parameter, which can be any of the following values:
            • False: nothing is done.
            • 'ascending'/True: the child with the minimum number of original objects in its cluster is plotted first.
            • 'descendent': the child with the maximum number of original objects in its cluster is plotted first.
            Note distance_sort and count_sort cannot both be True.

        distance_sort : str or bool, optional
            For each node n, the order (visually, from left-to-right) in which n's two descendent links are plotted is determined by this parameter, which can be any of the following values:
            • False: nothing is done.
            • 'ascending'/True: the child with the minimum distance between its direct descendents is plotted first.
            • 'descending': the child with the maximum distance between its direct descendents is plotted first.
            Note distance_sort and count_sort cannot both be True.
        show_leaf_counts : bool, optional
            When True, leaf nodes representing k > 1 original observations are labeled with the number of observations they contain in parentheses.
        no_plot : bool, optional
            When True, the final rendering is not performed. This is useful if only the data structures computed for the rendering are needed or if matplotlib is not available.
        no_labels : bool, optional
            When True, no labels appear next to the leaf nodes in the rendering of the dendrogram.
        leaf_rotation : double, optional
            Specifies the angle (in degrees) to rotate the leaf labels. When unspecified, the rotation is based on the number of nodes in the dendrogram. (Default=0)
        leaf_font_size : int, optional
            Specifies the font size (in points) of the leaf labels. When unspecified, the size is based on the number of nodes in the dendrogram.
        leaf_label_func : lambda or function, optional
            When leaf_label_func is a callable function, it is called for each leaf with cluster index k < 2n − 1. The function is expected to return a string with the label for the leaf. Indices k < n correspond to original observations, while indices k ≥ n correspond to non-singleton clusters.

            For example, to label singletons with their node id and non-singletons with their id, count, and inconsistency coefficient, simply do:

                # First define the leaf label function.
                def llf(id):
                    if id < n:
                        return str(id)
                    else:
                        return '[%d %d %1.2f]' % (id, count, R[n-id, 3])

                # The text for the leaf nodes is going to be big so force

                # a rotation of 90 degrees.
                dendrogram(Z, leaf_label_func=llf, leaf_rotation=90)

        show_contracted : bool, optional
            When True, the heights of non-singleton nodes contracted into a leaf node are plotted as crosses along the link connecting that leaf node. This is really only useful when truncation is used (see truncate_mode parameter).
        link_color_func : lambda/function, optional
            When a callable function, link_color_func is called with each non-singleton id corresponding to each U-shaped link it will paint. The function is expected to return the color to paint the link, encoded as a matplotlib color string code. For example:

                >>> dendrogram(Z, link_color_func=lambda k: colors[k])

            colors the direct links below each untruncated non-singleton node k using colors[k].

    Returns
        R : dict
            A dictionary of data structures computed to render the dendrogram. It has the following keys:
            • 'icoords': a list of lists [I1, I2, ..., Ip] where Ik is a list of 4 independent variable coordinates corresponding to the line that represents the k'th link painted.
            • 'dcoords': a list of lists [I1, I2, ..., Ip] where Ik is a list of 4 dependent variable coordinates corresponding to the line that represents the k'th link painted.
            • 'ivl': a list of labels corresponding to the leaf nodes.
            • 'leaves': for each i, H[i] == j, cluster node j appears in the i-th position in the left-to-right traversal of the leaves, where j < 2n − 1 and i < n. If j is less than n, the i-th leaf node corresponds to an original observation. Otherwise, it corresponds to a non-singleton cluster.

These are data structures and routines for representing hierarchies as tree objects:

ClusterNode(id[, left, right, dist, count])   A tree node class for representing a cluster.
leaves_list(Z)                                Returns a list of leaf node ids (corresponding to observation vector index) as they appear in the tree from left to right.
to_tree(Z[, rd])                              Converts a hierarchical clustering encoded in the matrix Z (by linkage) into an easy-to-use tree object.

scipy.cluster.hierarchy.ClusterNode(id, left=None, right=None, dist=0, count=1)
    A tree node class for representing a cluster. Leaf nodes correspond to original observations, while non-leaf nodes correspond to non-singleton clusters. The to_tree function converts a matrix returned by the linkage function into an easy-to-use tree representation.
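Because no_plot=True skips rendering, the returned dictionary can be inspected without matplotlib; a small sketch on toy data:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram

X = np.array([[0.0, 0.0], [0.0, 1.0], [7.0, 0.0], [7.0, 1.0]])
Z = linkage(X, method='single')

ddata = dendrogram(Z, no_plot=True)
# 'leaves' gives the left-to-right leaf ordering, 'ivl' the leaf labels,
# and the coordinate lists describe each U-shaped link, one per merge.
```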

    See Also
        to_tree for converting a linkage matrix Z into a tree object.

    Methods
        get_count, get_id, get_left, get_right, is_leaf, pre_order (pre_order returns L : list, the pre-order traversal of leaf ids).

scipy.cluster.hierarchy.leaves_list(Z)
    Returns a list of leaf node ids (corresponding to observation vector index) as they appear in the tree from left to right. Z is a linkage matrix.

    Parameters
        Z : ndarray
            The hierarchical clustering encoded as a matrix. See linkage for more information.

    Returns
        L : ndarray
            The list of leaf node ids.

scipy.cluster.hierarchy.to_tree(Z, rd=False)
    Converts a hierarchical clustering encoded in the matrix Z (by linkage) into an easy-to-use tree object. The reference r to the root ClusterNode object is returned.

    Each ClusterNode object has a left, right, dist, id, and count attribute. The left and right attributes point to ClusterNode objects that were combined to generate the cluster. If both are None, then the ClusterNode object is a leaf node, its count must be 1, and its distance is meaningless but set to 0.

    Note: This function is provided for the convenience of the library user. ClusterNodes are not used as input to any of the functions in this library.

    Parameters
        Z : ndarray
            The linkage matrix in proper form (see the linkage function documentation).
        rd : bool, optional
            When False, a reference to the root ClusterNode object is returned. Otherwise, a tuple (r, d) is returned. r is a reference to the root node while d is a dictionary mapping cluster ids to ClusterNode references. If a cluster id is less than n, then it corresponds to a singleton cluster (leaf node). See linkage for more information on the assignment of cluster ids to clusters.

These are predicates for checking the validity of linkage and inconsistency matrices, as well as for checking isomorphism of two flat cluster assignments.
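The tree conversion above can be sketched quickly (illustrative data): the root node covers all n observations, and pre_order() walks the leaves left to right.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

X = np.array([[0.0, 0.0], [0.0, 1.0], [9.0, 0.0], [9.0, 1.0]])
Z = linkage(X, method='single')

root = to_tree(Z)            # root ClusterNode of the hierarchy
n = root.get_count()         # number of original observations under root
order = root.pre_order()     # leaf ids in left-to-right order
# With n=4, internal nodes are 4, 5, 6 and the root's id is 2n-2 = 6.
```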

is_valid_im(R[, warning, throw, name])        Returns True if the inconsistency matrix passed is valid.
is_valid_linkage(Z[, warning, throw, name])   Checks the validity of a linkage matrix.
is_isomorphic(T1, T2)                         Determines if two different cluster assignments T1 and T2 are equivalent.
is_monotonic(Z)                               Returns True if the linkage passed is monotonic.
correspond(Z, Y)                              Checks if a linkage matrix Z and condensed distance matrix Y could possibly correspond to one another.
num_obs_linkage(Z)                            Returns the number of original observations of the linkage matrix passed.

scipy.cluster.hierarchy.is_valid_im(R, warning=False, throw=False, name=None)
    Returns True if the inconsistency matrix passed is valid. It must be a n by 4 numpy array of doubles. The standard deviations R[:, 1] must be nonnegative. The link counts R[:, 2] must be positive and no greater than n − 1.

    Parameters
        R : ndarray
            The inconsistency matrix to check for validity.
        warning : bool, optional
            When True, issues a Python warning if the inconsistency matrix passed is invalid.
        throw : bool, optional
            When True, throws a Python exception if the inconsistency matrix passed is invalid.
        name : str, optional
            This string refers to the variable name of the invalid inconsistency matrix.

    Returns
        b : bool
            True if the inconsistency matrix is valid.

scipy.cluster.hierarchy.is_valid_linkage(Z, warning=False, throw=False, name=None)
    Checks the validity of a linkage matrix. A linkage matrix is valid if it is a two dimensional nd-array (type double) with n rows and 4 columns. The first two columns must contain indices between 0 and 2n − 1. For a given row i, 0 ≤ Z[i, 0] ≤ i + n − 1 and 0 ≤ Z[i, 1] ≤ i + n − 1 (i.e. a cluster cannot join another cluster unless the cluster being joined has been generated).

    Parameters
        Z : array_like
            Linkage matrix.
        warning : bool, optional
            When True, issues a Python warning if the linkage matrix passed is invalid.
        throw : bool, optional
            When True, throws a Python exception if the linkage matrix passed is invalid.
        name : str, optional
            This string refers to the variable name of the invalid linkage matrix.

    Returns
        b : bool
            True iff the linkage matrix is valid.

scipy.cluster.hierarchy.is_isomorphic(T1, T2)
    Determines if two different cluster assignments T1 and T2 are equivalent.

    Parameters
        T1 : ndarray
            An assignment of singleton cluster ids to flat cluster ids.
        T2 : ndarray
            An assignment of singleton cluster ids to flat cluster ids.

    Returns
        b : bool
            Whether the flat cluster assignments T1 and T2 are equivalent.

scipy.cluster.hierarchy.is_monotonic(Z)
    Returns True if the linkage passed is monotonic. The linkage is monotonic if for every cluster s and t joined, the distance between them is no less than the distance between any previously joined clusters.

    Parameters
        Z : ndarray
            The linkage matrix to check for monotonicity.

    Returns
        b : bool
            A boolean indicating whether the linkage is monotonic.

scipy.cluster.hierarchy.correspond(Z, Y)
    Checks if a linkage matrix Z and condensed distance matrix Y could possibly correspond to one another. They must have the same number of original observations for the check to succeed. This function is useful as a sanity check in algorithms that make extensive use of linkage and distance matrices that must correspond to the same set of original observations.

    Parameters
        Z : ndarray
            The linkage matrix to check for correspondence.
        Y : ndarray
            The condensed distance matrix to check for correspondence.

    Returns
        b : bool
            A boolean indicating whether the linkage matrix and distance matrix could possibly correspond to one another.

scipy.cluster.hierarchy.num_obs_linkage(Z)
    Returns the number of original observations of the linkage matrix passed.

    Parameters
        Z : ndarray
            The linkage matrix on which to perform the operation.
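The predicates above can be exercised together; a minimal sketch on made-up data:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import (linkage, is_valid_linkage, is_monotonic,
                                     correspond, num_obs_linkage, is_isomorphic)

X = np.array([[0.0, 0.0], [0.0, 1.0], [6.0, 0.0], [6.0, 1.0]])
y = pdist(X)
Z = linkage(y, method='single')

ok = is_valid_linkage(Z)    # well-formed (n-1) by 4 double matrix
mono = is_monotonic(Z)      # single linkage is always monotonic
match = correspond(Z, y)    # Z and y imply the same number of observations
n = num_obs_linkage(Z)

# Flat cluster labelings that differ only by a relabeling are isomorphic.
iso = is_isomorphic(np.array([1, 1, 2, 2]), np.array([2, 2, 1, 1]))
```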

    Returns
        n : int
            The number of original observations in the linkage.

Utility routines for plotting:

set_link_color_palette(palette)    Changes the list of matplotlib color codes to use when coloring links with the dendrogram color_threshold feature.

scipy.cluster.hierarchy.set_link_color_palette(palette)
    Changes the list of matplotlib color codes to use when coloring links with the dendrogram color_threshold feature. The order of the color codes is the order in which the colors are cycled through when color thresholding in the dendrogram.

    Parameters
        palette : list
            A list of matplotlib color codes.

4.3.1 References

• MATLAB and MathWorks are registered trademarks of The MathWorks, Inc.
• Mathematica is a registered trademark of The Wolfram Research, Inc.

4.3.2 Copyright Notice

Copyright (C) Damian Eads, 2007-2008. New BSD License.

4.4 Constants (scipy.constants)

Physical and mathematical constants and units.

4.4.1 Mathematical constants

pi        Pi
golden    Golden ratio

4.4.2 Physical constants

c            speed of light in vacuum
mu_0         the magnetic constant µ0
epsilon_0    the electric constant (vacuum permittivity), ε0
h            the Planck constant h
hbar         ℏ = h/(2π)
G            Newtonian constant of gravitation
g            standard acceleration of gravity
e            elementary charge
R            molar gas constant
alpha        fine-structure constant
N_A          Avogadro constant
k            Boltzmann constant
sigma        Stefan-Boltzmann constant σ
Wien         Wien displacement law constant
Rydberg      Rydberg constant
m_e          electron mass
m_p          proton mass
m_n          neutron mass

Constants database

In addition to the above variables, scipy.constants also contains the 2010 CODATA recommended values [CODATA2010] database containing more physical constants.

value(key)           Value in physical_constants indexed by key
unit(key)            Unit in physical_constants indexed by key
precision(key)       Relative precision in physical_constants indexed by key
find([sub, disp])    Return list of codata.physical_constant keys containing a given string.
ConstantWarning      Accessing a constant no longer in current CODATA data set

scipy.constants.value(key)
    Value in physical_constants indexed by key

    Parameters
        key : Python string or unicode
            Key in dictionary physical_constants

    Returns
        value : float
            Value in physical_constants corresponding to key

    See Also
        codata
            Contains the description of physical_constants, which, as a dictionary literal object, does not itself possess a docstring.

    Examples
    >>> from scipy.constants import codata
    >>> codata.value('elementary charge')
    1.602176487e-019

scipy.constants.unit(key)
    Unit in physical_constants indexed by key

    Parameters
        key : Python string or unicode
            Key in dictionary physical_constants

    Returns
        unit : Python string
            Unit in physical_constants corresponding to key

    See Also
        codata
            Contains the description of physical_constants, which, as a dictionary literal object, does not itself possess a docstring.

    Examples
    >>> from scipy.constants import codata
    >>> codata.unit(u'proton mass')
    'kg'

scipy.constants.precision(key)
    Relative precision in physical_constants indexed by key

    Parameters
        key : Python string or unicode
            Key in dictionary physical_constants

    Returns
        prec : float
            Relative precision in physical_constants corresponding to key

    See Also
        codata
            Contains the description of physical_constants, which, as a dictionary literal object, does not itself possess a docstring.

    Examples
    >>> from scipy.constants import codata
    >>> codata.precision(u'proton mass')
    4.96226989798e-08

scipy.constants.find(sub=None, disp=False)
    Return list of codata.physical_constant keys containing a given string.

    Parameters
        sub : str, unicode
            Sub-string to search keys for. By default, return all keys.
        disp : bool
            If True, print the keys that are found, and return None. Otherwise, return the list of keys without printing anything.
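The four lookup routines above fit together naturally: exact lookup by full CODATA key, plus substring search when the exact key is unknown. A minimal sketch using the top-level scipy.constants names documented here:

```python
from scipy.constants import value, unit, precision, find

# Exact lookup by full CODATA key.
print(value('elementary charge'))   # charge of the electron, in coulombs
print(unit('proton mass'))          # 'kg'
print(precision('proton mass'))     # relative standard uncertainty

# Substring search over all database keys.
for key in find('Boltzmann'):
    print(key)
```

`find` with `disp=False` (the default) returns the matching keys as a list, which can then be fed back into `value`/`unit`/`precision`.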

    Returns
        keys : list or None
            If disp is False, the list of keys is returned. Otherwise, None is returned.

    See Also
        codata
            Contains the description of physical_constants, which, as a dictionary literal object, does not itself possess a docstring.

exception scipy.constants.ConstantWarning
    Accessing a constant no longer in current CODATA data set

scipy.constants.physical_constants
    Dictionary of physical constants, of the format physical_constants[name] = (value, unit, uncertainty).

Available constants:

alpha particle mass                              6.64465675e-27 kg
alpha particle mass energy equivalent            5.97191967e-10 J
alpha particle mass energy equivalent in MeV     3727.37924 MeV
alpha particle mass in u                         4.001506179125 u
alpha particle molar mass                        0.004001506179125 kg mol^-1
alpha particle-electron mass ratio               7294.2995361
alpha particle-proton mass ratio                 3.97259968933
Angstrom star                                    1.00001495e-10 m
atomic mass constant                             1.660538921e-27 kg
atomic mass constant energy equivalent           1.492417954e-10 J
atomic mass constant energy equivalent in MeV    931.494061 MeV
atomic mass unit-electron volt relationship      931494061.0 eV
atomic mass unit-hartree relationship            34231776.845 E_h
atomic mass unit-hertz relationship              2.2523427168e+23 Hz
atomic mass unit-inverse meter relationship      7.5130066042e+14 m^-1
atomic mass unit-joule relationship              1.492417954e-10 J
atomic mass unit-kelvin relationship             1.08095408e+13 K
atomic mass unit-kilogram relationship           1.660538921e-27 kg
atomic unit of 1st hyperpolarizability           3.206361449e-53 C^3 m^3 J^-2
atomic unit of 2nd hyperpolarizability           6.23538054e-65 C^4 m^4 J^-3
atomic unit of action                            1.054571726e-34 J s
atomic unit of charge                            1.602176565e-19 C
atomic unit of charge density                    1.081202338e+12 C m^-3
atomic unit of current                           0.00662361795 A
atomic unit of electric dipole mom.              8.47835326e-30 C m
atomic unit of electric field                    514220652000.0 V m^-1
atomic unit of electric field gradient           9.717362e+21 V m^-2
atomic unit of electric polarizability           1.6487772754e-41 C^2 m^2 J^-1
atomic unit of electric potential                27.21138505 V
atomic unit of electric quadrupole mom.          4.486551331e-40 C m^2
atomic unit of energy                            4.35974434e-18 J
atomic unit of force                             8.23872278e-08 N
atomic unit of length                            5.2917721092e-11 m
atomic unit of mag. dipole mom.                  1.854801936e-23 J T^-1
atomic unit of mag. flux density                 235051.7464 T
Continued on next page

constants) 185 . deuteron mag.01355321271 u 0. mom.99900750097 8.00231930436 176085970800.4829652 -0. ratio deuteron-proton mag.8179403267e-15 m 2.8574382308 3.10938291e-31 kg 8. Release 0.8574382308 4.2917721092e-11 m 1.730313462 ohm 2. ratio deuteron-electron mass ratio deuteron-neutron mag.503476 m^-1 K^-1 376.3806488e-23 J K^-1 8.26379 m s^-1 6.1424e-15 m -0.0 Hz T^-1 46.33073489e-27 J T^-1 0.um atomic unit of permittivity atomic unit of time atomic unit of velocity Avogadro constant Bohr magneton Bohr magneton in eV/T Bohr magneton in Hz/T Bohr magneton in inverse meters per tesla Bohr magneton in K/T Bohr radius Boltzmann constant Boltzmann constant in eV/K Boltzmann constant in Hz/K Boltzmann constant in inverse meters per kelvin characteristic impedance of vacuum classical electron radius Compton wavelength Compton wavelength over 2 pi conductance quantum conventional value of Josephson constant conventional value of von Klitzing constant Cu x unit deuteron g factor deuteron mag. ratio electron gyromag.4188843265e-17 s 2187691.00506297e-10 J 1875.835979e+14 Hz V^-1 25812.95266 MHz T^-1 -9. mom.0004669754556 0.807 ohm 1.27400968e-24 J T^-1 5.0 s^-1 T^-1 28024.10.2847643e-24 J T^-1 0.1 – continued from previous page atomic unit of magnetizability atomic unit of mass atomic unit of mom.18710506e-14 J 0.0004664345537 3670. anomaly electron mag.7480917346e-05 S 4. ratio deuteron-proton mass ratio electric constant electron charge to mass quotient electron g factor electron gyromag. mom. to Bohr magneton ratio electron mag.2819709 9.6173324e-05 eV K^-1 20836618000.99285174e-24 kg m s^-1 1. to nuclear magneton ratio deuteron mass deuteron mass energy equivalent deuteron mass energy equivalent in MeV deuteron mass in u deuteron molar mass deuteron rms charge radius deuteron-electron mag.34358348e-27 kg 3. electron mag. ratio over 2 pi electron mag.00207697e-13 m 0. mom.86159268e-13 m 7.0rc1 Table 4. mom. mom.891036607e-29 J T^-2 9.307012207 1. 
mom.SciPy Reference Guide.67171388 K T^-1 5. to Bohr magneton ratio deuteron mag.4263102389e-12 m 3.6864498 m^-1 T^-1 0. mom.612859 MeV 2.11265005605e-10 F m^-1 2.44820652 0.4.510998928 MeV Continued on next page 4.00115965218 -1838.00115965218076 -1. mom.7883818066e-05 eV T^-1 13996245550.00201355321271 kg mol^-1 2.02214129e+23 mol^-1 9. mom.0 Hz K^-1 69. Constants (scipy.85418781762e-12 F m^-1 -175882008800.10938291e-31 kg 1.0 C kg^-1 -2. to nuclear magneton ratio electron mass electron mass energy equivalent electron mass energy equivalent in MeV 7.

ratio electron-proton mass ratio electron-tau mass ratio electron-triton mass ratio elementary charge elementary charge over h Faraday constant Faraday constant for conventional electric current Fermi coupling constant fine-structure constant first radiation constant first radiation constant for spectral radiance Hartree energy Hartree energy in eV hartree-atomic mass unit relationship hartree-electron volt relationship hartree-hertz relationship hartree-inverse meter relationship hartree-joule relationship hartree-kelvin relationship hartree-kilogram relationship helion g factor helion mag.2275971 1.03674932379 E_h 2.1371 m^-1 4.35974434e-18 J 27. mom.3365 C mol^-1 96485. mom.417989348e+14 A J^-1 96485.391482 MeV 3.21138505 eV 2.058257 -658.9931526707 Continued on next page 186 Chapter 4. mom.001158740958 -2.85086979e-35 kg -4.9212623246e-08 u 27.8852754 2.4857990946e-07 kg mol^-1 0.9205 0.00483633166 960.0030149322468 kg mol^-1 5495.00018192000653 1.00027244371095 0.49953902e-10 J 2808.074617486e-26 J T^-1 -0. mom.00641234e-27 kg 4.35974434e-18 J 315775.00054386734461 -658.2106848 0. ratio electron-deuteron mass ratio electron-helion mass ratio electron-muon mag. ratio electron-neutron mass ratio electron-proton mag. to Bohr magneton ratio helion mag.000137093355578 864.SciPy Reference Guide. mom. Reference . ratio electron to shielded proton mag. to nuclear magneton ratio helion mass helion mass energy equivalent helion mass energy equivalent in MeV helion mass in u helion molar mass helion-electron mass ratio helion-proton mass ratio 0. Release 0.07354415e-09 u 0.0rc1 Table 4.191042869e-16 W m^2 sr^-1 4.10.3321 C_90 mol^-1 1. helion mag.57968392073e+15 Hz 21947463.602176565e-19 C 2. mom.00018195430761 206.00054857990946 u 5.429 m^-1 1.417989348e+14 Hz 806554.602176565e-19 J 11604.923498 0.255250613 -1. 
ratio electron volt electron volt-atomic mass unit relationship electron volt-hartree relationship electron volt-hertz relationship electron volt-inverse meter relationship electron volt-joule relationship electron volt-kelvin relationship electron volt-kilogram relationship electron-deuteron mag. ratio electron-muon mass ratio electron-neutron mag.000287592 0.7669896 0.74177153e-16 W m^2 1.00054461702178 0.0149322468 u 0. mom.0072973525698 3.1 – continued from previous page electron mass in u electron molar mass electron to alpha particle mass ratio electron to shielded helion mag.21138505 eV 6.782661845e-36 kg -2143.166364e-05 GeV^-2 0.04 K 4. mom. mom.602176565e-19 J 1.127625306 5.519 K 1.

0 Hz 1.15 K.15 K.6867805e+25 m^-3 1.29371248e+17 E_h 1.061485968e+34 E_h 1.325 kPa) mag.986445684e-25 J 0.503476 m^-1 1.25663706144e-06 N A^-2 2.012 kg mol^-1 3.02214129e+26 u 5.15 K.5096582e+39 K 5.11265005605e-17 kg 9. Release 0.0rc1 Table 4.356392608e+50 Hz 4.03411701e+24 m^-1 7.8359787e+14 Hz V^-1 6700535850.55633525276e-08 E_h 299792458.135667516e-15 eV 1. 101.4398216689e-24 u 4.2429716e+22 K 1.205883301e-05 m^3 mol^-1 Continued on next page 4.60958885e+35 eV 2.98755178737e+16 J 6.0 Hz 69. Constants (scipy.536179e-40 kg 6.33564095198e-09 m^-1 6.2510868e-14 u 8.1 – continued from previous page hertz-atomic mass unit relationship hertz-electron volt relationship hertz-hartree relationship hertz-inverse meter relationship hertz-joule relationship hertz-kelvin relationship hertz-kilogram relationship inverse fine-structure constant inverse meter-atomic mass unit relationship inverse meter-electron volt relationship inverse meter-hartree relationship inverse meter-hertz relationship inverse meter-joule relationship inverse meter-kelvin relationship inverse meter-kilogram relationship inverse of conductance quantum Josephson constant joule-atomic mass unit relationship joule-electron volt relationship joule-hartree relationship joule-hertz relationship joule-inverse meter relationship joule-kelvin relationship joule-kilogram relationship kelvin-atomic mass unit relationship kelvin-electron volt relationship kelvin-hartree relationship kelvin-hertz relationship kelvin-inverse meter relationship kelvin-joule relationship kelvin-kilogram relationship kilogram-atomic mass unit relationship kilogram-electron volt relationship kilogram-hartree relationship kilogram-hertz relationship kilogram-inverse meter relationship kilogram-joule relationship kilogram-kelvin relationship lattice parameter of silicon Loschmidt constant (273.210218902e-42 kg 12906.10.01438777 K 2.0 u 6.1668114e-06 E_h 20836618000.3144621 J mol^-1 K^-1 0.431020504e-10 m 2.6516462e+25 m^-3 2.constants) 
187 .035999074 1.37249668e-51 kg 137.325 kPa) molar volume of silicon 4.62606957e-34 J 4.509190311e+33 Hz 5.9903127176e-10 J s mol^-1 0.52443873e+41 m^-1 8.24150934e+18 eV 2.15 K.7992434e-11 K 7.SciPy Reference Guide.022710953 m^3 mol^-1 0.6173324e-05 eV 3.00209952e-13 m 8.3310250512e-15 u 1.519829846e-16 E_h 3. constant mag.4037217 ohm 4.119626565779 J m mol^-1 0. 100 kPa) Loschmidt constant (273.067833758e-15 Wb 1. 100 kPa) molar volume of ideal gas (273.022413968 m^3 mol^-1 1. flux quantum Mo x unit molar gas constant molar mass constant molar mass of carbon-12 molar Planck constant molar Planck constant times c molar volume of ideal gas (273.001 kg mol^-1 0.3806488e-23 J 1. 101.4.23984193e-06 eV 4.

0594649 1. mom.00137841917 0. anomaly muon mag.58211928e-16 eV s 8. mom.692833667e-11 J 105. ratio neutron-proton mass difference neutron-proton mass difference energy equivalent neutron-proton mass difference energy equivalent in MeV neutron-proton mass difference in u neutron-proton mass ratio neutron-tau mass ratio Newtonian constant of gravitation 1. Reference . ratio neutron gyromag.73092429e-22 kg m s^-1 0.0rc1 Table 4.6623647e-27 J T^-1 -0.68499694 0. mom.0 m s^-1 1. to nuclear magneton ratio muon mass muon mass energy equivalent muon mass energy equivalent in MeV muon mass in u muon molar mass muon-electron mass ratio muon-neutron mass ratio muon-proton mag.1646943 MHz T^-1 -9.52879 6.10.86159268e-13 m 9. mom.183345107 0.00104066882 1838.67384e-11 m^3 kg^-1 s^-2 Continued on next page 188 Chapter 4. Release 0.505349631e-10 J 939.68497934 2.1 – continued from previous page muon Compton wavelength muon Compton wavelength over 2 pi muon g factor muon mag.674927351e-27 kg 1.7682843 0. muon mag.883531475e-28 kg 1. ratio muon-proton mass ratio muon-tau mass ratio natural unit of action natural unit of action in eV s natural unit of energy natural unit of energy in MeV natural unit of length natural unit of mass natural unit of mom. ratio neutron-electron mag.00116592091 -0.3195909068e-15 m 2. mom.1124545177 -3. mom.um natural unit of mom.001008664916 kg mol^-1 -0.00104187563 -1.510998928 MeV 3.6583715 MeV 0.0721465e-13 1.SciPy Reference Guide. to nuclear magneton ratio neutron mass neutron mass energy equivalent neutron mass energy equivalent in MeV neutron mass in u neutron molar mass neutron to shielded proton mag. ratio neutron-electron mass ratio neutron-muon mass ratio neutron-proton mag.00138844919 1. mom.89059697 1.30557392e-30 2.510998928 MeV/c 1. mom. ratio over 2 pi neutron mag.82608545 183247179. to Bohr magneton ratio neutron mag.892484 -0.00484197044 -8. neutron mag.1134289267 u 0.1126095272 0. mom.49044807e-26 J T^-1 0. 
mom.18710506e-14 J 0.173444103e-14 m 1.28808866833e-21 s 299792458.6836605 8.867594294e-15 m -2.1001941568e-16 m -3.0001134289267 kg mol^-1 206.0 s^-1 T^-1 29.565379 MeV 1. mom.10938291e-31 kg 2.008664916 u 0.29333217 0.0023318418 -4.91304272 1.054571726e-34 J s 6. to Bohr magneton ratio muon mag.um in MeV/c natural unit of time natural unit of velocity neutron Compton wavelength neutron Compton wavelength over 2 pi neutron g factor neutron gyromag.

0007273895104 m^2 s^-1 10973731.00152103221 2.503277484e-10 J 938. mom.775e-16 m 1836. ratio proton gyromag.5774806 MHz T^-1 1. mom.8 C kg^-1 1.1524512605e-08 eV T^-1 0. to Bohr magneton ratio shielded helion mag. to nuclear magneton ratio shielded helion to proton mag. ratio 6. ratio proton-neutron mass ratio proton-tau mass ratio quantum of circulation quantum of circulation times 2 Rydberg constant Rydberg constant times c in Hz Rydberg constant times hc in eV Rydberg constant times hc in J Sackur-Tetrode constant (1 K. shielded helion mag.62606957e-34 J s 4. ratio shielded helion gyromag.05078353e-27 J T^-1 3.39106e-44 s 95788335.325 kPa) second radiation constant shielded helion gyromag. 101.220932e+19 GeV 1.02542623527 m^-1 T^-1 0.01438777 m K 203789465.5685 m^-1 3.62259357 MHz T^-1 6.410606743e-26 J T^-1 0. to nuclear magneton ratio proton mag. Constants (scipy.074553044e-26 J T^-1 -0.99862347826 0.001158671471 -2.0003636947552 m^2 s^-1 0.1648708 0.272046 MeV 1.10.5694e-05 1. mom.9 s^-1 T^-1 32.528063 0.4.15267245 8.SciPy Reference Guide.43410084 MHz T^-1 -1.135667516e-15 eV s 1. proton mag.672621777e-27 kg 1.00100727646681 kg mol^-1 8.127497718 -0.3269718 MeV fm 1. 
shielding correction proton mass proton mass energy equivalent proton mass energy equivalent in MeV proton mass in u proton molar mass proton rms charge radius proton-electron mass ratio proton-muon mass ratio proton-neutron mag.1 – continued from previous page Newtonian constant of gravitation over h-bar c nuclear magneton nuclear magneton in eV/T nuclear magneton in inverse meters per tesla nuclear magneton in K/T nuclear magneton in MHz/T Planck constant Planck constant in eV s Planck constant over 2 pi Planck constant over 2 pi in eV s Planck constant over 2 pi times c in MeV fm Planck length Planck mass Planck mass energy equivalent in GeV Planck temperature Planck time proton charge to mass quotient proton Compton wavelength proton Compton wavelength over 2 pi proton g factor proton gyromag.88024331 -1. 100 kPa) Sackur-Tetrode constant (1 K. to Bohr magneton ratio proton mag.1030891047e-16 m 5.00727646681 u 0.32140985623e-15 m 2.792847356 2. mom.0rc1 Table 4.616199e-35 m 2.5 s^-1 T^-1 42. mom. mom.60569253 eV 2.054571726e-34 J s 6.70837e-39 (GeV/c^2)^-2 5.constants) 189 .585694713 267522200. Release 0.17651e-08 kg 1.761766558 Continued on next page 4.00036582682 K T^-1 7. mom.179872171e-18 J -1.45989806 0. ratio over 2 pi proton mag.58211928e-16 eV s 197.1517078 -1. ratio over 2 pi shielded helion mag. mom.28984196036e+15 Hz 13.416833e+32 K 5.

to nuclear magneton ratio speed of light in vacuum standard acceleration of gravity standard atmosphere standard-state pressure Stefan-Boltzmann constant tau Compton wavelength tau Compton wavelength over 2 pi tau mass tau mass energy equivalent tau mass energy equivalent in MeV tau mass in u tau molar mass tau-electron mass ratio tau-muon mass ratio tau-neutron mass ratio tau-proton mass ratio Thomson cross section triton g factor triton mag.660538921e-27 kg 25812.670373e-08 W m^-2 K^-4 6.84678e-10 J 1776.504609447e-26 J T^-1 0.89111 1.9937170308 1. mom. mom. Reference .16747e-27 kg 2.0 m s^-1 9.00190749 kg mol^-1 3477.978962448 5.0073563e-27 kg 4.90749 u 0.82 MeV 1. to Bohr magneton ratio shielded proton mag.0030155007134 kg mol^-1 5496.920155714e-10 m 190 Chapter 4. shielded proton mag.10. to nuclear magneton ratio triton mass triton mass energy equivalent triton mass energy equivalent in MeV triton mass in u triton molar mass triton-electron mass ratio triton-proton mass ratio unified atomic mass unit von Klitzing constant weak mixing angle Wien frequency displacement law constant Wien wavelength displacement law constant {220} lattice spacing of silicon -0. Release 0.001520993128 2.8 s^-1 T^-1 42. mom.80665 m s^-2 101325.8074434 ohm 0. to Bohr magneton ratio triton mag.SciPy Reference Guide.410570499e-26 J T^-1 0.0 Pa 100000. ratio over 2 pi shielded proton mag.0 Pa 5. ratio shielded proton gyromag.0rc1 Table 4.97787e-16 m 1.1 – continued from previous page shielded helion to shielded proton mag.792775598 299792458.0155007134 u 0.50038741e-10 J 2808.921005 MeV 3. mom.957924896 1.8167 1.0028977721 m K 1. mom.9215267 2.652458734e-29 m^2 5. mom.0 Hz K^-1 0. ratio shielded proton gyromag.7617861313 267515326.5763866 MHz T^-1 1.89372 6.11056e-16 m 3.15 16. mom.001622393657 2. triton mag.2223 58789254000.
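The physical_constants dictionary documented above can be indexed directly by any of the table's keys; each entry unpacks into a (value, unit, uncertainty) triple. A minimal sketch:

```python
from scipy.constants import physical_constants

# Each entry is a (value, unit, uncertainty) tuple keyed by its CODATA name.
val, unit_str, uncertainty = physical_constants['electron mass']
print(val)          # mass in kg
print(unit_str)     # 'kg'
print(uncertainty)  # absolute standard uncertainty, same unit as the value
```

This is the same data that `value`, `unit`, and `precision` read from; `precision(key)` is simply the uncertainty divided by the value.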

4.4.3 Units

SI prefixes

yotta    10^24
zetta    10^21
exa      10^18
peta     10^15
tera     10^12
giga     10^9
mega     10^6
kilo     10^3
hecto    10^2
deka     10^1
deci     10^-1
centi    10^-2
milli    10^-3
micro    10^-6
nano     10^-9
pico     10^-12
femto    10^-15
atto     10^-18
zepto    10^-21

Binary prefixes

kibi    2^10
mebi    2^20
gibi    2^30
tebi    2^40
pebi    2^50
exbi    2^60
zebi    2^70
yobi    2^80

Weight

gram          10^-3 kg
metric_ton    10^3 kg
grain         one grain in kg
lb            one pound (avoirdupois) in kg
oz            one ounce in kg
stone         one stone in kg
long_ton      one long ton in kg
short_ton     one short ton in kg
troy_ounce    one Troy ounce in kg
troy_pound    one Troy pound in kg
carat         one carat in kg
m_u           atomic mass constant (in kg)

Angle

degree    degree in radians
arcmin    arc minute in radians
arcsec    arc second in radians

Time

minute         one minute in seconds
hour           one hour in seconds
day            one day in seconds
week           one week in seconds
year           one year (365 days) in seconds
Julian_year    one Julian year (365.25 days) in seconds

Length

inch             one inch in meters
foot             one foot in meters
yard             one yard in meters
mile             one mile in meters
mil              one mil in meters
pt               one point in meters
survey_foot      one survey foot in meters
survey_mile      one survey mile in meters
nautical_mile    one nautical mile in meters
fermi            one Fermi in meters
angstrom         one Angstrom in meters
micron           one micron in meters
au               one astronomical unit in meters
light_year       one light year in meters
parsec           one parsec in meters

Pressure

atm     standard atmosphere in pascals
bar     one bar in pascals
torr    one torr (mmHg) in pascals
psi     one psi in pascals

Area

hectare    one hectare in square meters
acre       one acre in square meters

Volume

liter              one liter in cubic meters
gallon             one gallon (US) in cubic meters
gallon_imp         one gallon (UK) in cubic meters
fluid_ounce        one fluid ounce (US) in cubic meters
fluid_ounce_imp    one fluid ounce (UK) in cubic meters
bbl                one barrel in cubic meters

Speed

kmh     kilometers per hour in meters per second
mph     miles per hour in meters per second
mach    one Mach (approx., at 15 C, 1 atm) in meters per second
knot    one knot in meters per second

Temperature

zero_Celsius         zero of Celsius scale in Kelvin
degree_Fahrenheit    one Fahrenheit (only differences) in Kelvins

C2K(C)    Convert Celsius to Kelvin
K2C(K)    Convert Kelvin to Celsius
F2C(F)    Convert Fahrenheit to Celsius
C2F(C)    Convert Celsius to Fahrenheit
F2K(F)    Convert Fahrenheit to Kelvin
K2F(K)    Convert Kelvin to Fahrenheit

scipy.constants.C2K(C)
    Convert Celsius to Kelvin

    Parameters
        C : array_like
            Celsius temperature(s) to be converted.

    Returns
        K : float or array of floats
            Equivalent Kelvin temperature(s).

    Notes
    Computes K = C + zero_Celsius where zero_Celsius = 273.15, i.e., (the absolute value of) temperature "absolute zero" as measured in Celsius.

    Examples
    >>> from scipy.constants import C2K
    >>> C2K(np.array([-40, 40.0]))
    array([ 233.15,  313.15])

scipy.constants.K2C(K)
    Convert Kelvin to Celsius

    Parameters
        K : array_like
            Kelvin temperature(s) to be converted.

    Returns
        C : float or array of floats
            Equivalent Celsius temperature(s).

    Notes
    Computes C = K - zero_Celsius where zero_Celsius = 273.15, i.e., (the absolute value of) temperature "absolute zero" as measured in Celsius.

    Examples
    >>> from scipy.constants import K2C
    >>> K2C(np.array([233.15, 313.15]))
    array([-40.,  40.])

scipy.constants.C2F(C)
    Convert Celsius to Fahrenheit

    Parameters
        C : array_like
            Celsius temperature(s) to be converted.

    Returns
        F : float or array of floats
            Equivalent Fahrenheit temperature(s).

    Notes
    Computes F = 1.8 * C + 32.

    Examples
    >>> from scipy.constants import C2F
    >>> C2F(np.array([-40, 40.0]))
    array([ -40.,  104.])

scipy.constants.F2C(F)
    Convert Fahrenheit to Celsius

    Parameters
        F : array_like
            Fahrenheit temperature(s) to be converted.

    Returns
        C : float or array of floats
            Equivalent Celsius temperature(s).

    Notes
    Computes C = (F - 32) / 1.8.

    Examples
    >>> from scipy.constants import F2C
    >>> F2C(np.array([-40, 40.0]))
    array([-40.        ,   4.44444444])

scipy.constants.F2K(F)
    Convert Fahrenheit to Kelvin

    Parameters
        F : array_like
            Fahrenheit temperature(s) to be converted.

    Returns
        K : float or array of floats
            Equivalent Kelvin temperature(s).

    Notes
    Computes K = (F - 32)/1.8 + zero_Celsius where zero_Celsius = 273.15, i.e., (the absolute value of) temperature "absolute zero" as measured in Celsius.

    Examples
    >>> from scipy.constants import F2K
    >>> F2K(np.array([-40, 104]))
    array([ 233.15,  313.15])

scipy.constants.K2F(K)
    Convert Kelvin to Fahrenheit

    Parameters
        K : array_like
            Kelvin temperature(s) to be converted.

    Returns
        F : float or array of floats
            Equivalent Fahrenheit temperature(s).

    Notes
    Computes F = 1.8 * (K - zero_Celsius) + 32 where zero_Celsius = 273.15, i.e., (the absolute value of) temperature "absolute zero" as measured in Celsius.

    Examples
    >>> from scipy.constants import K2F
    >>> K2F(np.array([233.15, 313.15]))
    array([ -40.,  104.])

Energy

eV            one electron volt in Joules
calorie       one calorie (thermochemical) in Joules
calorie_IT    one calorie (International Steam Table calorie, 1956) in Joules
erg           one erg in Joules
Btu           one British thermal unit (International Steam Table) in Joules
Btu_th        one British thermal unit (thermochemical) in Joules
ton_TNT       one ton of TNT in Joules
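The six conversion formulas spelled out in the Notes sections above can be re-implemented in a few lines of plain Python. This is a stand-alone illustration of the documented formulas, not the scipy.constants functions themselves:

```python
ZERO_CELSIUS = 273.15  # zero of the Celsius scale in kelvin

# Stand-alone versions of the documented conversion formulas.
def c2k(c):
    return c + ZERO_CELSIUS

def k2c(k):
    return k - ZERO_CELSIUS

def c2f(c):
    return 1.8 * c + 32

def f2c(f):
    return (f - 32) / 1.8

def f2k(f):
    return (f - 32) / 1.8 + ZERO_CELSIUS

def k2f(k):
    return 1.8 * (k - ZERO_CELSIUS) + 32

print(c2k(-40.0))   # about 233.15
print(f2k(104.0))   # about 313.15
print(k2f(233.15))  # about -40.0
```

Note that -40 is the point where the Celsius and Fahrenheit scales coincide, which is why it shows up throughout the doctests above.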

Power

hp    one horsepower in watts

Force

dyn    one dyne in newtons
lbf    one pound force in newtons
kgf    one kilogram force in newtons

Optics

lambda2nu(lambda_)    Convert wavelength to optical frequency
nu2lambda(nu)         Convert optical frequency to wavelength.

scipy.constants.lambda2nu(lambda_)
    Convert wavelength to optical frequency

    Parameters
        lambda : array_like
            Wavelength(s) to be converted.

    Returns
        nu : float or array of floats
            Equivalent optical frequency.

    Notes
    Computes nu = c / lambda where c = 299792458.0, the (vacuum) speed of light in meters/second.

    Examples
    >>> from scipy.constants import lambda2nu, speed_of_light
    >>> lambda2nu(np.array((1, speed_of_light)))
    array([  2.99792458e+08,   1.00000000e+00])
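Because the conversion is just nu = c / lambda (and lambda = c / nu for the inverse), a stand-alone sketch of the documented relation (not the scipy.constants implementation) makes the round-trip property obvious:

```python
C = 299792458.0  # vacuum speed of light, in meters per second

def lambda2nu_sketch(lam):
    # Documented relation: nu = c / lambda
    return C / lam

def nu2lambda_sketch(nu):
    # Documented relation: lambda = c / nu
    return C / nu

print(lambda2nu_sketch(1.0))                       # 299792458.0 Hz for a 1 m wavelength
print(nu2lambda_sketch(lambda2nu_sketch(0.5)))     # round-trips back to 0.5 m
```

Since each function is the other's inverse, composing them returns the original value exactly (up to floating-point division).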

scipy.constants.nu2lambda(nu)
    Convert optical frequency to wavelength.

    Parameters
        nu : array_like
            Optical frequency to be converted.

    Returns
        lambda : float or array of floats
            Equivalent wavelength(s).

    Notes
    Computes lambda = c / nu where c = 299792458.0, the (vacuum) speed of light in meters/second.

    Examples
    >>> from scipy.constants import nu2lambda, speed_of_light
    >>> nu2lambda(np.array((1, speed_of_light)))
    array([  2.99792458e+08,   1.00000000e+00])

4.4.4 References

4.5 Discrete Fourier transforms (scipy.fftpack)

4.5.1 Fast Fourier Transforms (FFTs)

fft(x[, n, axis, overwrite_x])                 Return discrete Fourier transform of real or complex sequence.
ifft(x[, n, axis, overwrite_x])                Return discrete inverse Fourier transform of real or complex sequence.
fft2(x[, shape, axes, overwrite_x])            2-D discrete Fourier transform.
ifft2(x[, shape, axes, overwrite_x])           2-D discrete inverse Fourier transform of real or complex sequence.
fftn(x[, shape, axes, overwrite_x])            Return multi-dimensional discrete Fourier transform of x.
ifftn(x[, shape, axes, overwrite_x])           Return inverse multi-dimensional discrete Fourier transform of arbitrary type sequence x.
rfft(x[, n, axis, overwrite_x])                Discrete Fourier transform of a real sequence.
irfft(x[, n, axis, overwrite_x])               Return inverse discrete Fourier transform of real sequence x.
dct(x[, type, n, axis, norm, overwrite_x])     Return the Discrete Cosine Transform of arbitrary type sequence x.
idct(x[, type, n, axis, norm, overwrite_x])    Return the Inverse Discrete Cosine Transform of an arbitrary type sequence.

scipy.fftpack.fft(x, n=None, axis=-1, overwrite_x=0)
    Return discrete Fourier transform of real or complex sequence.

    The returned complex array contains y(0), y(1),..., y(n-1) where

        y(j) = (x * exp(-2*pi*sqrt(-1)*j*np.arange(n)/n)).sum().

    Parameters
        x : array_like
            Array to Fourier transform.
        n : int, optional
            Length of the Fourier transform. If n < x.shape[axis], x is truncated. If n > x.shape[axis], x is zero-padded. The default results in n = x.shape[axis].
        axis : int, optional
            Axis along which the fft's are computed; the default is over the last axis (i.e., axis=-1).
        overwrite_x : bool, optional
            If True the contents of x can be destroyed; the default is False.

    Returns
        z : complex ndarray
            with the elements:
                [y(0),y(1),..,y(n/2),y(1-n/2),...,y(-1)]          if n is even
                [y(0),y(1),..,y((n-1)/2),y(-(n-1)/2),...,y(-1)]   if n is odd
            where
                y(j) = sum[k=0..n-1] x[k] * exp(-sqrt(-1)*j*k* 2*pi/n), j = 0..n-1
            Note that y(-j) = y(n-j).conjugate().

    See Also
        ifft : Inverse FFT
        rfft : FFT of a real sequence

    Notes
    The packing of the result is "standard": If A = fft(a, n), then A[0] contains the zero-frequency term, A[1:n/2+1] contains the positive-frequency terms, and A[n/2+1:] contains the negative-frequency terms, in order of decreasingly negative frequency. So for an 8-point transform, the frequencies of the result are [ 0, 1, 2, 3, 4, -3, -2, -1].

    For n even, A[n/2] contains the sum of the positive and negative-frequency terms. For n even and x real, A[n/2] will always be real.

    This is most efficient for n a power of two.

    Examples
    >>> from scipy.fftpack import fft, ifft
    >>> x = np.arange(5)
    >>> np.allclose(fft(ifft(x)), x, atol=1e-15)  # within numerical accuracy.
    True

scipy.fftpack.ifft(x, n=None, axis=-1, overwrite_x=0)
    Return discrete inverse Fourier transform of real or complex sequence.

    The returned complex array contains y(0), y(1),..., y(n-1) where

        y(j) = (x * exp(2*pi*sqrt(-1)*j*np.arange(n)/n)).mean().

    Parameters
        x : array_like
            Transformed data to invert.
        n : int, optional
            Length of the inverse Fourier transform. If n < x.shape[axis], x is truncated. If n > x.shape[axis], x is zero-padded. The default results in n = x.shape[axis].
        axis : int, optional
            Axis along which the ifft's are computed; the default is over the last axis (i.e., axis=-1).
        overwrite_x : bool, optional
            If True the contents of x can be destroyed; the default is False.

scipy.fftpack.fft2(x, shape=None, axes=(-2, -1), overwrite_x=0)
    2-D discrete Fourier transform.

    Return the two-dimensional discrete Fourier transform of the 2-D argument x.

    See Also
        fftn : for detailed information.
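The packing convention described in the Notes above can be checked directly. This sketch assumes scipy.fftpack is importable and uses a small made-up input:

```python
import numpy as np
from scipy.fftpack import fft, ifft

x = np.arange(8.0)   # real input, n = 8 (even)
X = fft(x)

# A[0] is the zero-frequency term: the plain sum of the input.
print(np.allclose(X[0], x.sum()))

# For n even and x real, A[n/2] is (numerically) real.
print(abs(X[4].imag) < 1e-9)

# ifft inverts fft to within numerical accuracy.
print(np.allclose(ifft(X), x))
```

The same round-trip property is what the doctest in the fft entry above demonstrates with `np.allclose(fft(ifft(x)), x)`.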

. shape is x. the contents of x can be destroyed...conjugate(). .10.ifft2(x. -j_i. fftn(ifftn(y))) True 4.fftpack. n_i-j_i.. axes. axes=(-2. The returned array contains: y[j_1. See Also: fft2.j_d] = sum[k_1=0. ifft scipy. optional = y[. optional The axes of x (y if shape is not None) along which the transform is applied. . the i-th dimension is padded with zeros. Release 0.shape[i].take(x.. shape : tuple of ints. -1). See ifft for more information. axes=None.n_1-1. If shape[i] < x. then shape is scipy...shape... Default is False. shape=None.... If shape[i] > x.. The shape of the result..np...shape[i]. overwrite_x=0) 2-D discrete inverse Fourier transform of real or complex sequence.SciPy Reference Guide.arange(16). Note that y[.0rc1 See Also: fftn for detailed information.arange(16).shape) and n = x. k_d=0.shape.allclose(y..5. scipy.fftpack..fftpack) 199 . axes : array_like of ints. overwrite_x : bool.shape.. np.fftn(x.] . If both shape and axes (see below) are None. Returns y : complex-valued n-dimensional numpy array The (n-dimensional) DFT of the input array.d] exp(-sqrt(-1)*2*pi/n_i * j_i * k_i) where d = len(x. Return inverse two-dimensional discrete Fourier transform of arbitrary type sequence x.k_d] * prod[i=1. if shape is None but axes is not None. See Also: ifftn Examples >>> y = (-np. 8 . overwrite_x=0) Return multi-dimensional discrete Fourier transform of x. Parameters x : array_like The (n-dimensional) array to transform. axis=0).arange(16)) >>> np.]. shape=None. optional If True. Discrete Fourier transforms (scipy. the i-th dimension is truncated to length shape[i].n_d-1] x[k_1....

n = x. Reference .d] n_i... overwrite_x=0) irfft(x. the contents of x can be overwritten.conjugate().shape. Parameters x : array_like. overwrite_x=0) Discrete Fourier transform of a real sequence.. scipy.. The returned array contains: y[j_1. n=None.. If n is not speciﬁed (the default) then n = x. irfft.fftpack.ifftn(x.fftpack.d] exp(sqrt(-1)*2*pi/n_i * j_i * k_i) where d = len(x.shape).Re(y(n/2))] [y(0).0rc1 scipy.... k_d=0.. overwrite_x : bool. ..Im(y(n/2))] if n is even if n is odd where y(j) = sum[k=0. optional If set to true. axis : int. The contents of x is interpreted as the output of rfft(.n_d-1] x[k_1..irfft(x.) function. Default is False. and p = prod[i=1... Release 0. optional The axis along which the transform is applied.rfft(x. if n > x.Re(y(n/2)).n_1-1. axis=-1. y == rfft(irfft(y)).Re(y(1)).shape[axis].n-1 Note that y(-j) == y(n-j). scipy..Im(y(1)). x is zero-padded...SciPy Reference Guide.shape[axis]... The default is the last axis.basic Notes Within numerical accuracy. If n < x.Im(y(1)). overwrite_x=0) -> y Return inverse discrete Fourier transform of real sequence x..fftpack. axis=-1.. 200 Chapter 4.10.. axis=-1.shape[axis]. scipy. For description of parameters see fftn.fftpack. n=None.k_d] * prod[i=1.n-1] x[k] * exp(-sqrt(-1)*j*k*2*pi/n) j = 0. n : int. overwrite_x=0) Return inverse multi-dimensional discrete Fourier transform of arbitrary type sequence x. optional Deﬁnes the length of the Fourier transform.. The returned real arrays contains: [y(0). See Also: fft.Re(y(1)). real-valued The data to tranform.. x is truncated. shape=None.. axes=None. See Also: fftn for detailed information.j_d] = 1/p * sum[k_1=0. n=None.

The returned real array contains [y(0), y(1), ..., y(n-1)], where for n even:

y(j) = 1/n (sum[k=1..n/2-1] (x[2*k-1] + sqrt(-1)*x[2*k]) * exp(sqrt(-1)*j*k*2*pi/n) + c.c. + x[0] + (-1)**j * x[n-1])

and for n odd:

y(j) = 1/n (sum[k=1..(n-1)/2] (x[2*k-1] + sqrt(-1)*x[2*k]) * exp(sqrt(-1)*j*k*2*pi/n) + c.c. + x[0])

c.c. denotes the complex conjugate of the preceding expression.

Optional input: see rfft.__doc__

scipy.fftpack.dct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=0)
Return the Discrete Cosine Transform of arbitrary type sequence x.

Parameters
x : array_like
    The input array.
type : {1, 2, 3}, optional
    Type of the DCT (see Notes). Default type is 2.
n : int, optional
    Length of the transform.
axis : int, optional
    Axis over which to compute the transform.
norm : {None, 'ortho'}, optional
    Normalization mode (see Notes). Default is None.
overwrite_x : bool, optional
    If True the contents of x can be destroyed. (default=False)

Returns
y : ndarray of real
    The transformed input array.

See Also:
idct
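As a check on the transform pair, the orthonormalized DCT-II and DCT-III are exact inverses of each other. A small sketch with arbitrary sample values:

```python
import numpy as np
from scipy.fftpack import dct, idct

x = np.array([4.0, 3.0, 5.0, 10.0])

# DCT-II with norm='ortho' (the MATLAB-compatible convention).
y = dct(x, type=2, norm='ortho')

# idct with type=2 applies the orthonormalized DCT-III, which is the
# exact inverse of the orthonormalized DCT-II.
x_back = idct(y, type=2, norm='ortho')
print(np.allclose(x_back, x))
```

With norm=None the same round trip returns the input scaled by 2*N, matching the "up to a factor 2N" remark in the Notes below.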

Notes

For a single dimension array x, dct(x, norm='ortho') is equal to MATLAB dct(x).

There are theoretically 8 types of the DCT; only the first 3 types are implemented in scipy. 'The' DCT generally refers to DCT type 2, and 'the' Inverse DCT generally refers to DCT type 3.

There are several definitions of the DCT-I; we use the following (for norm=None):

y[k] = x[0] + (-1)**k * x[N-1] + 2 * sum[n=1..N-2] x[n]*cos(pi*k*n/(N-1))

Only None is supported as normalization mode for DCT-I. Note also that the DCT-I is only supported for input size > 1.

There are several definitions of the DCT-II; we use the following (for norm=None):

y[k] = 2 * sum[n=0..N-1] x[n]*cos(pi*k*(2n+1)/(2*N)),  0 <= k < N.

If norm='ortho', y[k] is multiplied by a scaling factor f:

f = sqrt(1/(4*N)) if k = 0,
f = sqrt(1/(2*N)) otherwise,

which makes the corresponding matrix of coefficients orthonormal (OO' = Id).

There are several definitions of the DCT-III; we use the following (for norm=None):

y[k] = x[0] + 2 * sum[n=1..N-1] x[n]*cos(pi*(k+0.5)*n/N),  0 <= k < N,

or, for norm='ortho' and 0 <= k < N:

y[k] = x[0] / sqrt(N) + sqrt(1/N) * sum[n=1..N-1] x[n]*cos(pi*(k+0.5)*n/N)

The (unnormalized) DCT-III is the inverse of the (unnormalized) DCT-II, up to a factor 2N. The orthonormalized DCT-III is exactly the inverse of the orthonormalized DCT-II.

References

http://en.wikipedia.org/wiki/Discrete_cosine_transform

'A Fast Cosine Transform in One and Two Dimensions', by J. Makhoul, IEEE Transactions on acoustics, speech and signal processing, vol. 28(1), pp. 27-34, http://dx.doi.org/10.1109/TASSP.1980.1163351 (1980).

scipy.fftpack.idct(x, type=2, n=None, axis=-1, norm=None, overwrite_x=0)
Return the Inverse Discrete Cosine Transform of an arbitrary type sequence.

Parameters
x : array_like
    The input array.
type : {1, 2, 3}, optional
    Type of the DCT (see Notes). Default type is 2.
n : int, optional

    Length of the transform.
axis : int, optional
    Axis over which to compute the transform.
norm : {None, 'ortho'}, optional
    Normalization mode (see Notes). Default is None.
overwrite_x : bool, optional
    If True the contents of x can be destroyed. (default=False)

Returns
y : ndarray of real
    The transformed input array.

See Also:
dct

Notes

For a single dimension array x, idct(x, norm='ortho') is equal to MATLAB idct(x).

'The' IDCT is the IDCT of type 2, which is the same as DCT of type 3.

IDCT of type 1 is the DCT of type 1, IDCT of type 2 is the DCT of type 3, and IDCT of type 3 is the DCT of type 2. For the definition of these types, see dct.

4.5.2 Differential and pseudo-differential operators

diff(x[, order, period, _cache])       Return k-th derivative (or integral) of a periodic sequence x.
tilbert(x, h[, period, _cache])        Return h-Tilbert transform of a periodic sequence x.
itilbert(x, h[, period, _cache])       Return inverse h-Tilbert transform of a periodic sequence x.
hilbert(x[, _cache])                   Return Hilbert transform of a periodic sequence x.
ihilbert(x)                            Return inverse Hilbert transform of a periodic sequence x.
cs_diff(x, a, b[, period, _cache])     Return (a,b)-cosh/sinh pseudo-derivative of a periodic sequence x.
sc_diff(x, a, b[, period, _cache])     Return (a,b)-sinh/cosh pseudo-derivative of a periodic sequence x.
ss_diff(x, a, b[, period, _cache])     Return (a,b)-sinh/sinh pseudo-derivative of a periodic sequence x.
cc_diff(x, a, b[, period, _cache])     Return (a,b)-cosh/cosh pseudo-derivative of a periodic sequence x.
shift(x, a[, period, _cache])          Shift periodic sequence x by a: y(u) = x(u+a).

scipy.fftpack.diff(x, order=1, period=None, _cache={})
diff(x, order=1, period=2*pi) -> y

Return k-th derivative (or integral) of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = pow(sqrt(-1)*j*2*pi/period, order) * x_j
y_0 = 0 if order is not 0.

Optional input:
order
    The order of differentiation. Default order is 1. If order is negative, then integration is carried out under the assumption that x_0 == 0.
period
    The assumed period of the sequence. Default is 2*pi.
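The spectral derivative defined above can be checked on a smooth periodic function: sampled on a 2*pi-periodic grid, diff(sin) should reproduce cos. A small sketch:

```python
import numpy as np
from scipy.fftpack import diff

# 2*pi-periodic grid (endpoint excluded, since the sequence is periodic).
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)

# First derivative via y_j = (sqrt(-1)*j*2*pi/period)**order * x_j.
d = diff(np.sin(t))
print(np.allclose(d, np.cos(t)))

# A negative order integrates (assuming the zero mode x_0 == 0):
# integrating the derivative recovers the original samples.
s = diff(d, order=-1)
print(np.allclose(s, np.sin(t)))
```

Because the operator acts in Fourier space, the derivative of a band-limited periodic signal is exact to numerical accuracy, unlike finite differences.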

Notes:
If sum(x, axis=0) = 0 then diff(diff(x, k), -k) == x (within numerical accuracy).

For odd order and even len(x), the Nyquist mode is taken zero.

scipy.fftpack.tilbert(x, h, period=None, _cache={})
tilbert(x, h, period=2*pi) -> y

Return h-Tilbert transform of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = sqrt(-1)*coth(j*h*2*pi/period) * x_j
y_0 = 0

Input:
h
    Defines the parameter of the Tilbert transform.
period
    The assumed period of the sequence. Default period is 2*pi.

Notes:
If sum(x, axis=0) == 0 and n = len(x) is odd then tilbert(itilbert(x)) == x.

If 2*pi*h/period is approximately 10 or larger then numerically tilbert == hilbert (theoretically oo-Tilbert == Hilbert).

For even len(x), the Nyquist mode of x is taken zero.

scipy.fftpack.itilbert(x, h, period=None, _cache={})
itilbert(x, h, period=2*pi) -> y

Return inverse h-Tilbert transform of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = -sqrt(-1)*tanh(j*h*2*pi/period) * x_j
y_0 = 0

Optional input: see tilbert.__doc__

scipy.fftpack.hilbert(x, _cache={})
hilbert(x) -> y

Return Hilbert transform of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = sqrt(-1)*sign(j) * x_j
y_0 = 0

Parameters
x : array_like
    The input array, should be periodic.
_cache : dict, optional
    Dictionary that contains the kernel used to do a convolution with.

Returns
y : ndarray
    The transformed input.

Notes

If sum(x, axis=0) == 0 then hilbert(ihilbert(x)) == x.

For even len(x), the Nyquist mode of x is taken zero.

The sign of the returned transform does not have a factor -1 that is more often than not found in the definition of the Hilbert transform. Note also that scipy.signal.hilbert does have an extra -1 factor compared to this function.

scipy.fftpack.ihilbert(x)
ihilbert(x) -> y

Return inverse Hilbert transform of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = -sqrt(-1)*sign(j) * x_j
y_0 = 0

scipy.fftpack.cs_diff(x, a, b, period=None, _cache={})
cs_diff(x, a, b, period=2*pi) -> y

Return (a,b)-cosh/sinh pseudo-derivative of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = -sqrt(-1)*cosh(j*a*2*pi/period)/sinh(j*b*2*pi/period) * x_j
y_0 = 0

Input:
a, b
    Defines the parameters of the cosh/sinh pseudo-differential operator.
period
    The period of the sequence. Default period is 2*pi.

Notes:
For even len(x), the Nyquist mode of x is taken as zero.

scipy.fftpack.sc_diff(x, a, b, period=None, _cache={})
Return (a,b)-sinh/cosh pseudo-derivative of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then:

y_j = sqrt(-1)*sinh(j*a*2*pi/period)/cosh(j*b*2*pi/period) * x_j
y_0 = 0

Parameters
x : array_like
    Input array.
a, b : float
    Defines the parameters of the sinh/cosh pseudo-differential operator.
period : float, optional
    The period of the sequence x. Default is 2*pi.
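With the hilbert sign convention given above (y_j = sqrt(-1)*sign(j) * x_j), the Hilbert transform of sin is cos. A small sketch:

```python
import numpy as np
from scipy.fftpack import hilbert, ihilbert

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)

# y_j = sqrt(-1)*sign(j) * x_j maps sin onto cos under this convention.
h = hilbert(np.sin(t))
print(np.allclose(h, np.cos(t)))

# ihilbert applies the opposite sign, so the pair inverts for
# zero-mean input (sum(x) == 0 here).
print(np.allclose(ihilbert(h), np.sin(t)))
```

Keep in mind the sign note above: scipy.signal.hilbert uses the opposite convention, so its result would carry an extra -1 factor.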

Notes

sc_diff(cs_diff(x, a, b), b, a) == x

For even len(x), the Nyquist mode of x is taken as zero.

scipy.fftpack.ss_diff(x, a, b, period=None, _cache={})
ss_diff(x, a, b, period=2*pi) -> y

Return (a,b)-sinh/sinh pseudo-derivative of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = sinh(j*a*2*pi/period)/sinh(j*b*2*pi/period) * x_j
y_0 = a/b * x_0

Input:
a, b
    Defines the parameters of the sinh/sinh pseudo-differential operator.
period
    The period of the sequence x. Default is 2*pi.

Notes:
ss_diff(ss_diff(x, a, b), b, a) == x

scipy.fftpack.cc_diff(x, a, b, period=None, _cache={})
cc_diff(x, a, b, period=2*pi) -> y

Return (a,b)-cosh/cosh pseudo-derivative of a periodic sequence x.

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = cosh(j*a*2*pi/period)/cosh(j*b*2*pi/period) * x_j

Input:
a, b
    Defines the parameters of the cosh/cosh pseudo-differential operator.

Optional input:
period
    The period of the sequence x. Default is 2*pi.

Notes:
cc_diff(cc_diff(x, a, b), b, a) == x

scipy.fftpack.shift(x, a, period=None, _cache={})
shift(x, a, period=2*pi) -> y

Shift periodic sequence x by a: y(u) = x(u+a).

If x_j and y_j are Fourier coefficients of periodic functions x and y, respectively, then

y_j = exp(j*a*2*pi/period*sqrt(-1)) * x_j

Optional input:
period
    The period of the sequences x and y. Default period is 2*pi.
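The spectral shift y(u) = x(u+a) can be checked on a sampled sinusoid; shifting sin by pi/2 must yield cos on the same grid. A small sketch:

```python
import numpy as np
from scipy.fftpack import shift

t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
a = np.pi / 2

# y(u) = x(u + a), computed by phase-rotating the Fourier coefficients.
y = shift(np.sin(t), a)
print(np.allclose(y, np.sin(t + a)))
print(np.allclose(y, np.cos(t)))
```

Because the shift is applied in Fourier space, a does not have to be a multiple of the grid spacing; any real shift of a band-limited periodic signal is exact.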

4.5.3 Helper functions

fftshift(x[, axes])     Shift the zero-frequency component to the center of the spectrum.
ifftshift(x[, axes])    The inverse of fftshift.
fftfreq(n[, d])         Return the Discrete Fourier Transform sample frequencies.
rfftfreq(n[, d])        DFT sample frequencies (for usage with rfft, irfft).

scipy.fftpack.fftshift(x, axes=None)
Shift the zero-frequency component to the center of the spectrum.

This function swaps half-spaces for all axes listed (defaults to all). Note that y[0] is the Nyquist component only if len(x) is even.

Parameters
x : array_like
    Input array.
axes : int or shape tuple, optional
    Axes over which to shift. Default is None, which shifts all axes.

Returns
y : ndarray
    The shifted array.

See Also:
ifftshift : The inverse of fftshift.

Examples

>>> freqs = np.fft.fftfreq(10, 0.1)
>>> freqs
array([ 0.,  1.,  2.,  3.,  4., -5., -4., -3., -2., -1.])
>>> np.fft.fftshift(freqs)
array([-5., -4., -3., -2., -1.,  0.,  1.,  2.,  3.,  4.])

Shift the zero-frequency component only along the second axis:

>>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3)
>>> freqs
array([[ 0.,  1.,  2.],
       [ 3.,  4., -4.],
       [-3., -2., -1.]])
>>> np.fft.fftshift(freqs, axes=(1,))
array([[ 2.,  0.,  1.],
       [-4.,  3.,  4.],
       [-1., -3., -2.]])

scipy.fftpack.ifftshift(x, axes=None)
The inverse of fftshift.

Parameters
x : array_like
    Input array.
axes : int or shape tuple, optional

    Axes over which to calculate. Defaults to None, which shifts all axes.

Returns
y : ndarray
    The shifted array.

See Also:
fftshift : Shift zero-frequency component to the center of the spectrum.

Examples

>>> freqs = np.fft.fftfreq(9, d=1./9).reshape(3, 3)
>>> freqs
array([[ 0.,  1.,  2.],
       [ 3.,  4., -4.],
       [-3., -2., -1.]])
>>> np.fft.ifftshift(np.fft.fftshift(freqs))
array([[ 0.,  1.,  2.],
       [ 3.,  4., -4.],
       [-3., -2., -1.]])

scipy.fftpack.fftfreq(n, d=1.0)
Return the Discrete Fourier Transform sample frequencies.

The returned float array contains the frequency bins in cycles/unit (with zero at the start) given a window length n and a sample spacing d:

f = [0, 1, ..., n/2-1, -n/2, ..., -1] / (d*n)           if n is even
f = [0, 1, ..., (n-1)/2, -(n-1)/2, ..., -1] / (d*n)     if n is odd

Parameters
n : int
    Window length.
d : scalar
    Sample spacing.

Returns
out : ndarray
    The array of length n, containing the sample frequencies.

Examples

>>> signal = np.array([-2, 8, 6, 4, 1, 0, 3, 5], dtype=float)
>>> fourier = np.fft.fft(signal)
>>> n = signal.size
>>> timestep = 0.1
>>> freq = np.fft.fftfreq(n, d=timestep)
>>> freq
array([ 0.  ,  1.25,  2.5 ,  3.75, -5.  , -3.75, -2.5 , -1.25])

scipy.fftpack.rfftfreq(n, d=1.0)
DFT sample frequencies (for usage with rfft, irfft).

The returned float array contains the frequency bins in cycles/unit (with zero at the start) given a window length n and a sample spacing d:

f = [0, 1, 1, 2, 2, ..., n/2-1, n/2-1, n/2] / (d*n)        if n is even
f = [0, 1, 1, 2, 2, ..., n/2-1, n/2-1, n/2, n/2] / (d*n)   if n is odd

4.5.4 Convolutions (scipy.fftpack.convolve)

convolve                   Function signature: y = convolve(x, omega, [swap_real_imag, overwrite_x])
convolve_z                 Function signature: y = convolve_z(x, omega_real, omega_imag, [overwrite_x])
init_convolution_kernel    Function signature: omega = init_convolution_kernel(n, kernel_func, [d, zero_nyquist, kernel_func_extra_args])
destroy_convolve_cache     Function signature: destroy_convolve_cache()

scipy.fftpack.convolve.convolve
convolve - Function signature:
y = convolve(x, omega, [swap_real_imag, overwrite_x])

Required arguments:
x : input rank-1 array('d') with bounds (n)
omega : input rank-1 array('d') with bounds (n)

Optional arguments:
overwrite_x := 0 input int
swap_real_imag := 0 input int

Return objects:
y : rank-1 array('d') with bounds (n) and x storage

scipy.fftpack.convolve.convolve_z
convolve_z - Function signature:
y = convolve_z(x, omega_real, omega_imag, [overwrite_x])

Required arguments:
x : input rank-1 array('d') with bounds (n)
omega_real : input rank-1 array('d') with bounds (n)
omega_imag : input rank-1 array('d') with bounds (n)

Optional arguments:
overwrite_x := 0 input int

Return objects:
y : rank-1 array('d') with bounds (n) and x storage

scipy.fftpack.convolve.init_convolution_kernel
init_convolution_kernel - Function signature:
omega = init_convolution_kernel(n, kernel_func, [d, zero_nyquist, kernel_func_extra_args])

Required arguments:
n : input int
kernel_func : call-back function

Optional arguments:
d := 0 input int
kernel_func_extra_args := () input tuple
zero_nyquist := d%2 input int

Return objects:
omega : rank-1 array('d') with bounds (n)

Call-back functions:
def kernel_func(k): return kernel_func
Required arguments:
k : input int
Return objects:
kernel_func : float

scipy.fftpack.convolve.destroy_convolve_cache
destroy_convolve_cache - Function signature:
destroy_convolve_cache()

4.5.5 Other (scipy.fftpack._fftpack)

drfft                   Function signature: y = drfft(x, [n, direction, normalize, overwrite_x])
zfft                    Function signature: y = zfft(x, [n, direction, normalize, overwrite_x])
zrfft                   Function signature: y = zrfft(x, [n, direction, normalize, overwrite_x])
zfftnd                  Function signature: y = zfftnd(x, [s, direction, normalize, overwrite_x])
destroy_drfft_cache     Function signature: destroy_drfft_cache()
destroy_zfft_cache      Function signature: destroy_zfft_cache()
destroy_zfftnd_cache    Function signature: destroy_zfftnd_cache()

scipy.fftpack._fftpack.drfft
drfft - Function signature:
y = drfft(x, [n, direction, normalize, overwrite_x])

Required arguments:
x : input rank-1 array('d') with bounds (*)

Optional arguments:
overwrite_x := 0 input int
n := size(x) input int
direction := 1 input int
normalize := (direction<0) input int

Return objects:
y : rank-1 array('d') with bounds (*) and x storage

scipy.fftpack._fftpack.zfft
zfft - Function signature:
y = zfft(x, [n, direction, normalize, overwrite_x])

Required arguments:
x : input rank-1 array('D') with bounds (*)

Optional arguments:
overwrite_x := 0 input int
n := size(x) input int
direction := 1 input int
normalize := (direction<0) input int

Return objects:
y : rank-1 array('D') with bounds (*) and x storage

scipy.fftpack._fftpack.zrfft
zrfft - Function signature:
y = zrfft(x, [n, direction, normalize, overwrite_x])

Required arguments:
x : input rank-1 array('D') with bounds (*)

Optional arguments:
overwrite_x := 1 input int
n := size(x) input int
direction := 1 input int
normalize := (direction<0) input int

Return objects:
y : rank-1 array('D') with bounds (*) and x storage

scipy.fftpack._fftpack.zfftnd
zfftnd - Function signature:
y = zfftnd(x, [s, direction, normalize, overwrite_x])

Required arguments:
x : input rank-1 array('D') with bounds (*)

Optional arguments:
overwrite_x := 0 input int
s := old_shape(x, j++) input rank-1 array('i') with bounds (r)
direction := 1 input int
normalize := (direction<0) input int

Return objects:
y : rank-1 array('D') with bounds (*) and x storage

scipy.fftpack._fftpack.destroy_drfft_cache
destroy_drfft_cache - Function signature:
destroy_drfft_cache()

scipy.fftpack._fftpack.destroy_zfft_cache
destroy_zfft_cache - Function signature:
destroy_zfft_cache()

scipy.fftpack._fftpack.destroy_zfftnd_cache
destroy_zfftnd_cache - Function signature:
destroy_zfftnd_cache()

4.6 Integration and ODEs (scipy.integrate)

4.6.1 Integrating functions, given function object

quad(func, a, b[, args, full_output, ...])          Compute a definite integral.
dblquad(func, a, b, gfun, hfun[, args, ...])        Compute a double integral.
tplquad(func, a, b, gfun, hfun, qfun, rfun)         Compute a triple (definite) integral.
fixed_quad(func, a, b[, args, n])                   Compute a definite integral using fixed-order Gaussian quadrature.
quadrature(func, a, b[, args, tol, rtol, ...])      Compute a definite integral using fixed-tolerance Gaussian quadrature.
romberg(function, a, b[, args, tol, rtol, ...])     Romberg integration of a callable function or method.

scipy.integrate.quad(func, a, b, args=(), full_output=0, epsabs=1.49e-08, epsrel=1.49e-08, limit=50, points=None, weight=None, wvar=None, wopts=None, maxp1=50, limlst=50)
Compute a definite integral.

Integrate func from a to b (possibly infinite interval) using a technique from the Fortran library QUADPACK.

If func takes many arguments, it is integrated along the axis corresponding to the first argument. Use the keyword argument args to pass the other arguments. Run scipy.integrate.quad_explain() for more information on the more esoteric inputs and outputs.

Parameters
func : function
    A Python function or method to integrate.
a : float
    Lower limit of integration (use -scipy.Inf for -infinity).
b : float
    Upper limit of integration (use scipy.Inf for +infinity).

quad_explain() for more information.g. discontinuities).SciPy Reference Guide. Reference . Release 0.integrate. maxp1 : : 212 Chapter 4. limit : : an upper bound on the number of subintervals used in the adaptive algorithm. limlst : : Upper bound on the number of cylces (>=3) for use with a sinusoidal weighting and an inﬁnite end-point. warning messages are also suppressed and the message is appended to the output tuple. abserr : ﬂoat an estimate of the absolute error in the result. Returns y : ﬂoat The integral of func from a to b. infodict : dict a dictionary containing additional information. If non-zero. weight : : string indicating weighting function.10.0rc1 args : tuple. optional extra arguments to pass to func full_output : int Non-zero to return a dictionary of integration information. The sequence does not have to be sorted. it contains an explanation of the codes in infodict[’ierlst’] Other Parameters epsabs : : absolute error tolerance. epsrel : : relative error tolerance. points : : a sequence of break points in the bounded integration interval where local difﬁculties of the integrand may occur (e. Run scipy. singularities. explain : : appended only with ‘cos’ or ‘sin’ weighting and inﬁnite integration limits. wvar : : variables for use with weighting functions.. wopts : : Optional input for reusing Chebyshev moments. message : : a convergence message.

integrate. epsrel=1.hfun(x).4. args=(1. err = integrate.integrate) 213 .6.. romb scipy. ode.x) from x=a.3333333333 Calculate ∞ −x e dx 0 >>> invexp = lambda x: exp(-x) >>> integrate. Parameters func : callable A Python function or method of at least two variables: y must be the ﬁrst argument and x the second argument.dblquad(func. Integration and ODEs (scipy. See Also: dblquad.) (21. b.quad(invexp.. tplquad fixed_quad ﬁxed-order Gaussian quadrature quadrature adaptive Gaussian quadrature odeint. hfun.5 >>> >>> 1.0.**3/3 21. 5.333333333333332.a : a*x y. a.8426061711142159e-11) >>> >>> >>> 0. simps. 1..SciPy Reference Guide. 0.quad(f. 0. err = integrate.quad(f. gfun.b) : tuple The limits of integration in x: a < b gfun : callable The lower boundary curve in y which is a function taking a single ﬂoating point argument (x) and returning a ﬂoating point result: a lambda function can be useful here.0. 4. 1. args=(3.4899999999999999e-08) Compute a double integral. Release 0.quad(x2. epsabs=1. trapz.b and y=gfun(x).4899999999999999e-08. (a.)) y y.10.special for coefﬁcients and roots of orthogonal polynomials Examples Calculate 4 0 x2 dx and compare with an analytic result >>> from scipy import integrate >>> x2 = lambda x: x**2 >>> integrate. 2.)) y scipy.inf) (0.99999999999999989. Return the double (deﬁnite) integral of func(y.3684757858670003e-13) >> print 4. args=().0rc1 An upper bound on the number of Chebyshev moments.5 f = lambda x.

epsabs : ﬂoat. rfun. ode.b. hfun : function 214 Chapter 4.y). and z=qfun(x.0rc1 hfun : callable The upper boundary curve in y (same requirements as gfun).49e-8. qfun. Default is 1. epsrel : ﬂoat Relative tolerance of the inner 1-D integrals. epsrel=1.y) Parameters func : function A Python function or method of at least three variables in the order (z. x). args : sequence. simps.4899999999999999e08. Returns y : ﬂoat The resultant integral. x) from x=a.10. y.SciPy Reference Guide. b. args=().b) : tuple The limits of integration in x: a < b gfun : function The lower boundary curve in y which is a function taking a single ﬂoating point argument (x) and returning a ﬂoating point result: a lambda function can be useful here.integrate. epsabs=1. a. y..special for coefﬁcients and roots of orthogonal polynomials scipy. romb scipy.49e-8.4899999999999999e-08) Compute a triple (deﬁnite) integral. Default is 1. optional Absolute tolerance passed directly to the inner 1-D quadrature integration.hfun(x).. hfun. Release 0. (a.. Reference . abserr : ﬂoat An estimate of the error.rfun(x. trapz. optional Extra arguments to pass to func2d. y=gfun(x). gfun. Return the triple integral of func(z.tplquad(func. See Also: quad single integral tplquad triple integral fixed_quad ﬁxed-order Gaussian quadrature quadrature adaptive Gaussian quadrature odeint.

y) and returns a ﬂoat. It must be a function that takes two ﬂoats in the order (x. Returns y : ﬂoat The resultant integral.6.SciPy Reference Guide. abserr : ﬂoat An estimate of the error.integrate) 215 . rfun : function The upper boundary surface in z. optional Relative tolerance of the innermost 1-D integrals. See Also: quad Adaptive quadrature using QUADPACK quadrature Adaptive Gaussian quadrature fixed_quad Fixed-order Gaussian quadrature dblquad Double integrals romb Integrators for sampled data trapz Integrators for sampled data simps Integrators for sampled data ode ODE integrators odeint ODE integrators scipy. Default is 1. Default is 1. Release 0. optional Absolute tolerance passed directly to the innermost 1-D quadrature integration.) args : Arguments Extra arguments to pass to func3d.49e-8.special For coefﬁcients and roots of orthogonal polynomials 4. (Same requirements as qfun.0rc1 The upper boundary curve in y (same requirements as gfun). epsrel : ﬂoat.49e-8.10. qfun : function The lower boundary surface in z. Integration and ODEs (scipy. epsabs : ﬂoat.

tol=1. simps.4899999999999999e-08. Integrate func from a to b using Gaussian quadrature with absolute tolerance tol.integrate.0rc1 scipy. args=(). Default is 5.fixed_quad(func.10. b. Parameters func : callable A Python function or method to integrate (must accept vector inputs). Returns val : ﬂoat Gaussian quadrature approximation to the integral See Also: quad adaptive quadrature using QUADPACK dblquad. maxiter=50. if any.4899999999999999e-08.SciPy Reference Guide. args=(). b : ﬂoat 216 Chapter 4. a : ﬂoat Lower limit of integration. Reference . optional Order of quadrature integration.integrate. Integrate func from a to b using Gaussian quadrature of order n. optional Extra arguments to pass to function. n=5) Compute a deﬁnite integral using ﬁxed-order Gaussian quadrature. tplquad romberg adaptive Romberg quadrature quadrature adaptive Gaussian quadrature romb. args : tuple. n : int. odeint scipy. trapz cumtrapz cumulative integration for sampled data ode. vec_func=True) Compute a deﬁnite integral using ﬁxed-tolerance Gaussian quadrature. Parameters func : function A Python function or method to integrate. rtol=1. a. a. b : ﬂoat Upper limit of integration. b.quadrature(func. a : ﬂoat Lower limit of integration. Release 0.

vec_func : bool. See Also: romberg adaptive Romberg quadrature fixed_quad ﬁxed-order Gaussian quadrature quad adaptive quadrature using QUADPACK dblquad double integrals tplquad triple integrals romb integrator for sampled data simps integrator for sampled data trapz integrator for sampled data cumtrapz cumulative integration for sampled data ode ODE integrator odeint ODE integrator 4. optional Maximum number of iterations.SciPy Reference Guide. optional Iteration stops when error between last two iterates is less than tol OR the relative change is less than rtol.0rc1 Upper limit of integration. Integration and ODEs (scipy. rol : ﬂoat.10. tol. args : tuple. err : ﬂoat Difference between last two estimates of the integral. Default is True.6. maxiter : int. optional True or False if func handles arrays as arguments (is a “vector” function). Release 0. optional Extra arguments to pass to function.integrate) 217 . Returns val : ﬂoat Gaussian quadrature approximation (within tolerance) to integral.

divmax : int. b : ﬂoat Upper limit of integration. a. See Also: fixed_quad Fixed-order Gaussian quadrature. args=(). tol=1.48e-8. b).48e-08. optional Extra arguments to pass to function. Default is 10. tol. optional Whether to print the results. Release 0. Reference . If show is 1. then function is assumed to support vector arguments. show : bool.10.e whether it is a “vector” function). If vec_func is True (default is False).SciPy Reference Guide. Returns results : ﬂoat Result of the integration. simps. Default is False.0rc1 scipy. the triangular array of the intermediate results will be printed. a : ﬂoat Lower limit of integration. quad Adaptive quadrature using QUADPACK. optional Whether func handles arrays as arguments (i. divmax=10. rtol=1. romb. b. ode. rtol : ﬂoat. odeint 218 Chapter 4. trapz cumtrapz Cumulative integration for sampled data. Other Parameters args : tuple. tplquad. Returns the integral of function (a function of one variable) over the interval (a.integrate. optional The desired absolute and relative tolerances. vec_func=False) Romberg integration of a callable function or method.48e-08. Default is False. dblquad. Default is to pass no extra arguments. Parameters function : callable Function to be integrated. Each element of args will be passed as a single argument to func. vec_func : bool. Defaults are 1. optional Maximum order of extrapolation.romberg(function. show=False.

axis=-1) Integrate along the given axis using the composite trapezoidal rule.6.integrate) 219 . show=True) Romberg integration of <function vfunc at 0x101eceaa0> from [0.pi) * np.integrate. Default is 1. optional If x is None.2 Integrating functions.419184 0.6. >>> print 2*result. axis.421350 0. axis]) cumtrapz(y[.421356 0. 4.special import erf >>> gaussian = lambda x: 1/np.sqrt(np.421350 0.500000 0. optional Specify the axis.421350 0. dx : scalar. dx.421350 0. scipy. 1. >>> from scipy. 0. spacing given by dx is assumed. dx=1.421350 0. x. given ﬁxed samples trapz(y[. x=None. 1] Steps 1 2 4 8 16 32 StepSize 1.84270079295 4. then spacing between all y elements is dx.421551 0.421215 0.421317 0.421350 0.421350396475 after 33 function evaluations.10. x. axis. even]) romb(y[.420810 0. Cumulatively integrate y(x) using samples along the given axis and the composite trapezoidal rule. Parameters y : array_like Input array to integrate.421352 0.385872 0. x : array_like. Integrate y (x) along given axis. Integration and ODEs (scipy. x. show]) Integrate along the given axis using the composite trapezoidal rule.421350 0.exp(-x**2) >>> result = romberg(gaussian.trapz(y. dx.421350 0. optional If x is None.SciPy Reference Guide. dx.erf(1) 0. dx.0rc1 References [R1] Examples Integrate a gaussian from 0 to 1 and compare to the error function.250000 0.421350 The ﬁnal result is 0.421350 0. axis]) simps(y[.84270079295 0. Release 0.0.062500 0.125000 0.412631 0. Integrate y(x) using samples along the given axis and the composite Romberg integration using samples of a function.421350 0.000000 0.421368 0.031250 Results 0. Returns out : ﬂoat Deﬁnite integral as approximated by trapezoidal rule. axis : int.

0. Return value will be equal to combined area under the red lines. axis=-1) Cumulatively integrate y(x) using samples along the given axis and the composite trapezoidal rule.arange(6).2. x=[4. optional dx : int.SciPy Reference Guide.cumtrapz(y.3]) 4. optional Speciﬁes the axis to cumulate: • -1 –> X axis • 0 –> Z axis • 1 –> Y axis See Also: quad adaptive quadrature using QUADPACK romberg adaptive Romberg quadrature quadrature adaptive Gaussian quadrature 220 Chapter 4. axis=1) array([ 2. dx=1.trapz([1.trapz(a.. Reference . optional axis : int. 2].0rc1 See Also: sum.10.0 >>> a = np. [3. 4.5. Parameters y : array x : array.trapz([1. cumsum Notes Image [R3] illustrates trapezoidal rule – y-axis locations of points will be taken from y array. [R3] Examples >>> np.8]) 8.3]. 3) >>> a array([[0.0.2.2.6.3].0 >>> np. alternatively they can be provided with x array or with dx scalar. 1.trapz(a.integrate. References [R2]. If x is None. axis=0) array([ 1.5]) >>> np. dx=2) 8.]) scipy.0 >>> np. 3. Release 0. 5]]) >>> np. spacing given by dx is assumed.reshape(2.trapz([1. x=None.5. 8. by default x-axis distances between points will be 1. 2.

optional ‘avg’ [Average two results:1) use the ﬁrst N-2 intervals with] a trapezoidal rule on the last interval and 2) use the last N-2 intervals with a trapezoidal rule on the ﬁrst interval. Integration and ODEs (scipy.SciPy Reference Guide. Release 0. The parameter ‘even’ controls how this is handled. 4. optional Axis along which to integrate. then there are an odd number of intervals (N-1).6.0rc1 fixed_quad ﬁxed-order Gaussian quadrature dblquad double integrals tplquad triple integrals romb integrators for sampled data trapz integrators for sampled data cumtrapz cumulative integration for sampled data ode ODE integrators odeint ODE integrators scipy. ‘last’ [Use Simpson’s rule for the last N-2 intervals with a] trapezoidal rule on the ﬁrst interval.integrate. axis : int. N. the points at which y is sampled. x : array_like. optional Spacing of integration points along axis of y. Only used when x is None. Parameters y : array_like Array to be integrated. dx : int. optional If given. axis=-1.integrate) 221 . x=None.simps(y. but Simpson’s rule requires an even number of intervals. even=’avg’) Integrate y(x) using samples along the given axis and the composite Simpson’s rule. even : {‘avg’. Default is the last axis. ‘str’}.10. ‘ﬁrst’ [Use Simpson’s rule for the ﬁrst N-2 intervals with] a trapezoidal rule on the last interval. dx=1. If there are an even number of samples. If x is None. spacing of dx is assumed. Default is 1. ‘ﬁrst’.

See Also
quad : adaptive quadrature using QUADPACK
romberg : adaptive Romberg quadrature
quadrature : adaptive Gaussian quadrature
fixed_quad : fixed-order Gaussian quadrature
dblquad : double integrals
tplquad : triple integrals
romb : integrators for sampled data
trapz : integrators for sampled data
cumtrapz : cumulative integration for sampled data
ode : ODE integrators
odeint : ODE integrators

Notes
For an odd number of samples that are equally spaced, the result is exact if the function is a polynomial of order 3 or less. If the samples are not equally spaced, then the result is exact only if the function is a polynomial of order 2 or less.

scipy.integrate.romb(y, dx=1.0, axis=-1, show=False)
Romberg integration using samples of a function.

Parameters
y : array_like
    A vector of 2**k + 1 equally-spaced samples of a function.
dx : float, optional
    The sample spacing. Default is 1.
axis : int, optional
    The axis along which to integrate. Default is -1 (last axis).
show : bool, optional
    When y is a single 1-D array, then if this argument is True print the table showing Richardson extrapolation from the samples. Default is False.

Returns
ret : array_like
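The 2**k + 1 sample-count requirement above can be exercised with a small sketch (the integrand and k are our choices for illustration):

```python
import numpy as np
from scipy.integrate import romb

# romb requires 2**k + 1 equally spaced samples (here k = 4, i.e. 17 samples)
x = np.linspace(0.0, 1.0, 2**4 + 1)
result = romb(np.exp(x), dx=x[1] - x[0])   # exact value is e - 1
```

Richardson extrapolation over the nested trapezoidal estimates makes the result accurate to many digits even at this modest resolution.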

    The integrated result for each axis.

See Also
quad, romberg, quadrature, fixed_quad, dblquad, tplquad, simps, trapz, cumtrapz
scipy.special : for orthogonal polynomials (special), for Gaussian quadrature roots and weights for other weighting factors and regions.

4.6.3 Integrators of ODE systems

odeint(func, y0, t[, args, Dfun, col_deriv, ...])   Integrate a system of ordinary differential equations.
ode(f[, jac])                                       A generic interface class to numeric integrators.
complex_ode(f[, jac])                               A wrapper of ode for complex systems.

scipy.integrate.odeint(func, y0, t, args=(), Dfun=None, col_deriv=0, full_output=0, ml=None, mu=None, rtol=None, atol=None, tcrit=None, h0=0.0, hmax=0.0, hmin=0.0, ixpr=0, mxstep=0, mxhnil=0, mxordn=12, mxords=5, printmessg=0)
Integrate a system of ordinary differential equations.

Solve a system of ordinary differential equations using lsoda from the FORTRAN library odepack. Solves the initial value problem for stiff or non-stiff systems of first-order ODEs:

    dy/dt = func(y, t0, ...)

where y can be a vector.

Parameters
func : callable(y, t0, ...)
    Computes the derivative of y at t0.
y0 : array
    Initial condition on y (can be a vector).
t : array
    A sequence of time points for which to solve for y. The initial value point should be the first element of this sequence.
args : tuple
    Extra arguments to pass to function.
Dfun : callable(y, t0, ...)
    Gradient (Jacobian) of func.
col_deriv : boolean
    True if Dfun defines derivatives down columns (faster), otherwise Dfun should define derivatives across rows.
full_output : boolean
    Whether to return a dictionary of optional outputs as the second output.
printmessg : boolean

    Whether to print the convergence message.

Returns
y : array, shape (len(t), len(y0))
    Array containing the value of y for each desired time in t, with the initial value y0 in the first row.
infodict : dict, only returned if full_output == True
    Dictionary containing additional output information:

    'hu'     vector of step sizes successfully used for each time step
    'tcur'   vector with the value of t reached for each time step (will always be at least as large as the input times)
    'tolsf'  vector of tolerance scale factors, greater than 1.0, computed when a request for too much accuracy was detected
    'tsw'    value of t at the time of the last method switch (given for each time step)
    'nst'    cumulative number of time steps
    'nfe'    cumulative number of function evaluations for each time step
    'nje'    cumulative number of jacobian evaluations for each time step
    'nqu'    a vector of method orders for each successful step
    'imxer'  index of the component of largest magnitude in the weighted local error vector (e / ewt) on an error return, -1 otherwise
    'lenrw'  the length of the double work array required
    'leniw'  the length of the integer work array required
    'mused'  a vector of method indicators for each successful time step: 1: adams (nonstiff), 2: bdf (stiff)

Other Parameters
ml, mu : integer
    If either of these are not None or non-negative, then the Jacobian is assumed to be banded. These give the number of lower and upper non-zero diagonals in this banded matrix. For the banded case, Dfun should return a matrix whose columns contain the non-zero bands (starting with the lowest diagonal). Thus, the return matrix from Dfun should have shape len(y0) * (ml + mu + 1) when ml >= 0 or mu >= 0.
rtol, atol : float
    The input parameters rtol and atol determine the error control performed by the solver. The solver will control the vector, e, of estimated local errors in y, according to an inequality of the form max-norm of (e / ewt) <= 1, where ewt is a vector of positive error weights computed as ewt = rtol * abs(y) + atol. rtol and atol can be either vectors the same length as y or scalars. Defaults to 1.49012e-8.
tcrit : array
    Vector of critical points (e.g. singularities) where integration care should be taken.
h0 : float, (0: solver-determined)
    The step size to be attempted on the first step.
hmax : float, (0: solver-determined)
    The maximum absolute step size allowed.
hmin : float, (0: solver-determined)
    The minimum absolute step size allowed.
ixpr : boolean
    Whether to generate extra printing at method switches.
mxstep : integer, (0: solver-determined)
    Maximum number of (internally defined) steps allowed for each integration point in t.
mxhnil : integer, (0: solver-determined)
    Maximum number of messages printed.
mxordn : integer, (0: solver-determined)
    Maximum order to be allowed for the nonstiff (Adams) method.
mxords : integer, (0: solver-determined)
    Maximum order to be allowed for the stiff (BDF) method.

See Also
ode : a more object-oriented integrator based on VODE
quad : for finding the area under a curve

class scipy.integrate.ode(f, jac=None)
A generic interface class to numeric integrators.

Solve an equation system y'(t) = f(t, y) with (optional) jac = df/dy.

Parameters
f : callable f(t, y, *f_args)
    Rhs of the equation. t is a scalar, y.shape == (n,). f_args is set by calling set_f_params(*args).
jac : callable jac(t, y, *jac_args)
    Jacobian of the rhs, jac[i,j] = d f[i] / d y[j]. jac_args is set by calling set_jac_params(*args).

See Also
odeint : an integrator with a simpler interface based on lsoda from ODEPACK
quad : for finding the area under a curve

Notes
Available integrators are listed below. They can be selected using the set_integrator method.

"vode"
    Real-valued Variable-coefficient Ordinary Differential Equation solver, with fixed-leading-coefficient implementation. It provides an implicit Adams method (for non-stiff problems) and a method based on backward differentiation formulas (BDF) (for stiff problems).
y) with (optional) jac = df/dy. y. (0: solver-determined) Maximum number of (internally deﬁned) steps allowed for each integration point in t. See Also: ode a more object-oriented integrator based on VODE quad for ﬁnding the area under a curve class scipy. jac[i.0rc1 The minimum absolute step size allowed. mxordn : integer. (0: solver-determined) Maximum number of messages printed. mxhnil : integer. (0: solver-determined) Maximum order to be allowed for the nonstiff (Adams) method. Integration and ODEs (scipy. *f_args) Rhs of the equation.shape == (n. Release 0. mxstep : integer. with ﬁxed-leadingcoefﬁcient implementation.ode(f. y. y.j] = d f[i] / d y[j] jac_args is set by calling set_f_params(*args) See Also: odeint an integrator with a simpler interface based on lsoda from ODEPACK quad for ﬁnding the area under a curve Notes Available integrators are listed below. ixpr : boolean Whether to generate extra printing at method switches.integrate. They can be selected using the set_integrator method.SciPy Reference Guide.6. “vode” Real-valued Variable-coefﬁcient Ordinary Differential Equation solver. jac=None) A generic interface class to numeric integrators.integrate) 225 . 4. mxords : integer. It provides implicit Adams method (for non-stiff problems) and a method based on backward differentiation formulas (BDF) (for stiff problems).10. Parameters f : callable f(t. *jac_args) Jacobian of the rhs. Solve an equation system y (t) = f (t.). f_args is set by calling jac : callable jac(t. (0: solver-determined) Maximum order to be allowed for the stiff (BDF) method. set_f_params(*args) t is a scalar.

j] != 0 for i-lband <= j <= i+rband. This integrator accepts the same parameters in set_integrator as the “vode” solver. order <= 12 for Adams. This integrator accepts the following parameters in set_integrator method of the ode class: •atol : ﬂoat or sequence absolute tolerance for solution •rtol : ﬂoat or sequence relative tolerance for solution •lband : None or int •rband : None or int Jacobian band width. •order : int Maximum order used by the integrator. jac[i. and this fact is critical in the way ZVODE solves the dense or banded linear systems that arise in the stiff case. Source: http://www. that is. Adams (non-stiff) or BDF (stiff) •with_jacobian : bool Whether to use the jacobian •nsteps : int Maximum number of (internally deﬁned) steps allowed during one call to the solver. Setting these requires your jac routine to return the jacobian in packed format.10.org/ode/zvode. j] = jac[i. •method: ‘adams’ or ‘bdf’ Which solver to use. •ﬁrst_step : ﬂoat •min_step : ﬂoat •max_step : ﬂoat Limits for the step sizes used by the integrator. For a complex stiff ODE system in which f is not analytic. It provides implicit Adams method (for non-stiff problems) and a method based on backward differentiation formulas (BDF) (for stiff problems).0rc1 Source: http://www. “dopri5” This is an explicit runge-kutta method of order (4)5 due to Dormand & Prince (with stepsize control and dense output). Release 0. with ﬁxed-leadingcoefﬁcient implementation.SciPy Reference Guide. “zvode” Complex-valued Variable-coefﬁcient Ordinary Differential Equation solver. and for this problem one should instead use DVODE on the equivalent real system (in the real and imaginary parts of y).f Warning: This integrator is not re-entrant.netlib. You cannot have two ode instances using the “zvode” integrator at the same time.netlib. You cannot have two ode instances using the “vode” integrator at the same time. Reference . 
Analyticity means that the partial derivative df(i)/dy(j) is a unique complex number. Note: When using ZVODE for a stiff system.j]. Authors: 226 Chapter 4.org/ode/vode. ZVODE is likely to have convergence failures. jac_packed[i-j+lband.f Warning: This integrator is not re-entrant. it should only be used for the case in which the function f is analytic. <= 5 for BDF. when each f(i) is an analytic function of each y(j).

successful() and r.hairer@math. Options and references the same as “dopri5”. method=’bdf’.integrate) .integrate(r. [0.10.wanner@math.unige. with_jacobian=True) r.t. de Mathematiques CH-1211 Geneve 24.ch.y Attributes t y ﬂoat ndarray Current time Current variable values 227 4. Hairer and G.ch This code is described in [HNW93]. Wanner Universite de Geneve.0). t0 = [1. arg1): return [1j*arg1*y[0] + y[1]. Switzerland e-mail: ernst. 0 def f(t. -arg1*y[1]**2] def jac(t.0].3) due to Dormand & Prince (with stepsize control and dense output).t < t1: r. -arg1*2*y[1]]] The integration: >>> >>> >>> >>> >>> >>> >>> r = ode(f.0j. r. Dept.SciPy Reference Guide. Release 0. t0).0) t1 = 10 dt = 1 while r. y.unige.set_f_params(2.t+dt) print r. “dop853” This is an explicit runge-kutta method of order 8(5. •ﬁrst_step : ﬂoat •max_step : ﬂoat •safety : ﬂoat Safety factor on new step selection (default 0. Integration and ODEs (scipy. 2.6.set_initial_value(y0.set_jac_params(2. This integrator accepts the following parameters in set_integrator() method of the ode class: •atol : ﬂoat or sequence absolute tolerance for solution •rtol : ﬂoat or sequence relative tolerance for solution •nsteps : int Maximum number of (internally deﬁned) steps allowed during one call to the solver.9) •ifactor : ﬂoat •dfactor : ﬂoat Maximum factor to increase/decrease step size by in one step •beta : ﬂoat Beta parameter for stabilised step size control. References [HNW93] Examples A problem to integrate and the corresponding jacobian: >>> >>> >>> >>> >>> >>> >>> >>> from scipy. jac). gerhard. 1].integrate import ode y0. arg1): return [[1j*arg1.0rc1 E. y.set_integrator(’zvode’.

Methods
integrate, set_f_params, set_initial_value, set_integrator, set_jac_params, successful

class scipy.integrate.complex_ode(f, jac=None)
A wrapper of ode for complex systems.

This works similarly to ode, but re-maps a complex-valued equation system to a real-valued one before using the integrators.

Parameters
f : callable f(t, y, *f_args)
    Rhs of the equation. t is a scalar, y.shape == (n,). f_args is set by calling set_f_params(*args).
jac : callable jac(t, y, *jac_args)
    Jacobian of the rhs, jac[i,j] = d f[i] / d y[j]. jac_args is set by calling set_jac_params(*args).

Examples
For usage examples, see ode.

Attributes
t : float      Current time
y : ndarray    Current variable values

Methods
integrate, set_f_params, set_initial_value, set_integrator, set_jac_params, successful

4.7 Interpolation (scipy.interpolate)

Sub-package for objects used in interpolation.

As listed below, this sub-package contains spline functions and classes, one-dimensional and multi-dimensional (univariate and multivariate) interpolation classes, Lagrange and Taylor polynomial interpolators, and wrappers for FITPACK and DFITPACK functions.

4.7.1 Univariate interpolation

interp1d(x, y[, kind, axis, copy, ...])                      Interpolate a 1-D function.
BarycentricInterpolator(xi[, yi])                            The interpolating polynomial for a set of points
KroghInterpolator(xi, yi)                                    The interpolating polynomial for a set of points
PiecewisePolynomial(xi, yi[, orders, direction])             Piecewise polynomial curve specified by points and derivatives
barycentric_interpolate(xi, yi, x)                           Convenience function for polynomial interpolation
krogh_interpolate(xi, yi, x[, der])                          Convenience function for polynomial interpolation.
piecewise_polynomial_interpolate(xi, yi, x[, orders, der])   Convenience function for piecewise polynomial interpolation

class scipy.interpolate.interp1d(x, y, kind='linear', axis=-1, copy=True, bounds_error=True, fill_value=np.nan)
Interpolate a 1-D function.

x and y are arrays of values used to approximate some function f: y = f(x). This class returns a function whose call method uses interpolation to find the value of new points.

Parameters
x : array_like
    A 1-D array of monotonically increasing real values.
y : array_like
    A N-D array of real values. The length of y along the interpolation axis must be equal to the length of x.
kind : str or int, optional
    Specifies the kind of interpolation as a string ('linear', 'nearest', 'zero', 'slinear', 'quadratic', 'cubic') or as an integer specifying the order of the spline interpolator to use. Default is 'linear'.
axis : int, optional
    Specifies the axis of y along which to interpolate. Interpolation defaults to the last axis of y.
copy : bool, optional
    If True, the class makes internal copies of x and y. If False, references to x and y are used. The default is to copy.
bounds_error : bool, optional
    If True, an error is thrown any time interpolation is attempted on a value outside of the range of x (where extrapolation is necessary). If False, out of bounds values are assigned fill_value. By default, an error is raised.
fill_value : float, optional
    If provided, then this value will be used to fill in for requested points outside of the data range. If not provided, then the default is NaN.

See Also
UnivariateSpline : A more recent wrapper of the FITPACK routines.
splrep, splev, interp2d
interpolate. cos(i*pi/n)) are a good choice . Based on [R4].SciPy Reference Guide. unless the x coordinates are chosen very carefully . When an xi occurs two or more times in a row. Allows evaluation of the polynomial and all its derivatives. This class uses a “barycentric interpolation” method that treats the problem as a special case of rational function interpolation. N by R Known y-coordinates.0) f = sp. interpreted as vectors of length R. yi) The interpolating polynomial for a set of points Constructs a polynomial that passes through a given set of points. length N Known x-coordinates yi : array-like. ’o’. Be aware that the algorithms implemented here are not necessarily the most numerically stable known. Moreover.arange(0. even with well-chosen x values. Reference . although they can be obtained by evaluating all the derivatives. ynew. In general.polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon.KroghInterpolator(xi. cos(i*pi/n)) are a good choice . Allows evaluation of the polynomial. and updating by adding more x values.1) >>> ynew = f(xnew) # use interpolation function returned by ‘interp1d‘ >>> plt. numerically. For reasons of numerical stability.BarycentricInterpolator(xi. 10) y = np. xnew. For reasons of numerical stability. This algorithm is quite stable. y. y) >>> xnew = np.plot(x.g.g. Release 0.interpolate.exp(-x/3. 230 Chapter 4. 0. yi=None) The interpolating polynomial for a set of points Constructs a polynomial that passes through a given set of points.Chebyshev zeros (e.polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon. optionally with speciﬁed derivatives at those points.9. the corresponding yi’s represent derivative values. or scalars if R=1. efﬁcient changing of the y values to be interpolated.10. degrees higher than about thirty cause problems with numerical instability in this code. ’-’) Methods __call__ class scipy. 
Examples
>>> import numpy as np
>>> import matplotlib.pyplot as plt
>>> import scipy.interpolate
>>> x = np.arange(0, 10)
>>> y = np.exp(-x/3.0)
>>> f = scipy.interpolate.interp1d(x, y)
>>> xnew = np.arange(0, 9, 0.1)
>>> ynew = f(xnew)   # use interpolation function returned by `interp1d`
>>> plt.plot(x, y, 'o', xnew, ynew, '-')

Methods
__call__

class scipy.interpolate.BarycentricInterpolator(xi, yi=None)
The interpolating polynomial for a set of points

Constructs a polynomial that passes through a given set of points. Allows evaluation of the polynomial, efficient changing of the y values to be interpolated, and updating by adding more x values. For reasons of numerical stability, this function does not compute the coefficients of the polynomial.

This class uses a "barycentric interpolation" method that treats the problem as a special case of rational function interpolation. This algorithm is quite stable, numerically, but even in a world of exact computation, unless the x coordinates are chosen very carefully -- Chebyshev zeros (e.g. cos(i*pi/n)) are a good choice -- polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon. Based on [R4].

Parameters
xi : array-like, length N
    Known x-coordinates
yi : array-like, N by R
    Known y-coordinates, interpreted as vectors of length R, or scalars if R=1.

Notes
Construction of the interpolation weights is a relatively slow process. If you want to call this many times with the same xi (but possibly varying yi or x) you should use this class rather than the convenience function.

References
[R4]

Methods
__call__, add_xi, set_yi

class scipy.interpolate.KroghInterpolator(xi, yi)
The interpolating polynomial for a set of points

Constructs a polynomial that passes through a given set of points, optionally with specified derivatives at those points. When an xi occurs two or more times in a row, the corresponding yi's represent derivative values. Allows evaluation of the polynomial and all its derivatives. For reasons of numerical stability, this function does not compute the coefficients of the polynomial, although they can be obtained by evaluating all the derivatives.

Be aware that the algorithms implemented here are not necessarily the most numerically stable known. Moreover, even in a world of exact computation, unless the x coordinates are chosen very carefully -- Chebyshev zeros (e.g. cos(i*pi/n)) are a good choice -- polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon. In general, even with well-chosen x values, degrees higher than about thirty cause problems with numerical instability in this code.
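A minimal sketch of BarycentricInterpolator follows. Since four points determine a unique cubic, the degree-3 interpolant must reproduce a cubic exactly (up to rounding); the particular polynomial and nodes are our choices for illustration:

```python
import numpy as np
from scipy.interpolate import BarycentricInterpolator

# Four points determine a unique cubic, so the degree-3 interpolant
# reproduces this cubic exactly (up to rounding error).
xi = np.array([0.0, 1.0, 2.0, 3.0])
yi = xi**3 - 2.0 * xi
p = BarycentricInterpolator(xi, yi)
val = float(p(1.5))   # exact value: 1.5**3 - 2*1.5 = 0.375
```

Constructing the object once and evaluating it repeatedly amortizes the weight computation mentioned in the Notes.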

Parameters
xi : array_like, length N
    Known x-coordinates
yi : array_like, N by R
    Known y-coordinates, interpreted as vectors of length R, or scalars if R=1. When an xi occurs two or more times in a row, the corresponding yi's represent derivative values.

Methods
__call__, derivative, derivatives

class scipy.interpolate.PiecewisePolynomial(xi, yi, orders=None, direction=None)
Piecewise polynomial curve specified by points and derivatives

This class represents a curve that is a piecewise polynomial. It passes through a list of points and has specified derivatives at each point. The degree of the polynomial may vary from segment to segment, as may the number of derivatives available. The degree should not exceed about thirty. Appending points to the end of the curve is efficient.

Methods
__call__, append, derivative, derivatives, extend

scipy.interpolate.barycentric_interpolate(xi, yi, x)
Convenience function for polynomial interpolation

Constructs a polynomial that passes through a given set of points, then evaluates the polynomial. For reasons of numerical stability, this function does not compute the coefficients of the polynomial.

This function uses a "barycentric interpolation" method that treats the problem as a special case of rational function interpolation. This algorithm is quite stable, numerically, but even in a world of exact computation, unless the x coordinates are chosen very carefully -- Chebyshev zeros (e.g. cos(i*pi/n)) are a good choice -- polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon. Based on Berrut and Trefethen 2004, "Barycentric Lagrange Interpolation".

Parameters
xi : array_like of length N
    The x coordinates of the points the polynomial should pass through
yi : array_like, N by R
    The y coordinates of the points the polynomial should pass through; if R>1 the polynomial is vector-valued.
x : scalar or array_like of length M
    Points at which to evaluate the interpolant.

Returns
y : scalar or array_like of length R or length M or M by R
    The shape of y depends on the shape of x and whether the interpolator is vector-valued or scalar-valued.

Notes
Construction of the interpolation weights is a relatively slow process. If you want to call this many times with the same xi (but possibly varying yi or x) you should use the class BarycentricInterpolator. This is what this function uses internally.

scipy.interpolate.krogh_interpolate(xi, yi, x, der=0)
Convenience function for polynomial interpolation.

Constructs a polynomial that passes through a given set of points, optionally with specified derivatives at those points. Evaluates the polynomial or some of its derivatives. For reasons of numerical stability, this function does not compute the coefficients of the polynomial, although they can be obtained by evaluating all the derivatives.

Be aware that the algorithms implemented here are not necessarily the most numerically stable known. Moreover, even in a world of exact computation, unless the x coordinates are chosen very carefully -- Chebyshev zeros (e.g. cos(i*pi/n)) are a good choice -- polynomial interpolation itself is a very ill-conditioned process due to the Runge phenomenon. In general, even with well-chosen x values, degrees higher than about thirty cause problems with numerical instability in this code.

Based on Krogh 1970, "Efficient Algorithms for Polynomial Interpolation and Numerical Differentiation". The polynomial passes through all the pairs (xi, yi). One may additionally specify a number of derivatives at each point xi; this is done by repeating the value xi and specifying the derivatives as successive yi values.

Parameters
xi : array_like, length N
    Known x-coordinates
yi : array_like, N by R
    Known y-coordinates, interpreted as vectors of length R, or scalars if R=1.
x : scalar or array_like of length N
    Point or points at which to evaluate the derivatives
der : integer or list
    How many derivatives to extract; None for all potentially nonzero derivatives (that is a number equal to the number of points), or a list of derivatives to extract. This number includes the function value as 0th derivative.

Returns
d : ndarray
    If the interpolator's values are R-dimensional then the returned array will be the number of derivatives by N by R. If x is a scalar, the middle dimension will be dropped; if the yi are scalars then the last dimension will be dropped.

Notes
Construction of the interpolating polynomial is a relatively expensive process. If you want to evaluate it repeatedly consider using the class KroghInterpolator (which is what this function uses).

scipy.interpolate.piecewise_polynomial_interpolate(xi, yi, x, orders=None, der=0)
Convenience function for piecewise polynomial interpolation

Parameters
xi : array_like
    A sorted list of x-coordinates, of length N.
yi : list of lists
    yi[i] is the list of derivatives known at xi[i]. Of length N.
x : scalar or array_like
    Of length M.
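The derivative-pinning convention described above (repeating an xi and supplying derivatives as successive yi values) can be sketched with KroghInterpolator; the particular Hermite data below is our choice for illustration:

```python
import numpy as np
from scipy.interpolate import KroghInterpolator

# Repeating x = 0 pins the first derivative there: value 0 and slope 1
# at x = 0, plus value 2 at x = 1.  The unique such quadratic is
# p(x) = x + x**2.
xi = np.array([0.0, 0.0, 1.0])
yi = np.array([0.0, 1.0, 2.0])
p = KroghInterpolator(xi, yi)
mid = float(p(0.5))                   # p(0.5) = 0.5 + 0.25 = 0.75
slope0 = float(p.derivative(0.0, 1))  # recovers the prescribed slope 1
```

The same data could be passed to the krogh_interpolate convenience function, but the class form avoids rebuilding the interpolant on each call.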

orders : int or list of ints
    A list of polynomial orders, or a single universal order.
der : int
    Which single derivative to extract.

Returns
y : scalar or array_like
    The result, of length R or length M or M by R.

Notes
If orders is None, then the degree of the polynomial segment is exactly the degree required to match all i available derivatives at both endpoints. If orders[i] is not None, then some derivatives will be ignored. The code will try to use an equal number of derivatives from each end; if the total number of derivatives needed is odd, it will prefer the rightmost endpoint. If not enough derivatives are available, an exception is raised.

Construction of these piecewise polynomials can be an expensive process; if you repeatedly evaluate the same polynomial, consider using the class PiecewisePolynomial (which is what this function does).

4.7.2 Multivariate interpolation

Unstructured data:

griddata(points, values, xi[, method, ...])          Interpolate unstructured N-dimensional data.
LinearNDInterpolator(points, values)                 Piecewise linear interpolant in N dimensions.
NearestNDInterpolator(points, values)                Nearest-neighbour interpolation in N dimensions.
CloughTocher2DInterpolator(points, values[, tol])    Piecewise cubic, C1 smooth, curvature-minimizing interpolant in 2D.
Rbf(*args)                                           A class for radial basis function approximation/interpolation of n-dimensional scattered data.
interp2d(x, y, z[, kind, copy, ...])                 Interpolate over a 2-D grid.

scipy.interpolate.griddata(points, values, xi, method='linear', fill_value=nan)
Interpolate unstructured N-dimensional data. New in version 0.9.

Parameters
points : ndarray of floats, shape (npoints, ndims)
    Data point coordinates. Can either be a ndarray of size (npoints, ndim), or a tuple of ndim arrays.
values : ndarray of float or complex, shape (npoints, ...)
    Data values.
xi : ndarray of float, shape (..., ndim)
    Points where to interpolate data at.
method : {'linear', 'nearest', 'cubic'}, optional
    Method of interpolation. One of

    * nearest: return the value at the data point closest to the point of interpolation. See NearestNDInterpolator for more details.
values) NearestNDInterpolator(points. Can either be a ndarray of size (npoints.7. Piecewise cubic. or orders[i] is None. xi. it will prefer the rightmost endpoint. Piecewise linear interpolant in N dimensions. or a tuple of ndim arrays. method : {‘linear’.. tol]) 2D.

subplot(222) plt. 6) plt. origin=’lower’) plt. See CloughTocher2DInterpolator for more details.imshow(grid_z1.0]. extent=(0.10.imshow(func(grid_x.mgrid[0:1:100j. ’k.subplot(223) plt. grid_y).imshow(grid_z2. origin=’lower’) plt.0]. (grid_x. values. 0:1:200j] but we only know its values at 1000 data points: >>> points = np.pi*x) * np.set_size_inches(6. (grid_x. optional Value used to ﬁll in for requested points outside of the convex hull of the input points. See LinearNDInterpolator for more details.show() 234 Chapter 4.1).title(’Original’) plt. method=’nearest’) grid_z1 = griddata(points.SciPy Reference Guide. grid_y). and interpolate linearly on each simplex.0.0.imshow(grid_z0.plot(points[:. grid_y).rand(1000. grid_y = np. (grid_x. extent=(0.1).1.title(’Nearest’) plt. Examples Suppose we want to interpolate the 2-D function >>> def func(x. method=’cubic’) One can see that the exact result is reproduced by all of the methods to some degree.interpolate import griddata grid_z0 = griddata(points.’.1.1). points[:. y): >>> return x*(1-x)*np. values. Reference . extent=(0. 1]x[0. origin=’lower’) plt. Release 0.gcf(). This option has no effect for the ‘nearest’ method. then the default is nan.T. values. ms=1) plt.subplot(221) plt. ﬁll_value : ﬂoat.0.title(’Cubic’) plt. 1] >>> grid_x. continuously differentiable (C1). • cubic (1-D): return the value detemined from a cubic spline.random. origin=’lower’) plt.T.T.pyplot as plt plt.1. method=’linear’) grid_z2 = griddata(points.1]. points[:.0. but for this smooth function the piecewise cubic interpolant gives the best results: >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> import matplotlib.0rc1 • linear: tesselate the input point set to n-dimensional simplices.sin(4*np.subplot(224) plt.1.1). and approximately curvature-minimizing polynomial surface.1]) This can be done with griddata – below we try out all of the interpolation methods: >>> >>> >>> >>> from scipy.title(’Linear’) plt.pi*y**2)**2 on a grid in [0. 
2) >>> values = func(points[:. If not provided. grid_y).T. extent=(0. • cubic (2-D): return the value determined from a piecewise cubic.cos(4*np.

[Figure: the original function and its 'Nearest', 'Linear', and 'Cubic' griddata reconstructions, each plotted over [0, 1] x [0, 1].]

class scipy.interpolate.LinearNDInterpolator(points, values)
Piecewise linear interpolant in N dimensions. New in version 0.9.

Parameters
points : ndarray of floats, shape (npoints, ndims)
    Data point coordinates.
values : ndarray of float or complex, shape (npoints, ...)
    Data values.
fill_value : float, optional
    Value used to fill in for requested points outside of the convex hull of the input points. If not provided, then the default is nan.

Notes
The interpolant is constructed by triangulating the input data with Qhull [Qhull], and on each triangle performing linear barycentric interpolation.

References
[Qhull]

Methods
__call__

class scipy.interpolate.NearestNDInterpolator(points, values)
Nearest-neighbour interpolation in N dimensions. New in version 0.9.

Parameters
points : ndarray of floats, shape (npoints, ndims)
    Data point coordinates.
values : ndarray of float or complex, shape (npoints, ...)
    Data values.

Notes
Uses scipy.spatial.cKDTree

Methods
__call__

class scipy.interpolate.CloughTocher2DInterpolator(points, values, tol=1e-6)
Piecewise cubic, C1 smooth, curvature-minimizing interpolant in 2D. New in version 0.9.

Parameters
points : ndarray of floats, shape (npoints, ndims)
    Data point coordinates.
values : ndarray of float or complex, shape (npoints, ...)
    Data values.
fill_value : float, optional
    Value used to fill in for requested points outside of the convex hull of the input points. If not provided, then the default is nan.
tol : float, optional
    Absolute/relative tolerance for gradient estimation.
maxiter : int, optional
    Maximum number of iterations in gradient estimation.

Notes
The interpolant is constructed by triangulating the input data with Qhull [Qhull], and constructing a piecewise cubic interpolating Bezier polynomial on each triangle, using a Clough-Tocher scheme [CT]. The interpolant is guaranteed to be continuously differentiable. The gradients of the interpolant are chosen so that the curvature of the interpolating surface is approximatively minimized. The gradients necessary for this are estimated using the global algorithm described in [Nielson83], [Renka84].

References
[Qhull], [CT], [Nielson83], [Renka84]
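The linear barycentric scheme described for LinearNDInterpolator reproduces any affine function exactly on each simplex, which gives a simple sanity check; the plane and query point below are our choices for illustration:

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Sample the plane z = 2x + 3y at the corners of the unit square.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
vals = 2.0 * pts[:, 0] + 3.0 * pts[:, 1]

# Triangulation + linear barycentric interpolation is exact for a plane.
interp = LinearNDInterpolator(pts, vals)
z = interp(np.array([[0.25, 0.5]]))[0]   # exact plane value: 2*0.25 + 3*0.5 = 2.0
```

Points outside the convex hull of pts would return fill_value (nan by default).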

Methods
__call__

class scipy.interpolate.Rbf(*args)
A class for radial basis function approximation/interpolation of n-dimensional scattered data.

Parameters
*args : arrays
    x, y, z, ..., d, where x, y, z, ... are the coordinates of the nodes and d is the array of values at the nodes.
function : str or callable, optional
    The radial basis function, based on the radius, r, given by the norm (default is Euclidean distance); the default is 'multiquadric':

    'multiquadric': sqrt((r/self.epsilon)**2 + 1)
    'inverse': 1.0/sqrt((r/self.epsilon)**2 + 1)
    'gaussian': exp(-(r/self.epsilon)**2)
    'linear': r
    'cubic': r**3
    'quintic': r**5
    'thin_plate': r**2 * log(r)

    If callable, then it must take 2 arguments (self, r). The epsilon parameter will be available as self.epsilon. Other keyword arguments passed in will be available as well.
epsilon : float, optional
    Adjustable constant for gaussian or multiquadrics functions -- defaults to approximate average distance between nodes (which is a good start).
smooth : float, optional
    Values greater than zero increase the smoothness of the approximation. 0 is for interpolation (default); the function will always go through the nodal points in this case.
norm : callable, optional
    A function that returns the 'distance' between two points, with inputs as arrays of positions (x, y, z, ...), and an output as an array of distance. E.g., the default:

    def euclidean_norm(x1, x2):
        return sqrt(((x1 - x2)**2).sum(axis=0))

    which is called with x1 = x1[ndims, newaxis, :] and x2 = x2[ndims, :, newaxis] such that the result is a matrix of the distances from each point in x1 to each point in x2.

Examples
>>> rbfi = Rbf(x, y, z, d)   # radial basis function interpolator instance
>>> di = rbfi(xi, yi, zi)    # interpolated values

Methods
__call__
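The schematic example above can be made concrete with a self-contained 1-D sketch; the node layout and test function are our choices, and the default multiquadric basis and smooth=0 are left in place:

```python
import numpy as np
from scipy.interpolate import Rbf

# With the default smooth=0 the interpolant passes through the nodes.
x = np.linspace(0.0, 10.0, 9)
d = np.sin(x)
rbfi = Rbf(x, d)        # default 'multiquadric' basis function
at_nodes = rbfi(x)      # evaluating back at the nodes reproduces d
```

Setting smooth > 0 would relax the exact-interpolation property in favor of a smoother approximant.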

class scipy.interpolate.interp2d(x, y, z, kind='linear', copy=True, bounds_error=False, fill_value=nan)
Interpolate over a 2-D grid.

x, y and z are arrays of values used to approximate some function f: z = f(x, y). This class returns a function whose call method uses spline interpolation to find the value of new points.

Parameters
x, y : 1-D ndarrays
    Arrays defining the data point coordinates. If the points lie on a regular grid, x can specify the column coordinates and y the row coordinates, for example:

    >>> x = [0, 1, 2];  y = [0, 3];  z = [[1, 2, 3], [4, 5, 6]]

    Otherwise, x and y must specify the full coordinates for each point, for example:

    >>> x = [0, 1, 2, 0, 1, 2];  y = [0, 0, 0, 3, 3, 3];  z = [1, 2, 3, 4, 5, 6]

    If x and y are multi-dimensional, they are flattened before use.
z : 1-D ndarray
    The values of the function to interpolate at the data points. If z is a multi-dimensional array, it is flattened before use.
kind : {'linear', 'cubic', 'quintic'}, optional
    The kind of spline interpolation to use. Default is 'linear'.
copy : bool, optional
    If True, then data is copied, otherwise only a reference is held.
bounds_error : bool, optional
    If True, when interpolated values are requested outside of the domain of the input data, an error is raised. If False, then fill_value is used.
fill_value : number, optional
    If provided, the value to use for points outside of the interpolation domain. Defaults to NaN.

See Also
bisplrep, bisplev
BivariateSpline : a more recent wrapper of the FITPACK routines
interp1d

Notes
The minimum number of data points required along the interpolation axis is (k+1)**2, with k=1 for linear, k=3 for cubic and k=5 for quintic interpolation. The interpolator is constructed by bisplrep, with a smoothing factor of 0. If more control over smoothing is needed, bisplrep should be used directly.

Examples

Construct a 2-D grid and interpolate on it:

>>> x = np.arange(-5.01, 5.01, 0.25)
>>> y = np.arange(-5.01, 5.01, 0.25)
>>> xx, yy = np.meshgrid(x, y)
>>> z = np.sin(xx**2 + yy**2)
>>> f = sp.interpolate.interp2d(x, y, z, kind='cubic')

Now use the obtained interpolation function and plot the result:

>>> xnew = np.arange(-5.01, 5.01, 1e-2)
>>> ynew = np.arange(-5.01, 5.01, 1e-2)
>>> znew = f(xnew, ynew)
>>> plt.plot(x, z[:, 0], 'ro-', xnew, znew[:, 0], 'b-')

Methods

    __call__

4.7.3 1-D Splines

UnivariateSpline(x, y[, w, bbox, k, s])            One-dimensional smoothing spline fit to a given set of data points.
InterpolatedUnivariateSpline(x, y[, w, bbox, k])   One-dimensional interpolating spline for a given set of data points.
LSQUnivariateSpline(x, y, t[, w, bbox, k])         One-dimensional spline with explicit internal knots.

See Also
    scipy.ndimage.map_coordinates

class scipy.interpolate.UnivariateSpline(x, y, w=None, bbox=[None, None], k=3, s=None)
One-dimensional smoothing spline fit to a given set of data points.

Fits a spline y=s(x) of degree k to the provided x, y data. s specifies the number of knots by specifying a smoothing condition.

Parameters
    x : array_like
        1-D array of independent input data. Must be increasing.
    y : array_like
        1-D array of dependent input data, of the same length as x.
    w : array_like, optional
        Weights for spline fitting. Must be positive. If None (default), weights are all equal.
    bbox : array_like, optional
        2-sequence specifying the boundary of the approximation interval. If None (default), bbox=[x[0], x[-1]].

    k : int, optional
        Degree of the smoothing spline. Must be <= 5.
    s : float or None, optional
        Positive smoothing factor used to choose the number of knots. Number of knots will be increased until the smoothing condition is satisfied:

            sum((w[i]*(y[i]-s(x[i])))**2, axis=0) <= s

        If None (default), s=len(w) which should be a good value if 1/w[i] is an estimate of the standard deviation of y[i]. If 0, the spline will interpolate through all data points.

See Also
    InterpolatedUnivariateSpline : Subclass with smoothing forced to 0
    LSQUnivariateSpline : Subclass in which knots are user-selected instead of being set by smoothing condition
    splrep : An older, non object-oriented wrapping of FITPACK
    splev, sproot, splint, spalde
    BivariateSpline : A similar class for two-dimensional spline interpolation

Notes
The number of data points must be larger than the spline degree k.

Examples

>>> from numpy import linspace, exp
>>> from numpy.random import randn
>>> from scipy.interpolate import UnivariateSpline
>>> x = linspace(-3, 3, 100)
>>> y = exp(-x**2) + randn(100)/10
>>> s = UnivariateSpline(x, y, s=1)
>>> xs = linspace(-3, 3, 1000)
>>> ys = s(xs)

xs, ys is now a smoothed, super-sampled version of the noisy gaussian x, y.

Methods

    __call__
    derivatives
    get_coeffs
    get_knots
    get_residual
    integral
    roots
    set_smoothing_factor

class scipy.interpolate.InterpolatedUnivariateSpline(x, y, w=None, bbox=[None, None], k=3)
One-dimensional interpolating spline for a given set of data points.

Fits a spline y=s(x) of degree k to the provided x, y data. The spline function passes through all provided points. Equivalent to UnivariateSpline with s=0.

Parameters
    x : array_like
        Input dimension of data points - must be increasing.
    y : array_like
        Input dimension of data points.
    w : array_like, optional
        Weights for spline fitting. Must be positive. If None (default), weights are all equal.
    bbox : array_like, optional
        2-sequence specifying the boundary of the approximation interval. If None (default), bbox=[x[0], x[-1]].
    k : int, optional
        Degree of the smoothing spline. Must be <= 5.

See Also
    UnivariateSpline : Superclass - allows knots to be selected by a smoothing condition
    LSQUnivariateSpline : spline for which knots are user-selected
    splrep : An older, non object-oriented wrapping of FITPACK
    splev, sproot, splint, spalde
    BivariateSpline : A similar class for two-dimensional spline interpolation

Notes
The number of data points must be larger than the spline degree k.

Examples

>>> from numpy import linspace, exp
>>> from numpy.random import randn
>>> from scipy.interpolate import InterpolatedUnivariateSpline
>>> x = linspace(-3, 3, 100)
>>> y = exp(-x**2) + randn(100)/10
>>> s = InterpolatedUnivariateSpline(x, y)
>>> xs = linspace(-3, 3, 1000)
>>> ys = s(xs)

xs, ys is now a super-sampled version of the noisy gaussian x, y, passing through all given points.

Methods

    __call__
    derivatives
    get_coeffs
    get_knots
    get_residual
    integral
    roots
    set_smoothing_factor

class scipy.interpolate.LSQUnivariateSpline(x, y, t, w=None, bbox=[None, None], k=3)
One-dimensional spline with explicit internal knots.

Fits a spline y=s(x) of degree k to the provided x, y data. t specifies the internal knots of the spline.

Parameters
    x : array_like
        Input dimension of data points - must be increasing.
    y : array_like
        Input dimension of data points.
    t : array_like
        Interior knots of the spline. Must be in ascending order and satisfy bbox[0] < t[0] < ... < t[-1] < bbox[-1].
    w : array_like, optional
        Weights for spline fitting. Must be positive. If None (default), weights are all equal.
    bbox : array_like, optional
        2-sequence specifying the boundary of the approximation interval. If None (default), bbox=[x[0], x[-1]].
    k : int, optional
        Degree of the smoothing spline. Must be <= 5.

Raises
    ValueError
        If the interior knots do not satisfy the Schoenberg-Whitney conditions.

See Also
    UnivariateSpline : Superclass - knots are specified by setting a smoothing condition
    InterpolatedUnivariateSpline : spline passing through all points
    splrep : An older, non object-oriented wrapping of FITPACK
    splev, sproot, splint, spalde
    BivariateSpline : A similar class for two-dimensional spline interpolation

Notes
The number of data points must be larger than the spline degree k.

Examples

>>> from numpy import linspace, exp
>>> from numpy.random import randn
>>> from scipy.interpolate import LSQUnivariateSpline
>>> x = linspace(-3, 3, 100)
>>> y = exp(-x**2) + randn(100)/10
>>> t = [-1, 0, 1]
>>> s = LSQUnivariateSpline(x, y, t)
>>> xs = linspace(-3, 3, 1000)
>>> ys = s(xs)

xs, ys is now a smoothed, super-sampled version of the noisy gaussian x, y with knots [-3, -1, 0, 1, 3].

Methods

    __call__
    derivatives
    get_coeffs
    get_knots
    get_residual
    integral
    roots
    set_smoothing_factor

The above univariate spline classes have the following methods:

UnivariateSpline.__call__(x[, nu])         Evaluate spline (or its nu-th derivative) at positions x.
UnivariateSpline.derivatives(x)            Return all derivatives of the spline at the point x.
UnivariateSpline.integral(a, b)            Return definite integral of the spline between two given points.
UnivariateSpline.roots()                   Return the zeros of the spline.
UnivariateSpline.get_coeffs()              Return spline coefficients.
UnivariateSpline.get_knots()               Return the positions of (boundary and interior) knots of the spline.
UnivariateSpline.get_residual()            Return weighted sum of squared residuals of the spline approximation.
UnivariateSpline.set_smoothing_factor(s)   Continue spline computation with the given smoothing factor.

Low-level interface to FITPACK functions:

splrep(x, y[, w, xb, xe, k, task, s, t, ...])    Find the B-spline representation of 1-D curve.
splprep(x[, w, u, ub, ue, k, task, s, t, ...])   Find the B-spline representation of an N-dimensional curve.
splev(x, tck[, der, ext])                        Evaluate a B-spline or its derivatives.
splint(a, b, tck[, full_output])                 Evaluate the definite integral of a B-spline.
sproot(tck[, mest])                              Find the roots of a cubic B-spline.
spalde(x, tck)                                   Evaluate all derivatives of a B-spline.
bisplrep(x, y, z[, w, xb, xe, yb, ye, kx, ...])  Find a bivariate B-spline representation of a surface.
bisplev(x, y, tck[, dx, dy])                     Evaluate a bivariate B-spline and its derivatives.

scipy.interpolate.splrep(x, y, w=None, xb=None, xe=None, k=3, task=0, s=None, t=None, full_output=0, per=0, quiet=1)
Find the B-spline representation of 1-D curve.

Given the set of data points (x[i], y[i]) determine a smooth spline approximation of degree k on the interval xb <= x <= xe. Uses the FORTRAN routine curfit from FITPACK.
super-sampled version of the noisy gaussian x. Evaluate spline (or its nu-th derivative) at positions x. ye.random import randn from scipy. task.interpolate) 243 . xb=None. y[i]) determine a smooth spline approximation of degree k on the interval xb <= x <= xe.SciPy Reference Guide.0. xe=None. c. t=None. Given the set of data points (x[i].roots() UnivariateSpline. Find the B-spline representation of an N-dimensional curve. u.

Default is ones(len(x)). 1 <= k <= 5 task : {1. y. If the errors in the y values have standard-deviation given by the vector d. Release 0.g))**2. If the weights represent the inverse of the standard-deviation of y. Even order splines should be avoided especially with small s values. w. Returns tck : tuple 244 Chapter 4.0 (interpolating) if no weights are supplied. If given then task is automatically set to -1. Larger s means more smoothing while smaller values of s indicate less smoothing. data points are considered periodic with period x[m-1] . The weights are used in computing the weighted least-squares spline ﬁt. y : array_like The data points deﬁning a curve y = f(x). quiet : bool Non-zero to suppress messages. The user can use s to control the tradeoff between closeness and smoothness of ﬁt. and w.y). s = 0. then w should be 1/d. full_output : bool If non-zero. these default to x[0] and x[-1] respectively. then return optional outputs. s. s. s : ﬂoat A smoothing condition. The amount of smoothness is determined by satisfying the conditions: sum((w * (y . per : bool If non-zero. These should be interior knots as knots on the ends will be added automatically. t. If None. default : s=m-sqrt(2*m) if weights are supplied. There must have been a previous call with task=0 or task=1 for the same set of data (t will be stored an used internally) If task=-1 ﬁnd the weighted least square spline for a given set of knots.SciPy Reference Guide.0rc1 Parameters x.m+sqrt(2*m)) where m is the number of datapoints in x. It is recommended to use cubic splines.10. Reference . Values of y[m-1] and w[m-1] are not used. Recommended values of s depend on the weights.x[0] and a smooth periodic spline approximation is returned. k : int The order of the spline ﬁt. xe : ﬂoat The interval to ﬁt. xb. 0. then a good s value should be found in the range (m-sqrt(2*m).axis=0) <= s where g(x) is the smoothed interpolation of (x. If task==1 ﬁnd t and c for another value of the smoothing factor. 
w : array_like Strictly positive rank-1 array of weights the same length as x and y. t : int The knots needed for task=-1. -1} If task==0 ﬁnd t and c for a given smoothing factor.

optional An integer ﬂag about splrep success.c. these values are calculated automatically as M = len(x[0]): v[0] = 0 v[i] = v[i-1] + distance(x[i]. Given a list of N rank-1 arrays. k=3.interpolate) 245 . If not given. which represent a curve in N-dimensional space parametrized by u. See Also: UnivariateSpline. splprep. splint. nest=None. u : array_like. msg : str. y2) BivariateSpline. splev. bisplev Notes See splev for evaluation of the spline and its derivatives. optional The weighted sum of squared residuals of the spline approximation.SciPy Reference Guide. full_output=0.7. Release 0. s=None. y) x2 = linspace(0. Parameters x : array_like A list of sample vector arrays representing the curve. optional The end-points of the parameters interval. ue=None. 10) y = sin(x) tck = splrep(x. [3]. y. bisplrep. x2. Interpolation (scipy. ue : int. and [4]: [R22]. task=0.interpolate.10. Success is indicated if ier<=0. fp : array. x. spalde. tck) plot(x. quiet=1) Find the B-spline representation of an N-dimensional curve. optional A message corresponding to the integer ﬂag. optional An array of parameter values.splprep(x. 10. optional 4. t=None. References Based on algorithms described in [1].0rc1 (t. Uses the FORTRAN routine parcur from FITPACK. ﬁnd a smooth approximating spline curve g(u). u=None.k) a tuple containing the vector of knots. [R24]. [R25] Examples >>> >>> >>> >>> >>> >>> x = linspace(0. [2]. If ier in [1. ub=None. Otherwise an error is raised. the B-spline coefﬁcients. Defaults to u[0] and u[-1]. scipy. ier : int. w=None.3] an error occurred but was not raised. 10. and the degree of the spline.x[i-1]) u[i] = v[i] / v[M-1] ub. [R23]. k : int. ’o’. 200) y2 = splev(x2. sproot.2. per=0. ier.

    k : int, optional
        Degree of the spline. Cubic splines are recommended. Even values of k should be avoided especially with a small s-value. 1 <= k <= 5, default is 3.
    task : int, optional
        If task==0 (default), find t and c for a given smoothing factor, s.
        If task==1, find t and c for another value of the smoothing factor, s. There must have been a previous call with task=0 or task=1 for the same set of data.
        If task==-1, find the weighted least square spline for a given set of knots, t.
    s : float, optional
        A smoothing condition. The amount of smoothness is determined by satisfying the conditions: sum((w * (y - g))**2, axis=0) <= s, where g(x) is the smoothed interpolation of (x, y). The user can use s to control the trade-off between closeness and smoothness of fit. Larger s means more smoothing while smaller values of s indicate less smoothing. Recommended values of s depend on the weights, w. If the weights represent the inverse of the standard-deviation of y, then a good s value should be found in the range (m-sqrt(2*m), m+sqrt(2*m)), where m is the number of data points in x, y, and w.
    t : array_like, optional
        The knots needed for task=-1.
    nest : int, optional
        An over-estimate of the total number of knots of the spline to help in determining the storage space. By default nest=m/2. Always large enough is nest=m+k+1.
    full_output : int, optional
        If non-zero, then return optional outputs.
    per : int, optional
        If non-zero, data points are considered periodic with period x[m-1] - x[0] and a smooth periodic spline approximation is returned. Values of y[m-1] and w[m-1] are not used.
    quiet : int, optional
        Non-zero to suppress messages.

Returns
    tck : tuple
        A tuple (t, c, k) containing the vector of knots, the B-spline coefficients, and the degree of the spline.
    u : array
        An array of the values of the parameter.
    fp : float
        The weighted sum of squared residuals of the spline approximation.
    ier : int
        An integer flag about splrep success. Success is indicated if ier<=0. If ier in [1, 2, 3] an error occurred but was not raised. Otherwise an error is raised.
    msg : str
        A message corresponding to the integer flag.

See Also
    splrep, splev, sproot, spalde, splint, bisplrep, bisplev, UnivariateSpline, BivariateSpline

Notes
See splev for evaluation of the spline and its derivatives.

References
[R19], [R20], [R21]

scipy.interpolate.splev(x, tck, der=0, ext=0)
Evaluate a B-spline or its derivatives.

Given the knots and coefficients of a B-spline representation, evaluate the value of the smoothing polynomial and its derivatives. This is a wrapper around the FORTRAN routines splev and splder of FITPACK.

Parameters
    x : array_like
        A 1-D array of points at which to return the value of the smoothed spline or its derivatives. If tck was returned from splprep, then the parameter values, u, should be given.
    tck : tuple
        A sequence of length 3 returned by splrep or splprep containing the knots, coefficients, and degree of the spline.
    der : int
        The order of derivative of the spline to compute (must be less than or equal to k).
    ext : int
        Controls the value returned for elements of x not in the interval defined by the knot sequence.

        - if ext=0, return the extrapolated value.
        - if ext=1, return 0.
        - if ext=2, raise a ValueError.

        The default value is 0.

Returns
    y : ndarray or list of ndarrays
        An array of values representing the spline function evaluated at the points in x. If tck was returned from splprep, then this is a list of arrays representing the curve in N-dimensional space.

See Also
    splprep, splrep, sproot, spalde, splint, bisplrep, bisplev, UnivariateSpline, BivariateSpline

References
[R14], [R15], [R16]
splint(a. b. Given the knots and coefﬁcients of a B-spline representation. splev. [R15]. If tck was returned from splprep. sproot. • if ext=1. spalde. References [R19]. ext : int Controls the value returned for elements of x not in the interval deﬁned by the knot sequence.interpolate. evaluate the value of the smoothing polynomial and its derivatives. return 0 • if ext=2. der : int The order of derivative of the spline to compute (must be less than or equal to k). tck : tuple A sequence of length 3 returned by splrep or splprep containing the knots. splint.0rc1 See Also: splrep. coefﬁcients.7. [R16] scipy.interpolate.interpolate) 247 . bisplrep. sproot. [R21] scipy. This is a wrapper around the FORTRAN routines splev and splder of FITPACK. Returns y : ndarray or list of ndarrays An array of values representing the spline function evaluated at the points in x. bisplev References [R14]. Release 0. der=0. Parameters x : array_like A 1-D array of points at which to return the value of the smoothed spline or its derivatives. If tck was returned from splrep. return the extrapolated value. Interpolation (scipy. bisplrep. u should be given.10. 4. • if ext=0. splint. tck. UnivariateSpline. BivariateSpline Notes See splev for evaluation of the spline and its derivatives. splrep. raise a ValueError The default value is 0. bisplev. ext=0) Evaluate a B-spline or its derivatives. spalde. then the parameter values. and degree of the spline.splev(x. See Also: splprep. [R20]. then this is a list of arrays representing the curve in N-dimensional space.SciPy Reference Guide. full_output=0) Evaluate the deﬁnite integral of a B-spline. tck.

See Also: splprep. splev. b : ﬂoat The end-points of the integration interval. spalde.c. [R18] scipy. See Also: splprep.interpolate. splrep.0rc1 Given the knots and coefﬁcients of a B-spline. the B-spline coefﬁcients. bisplrep.sproot(tck. Returns zeros : ndarray An array giving the roots of the spline. Returns integral : ﬂoat The resulting integral. evaluate the deﬁnite integral of the smoothing polynomial between two given points. The knots must be a montonically increasing sequence.SciPy Reference Guide. mest=10) Find the roots of a cubic B-spline. Parameters tck : tuple A tuple (t. bisplrep. UnivariateSpline. Release 0.k) containing the vector of knots. The number of knots must be >= 8. UnivariateSpline. splev. and the degree of the spline.c. sproot. splint. bisplev. bisplev. mest : int An estimate of the number of zeros (Default is 10). and the degree of the spline (see splev). wrk : ndarray An array containing the integrals of the normalized B-splines deﬁned on the set of knots. Parameters a. BivariateSpline 248 Chapter 4. full_output : int.10. optional Non-zero to return optional output. tck : tuple A tuple (t. Given the knots (>=8) and coefﬁcients of a cubic B-spline return the roots of the spline. BivariateSpline References [R17]. the B-spline coefﬁcients.k) containing the vector of knots. splrep. Reference . spalde.

tx=None. See Also: splprep. 4. xe=None. ty=None. eps=9. and the degree of the spline.10. y[i]. splrep. task=0. [R28] scipy. Given the knots and coefﬁcients of a cubic B-spline compute all derivatives up to order k at a point (or set of points). nyest=None. the B-spline coefﬁcients.y). full_output=0. w=None.7. [R13] scipy. splev. quiet=1) Find a bivariate B-spline representation of a surface. ky=3. Parameters tck : tuple A tuple (t. optional End points of approximation interval in x. By default w=np. splint. x : array_like A point or a set of points at which to evaluate the derivatives. tck) Evaluate all derivatives of a B-spline. UnivariateSpline. ky : int. xe=x. y. optional Rank-1 array of weights. optional End points of approximation interval in y.0rc1 References [R26]. Given a set of data points (x[i]. s=None. yb=None.k) containing the vector of knots. z[i]) representing a surface z=f(x.interpolate.c. BivariateSpline References [R11]. compute a B-spline representation of the surface. Returns results : array_like An array (or a list of arrays) containing all derivatives up to order k inclusive for each point x. ye : ﬂoat. xe : ﬂoat. Release 0. bisplev. xb. xb=None.9999999999999998e-17. Based on the routine SURFIT from FITPACK. z.max().interpolate) 249 .SciPy Reference Guide. Interpolation (scipy. z : ndarray Rank-1 arrays of data points. Parameters x. y.spalde(x.min(). By default yb=y. kx. w : ndarray.min(). yb. [R12]. bisplrep. [R27].bisplrep(x. kx=3. sproot.interpolate. optional By default xb = x. ye=None. ye = y.ones(len(x)). Note that t(k) <= x <= t(n-k+1) must hold for each x. nxest=None.max().

bisplrep must have been previously called with task=0 or task=1. optional Over-estimates of the total number of knots.3] an error occurred but was not raised. then nxest = max(kx+sqrt(m/2). optional If task=0.SciPy Reference Guide.2*kx+3). ty) and coefﬁcients (c) of the bivariate B-spline representation of the surface along with the degree of the spline. optional A non-negative smoothing factor. tx. ﬁnd knots and coefﬁcients for another value of the smoothing factor. optional Non-zero to suppress printing of messages. ﬁnd coefﬁcients for a given set of knots tx. ky <= 5). Returns tck : array_like A list [tx. BivariateSpline Notes See bisplev to evaluate the value of the B-spline given its tck representation. ty : ndarray.0rc1 The degrees of the spline (1 <= kx. If task=1. If task=-1. optional A threshold for determining the effective rank of an over-determined linear system of equations (0 < eps < 1). splrep. ier : int An integer ﬂag about splrep success. eps : ﬂoat. If weights correspond to the inverse of the standard-deviation of the errors in z. eps is not likely to need changing. If ier in [1. optional Non-zero to return optional outputs. nyest : int.10. sproot. ier. See Also: splprep. quiet : int. fp : ndarray The weighted sum of squared residuals of the spline approximation. msg : str A message corresponding to the integer ﬂag. splev. max(ky+sqrt(m/2).2*ky+3). Third order (kx=ky=3) is recommended. ky] containing the knots (tx. UnivariateSpline. splint. s : ﬂoat. kx. then a good s-value should be found in the range (m-sqrt(2*m). If None nyest = 250 Chapter 4. ﬁnd knots in x and y and coefﬁcients for a given smoothing factor. nxest. Otherwise an error is raised. Reference .2. ty.m+sqrt(2*m)) where m=len(x). s. ty. Success is indicated if ier<=0. task : int. s. optional Rank-1 arrays of the knots of the spline for task=-1 full_output : int. Release 0. c.

ty. References [R5]. . splint.size). Parameters x. ky].RectBivariateSpline(x.4 2-D Splines For data on a grid: RectBivariateSpline(x. y : ndarray Rank-1 arrays specifying the domain over which to evaluate the spline or its derivative. optional The orders of the partial derivatives in x and y respectively. splev. z : array_like 2-D array of data with shape (x. Parameters x. Return a rank-2 array of spline function values (or spline derivative values) at points given by the cross-product of the rank-1 arrays x and y. None]. Based on BISPEV from FITPACK. [R6]. kx. Release 0. z.bisplev(x. y. dy=0) Evaluate a bivariate B-spline and its derivatives.size.SciPy Reference Guide. c.interpolate. Returns vals : ndarray The B-spline or its derivative evaluated over the set formed by the cross-product of x and y. class scipy. the coefﬁcients. None. BivariateSpline Notes See bisplrep to generate the tck representation. Interpolation (scipy.7.0rc1 References [R8]. In special cases. sproot.y : array_like 1-D arrays of coordinates in strictly ascending order.interpolate) 251 . [R9]. y. None. s=0) Bivariate spline approximation over a rectangular mesh. return an array or just a ﬂoat if either x or y or both are ﬂoats..10. tck : tuple A sequence of length 5 returned by bisplrep containing the knot locations.]) Bivariate spline approximation over a rectangular mesh. kx=3. z. y. tck. dx. See Also: splprep. [R10] scipy. dy : int. bbox=[None.7.y.. UnivariateSpline. 4. ky=3. and the degree of the spline: [tx. dx=0.interpolate. None. Can be used for both smoothing and interpolating data. splrep. [R7] 4. None[.

y) of degrees kx and ky on the rectangle [xb.y[i])))**2. condition: Default is class scipy..y. ye] calculated from a given set of data points (x.axis=0) <= s s=0.xe] x [yb.max(x.. Release 0. See Also: bisplrep.xe] x [yb. optional Sequence of length 4 specifying the boundary of the rectangular approximation domain. ye] calculated from a given set of data points (x.BivariateSpline Bivariate spline s(x. Default is 3. y. z.z). Smooth bivariate spline approximation.ty). Reference . . None. s : ﬂoat.10. z. Weighted least-squares bivariate spline approximation.]) LSQBivariateSpline(x. y. bbox=[min(x. See Also: SmoothBivariateSpline a smoothing bivariate spline for scattered data bisplrep. By default. ty.ty)].y) of degrees kx and ky on the rectangle [xb. ky : ints.z).0rc1 bbox : array_like.y. min(y. None[.max(y. optional Positive smoothing factor deﬁned for estimation sum((w[i]*(z[i]-s(x[i].interpolate. kx. None. tx.tx). which is for interpolation. bisplev UnivariateSpline a similar class for univariate spline interpolation Methods __call__ ev get_coeffs get_knots get_residual integral For unstructured data: BivariateSpline SmoothBivariateSpline(x. None) Bivariate spline s(x.SciPy Reference Guide. bisplev UnivariateSpline a similar class for univariate spline interpolation SmoothUnivariateSpline to create a BivariateSpline through the given points LSQUnivariateSpline to create a BivariateSpline using weighted least-squares ﬁtting 252 Chapter 4. optional Degrees of the bivariate spline.tx).

10. ky : ints. w=None. z. Parameters x.interpolate. ky=3. s : ﬂoat. bbox=[None.max(x.SmoothBivariateSpline(x. Interpolation (scipy. None].ty)]. By default.0rc1 Methods __call__ ev get_coeffs get_knots get_residual integral class scipy.axis=0) <= s Default s=len(w) which should be a good value if 1/w[i] is an estimate of the standard deviation of z[i]. kx. bbox : array_like. the default is 1e-16.7. optional Positive 1-D sequence of weights. Default is 3. s=None. 4. min(y.ty).max(y. None. Release 0. y.interpolate) 253 . w : array_lie. eps=None) Smooth bivariate spline approximation.y[i])))**2.tx). optional Sequence of length 4 specifying the boundary of the rectangular approximation domain. eps should have a value between 0 and 1. kx=3. None.SciPy Reference Guide. optional A threshold for determining the effective rank of an over-determined linear system of equations. eps : ﬂoat. y and z should be at least (kx+1) * (ky+1). optional Degrees of the bivariate spline. optional Positive smoothing factor deﬁned for estimation condition: sum((w[i]*(z[i]-s(x[i]. y. See Also: bisplrep. bbox=[min(x.tx). z : array_like 1-D sequences of data points (order is not important). bisplev UnivariateSpline a similar class for univariate spline interpolation LSQUnivariateSpline to create a BivariateSpline using weighted Notes The length of x.

254 Chapter 4. By default. min(y. kx. w=None.axis=0) <= s Default s=len(w) which should be a good value if 1/w[i] is an estimate of the standard deviation of z[i]. optional Sequence of length 4 specifying the boundary of the rectangular approximation domain. bbox : array_like.tx). eps : ﬂoat. y and z should be at least (kx+1) * (ky+1).ty)]. kx=3. optional Positive smoothing factor deﬁned for estimation condition: sum((w[i]*(z[i]-s(x[i]. tx.max(y. w : array_lie.SciPy Reference Guide. optional Degrees of the bivariate spline. bbox=[min(x. eps=None) Weighted least-squares bivariate spline approximation.10. None].tx). None. y. bisplev UnivariateSpline a similar class for univariate spline interpolation SmoothUnivariateSpline To create a BivariateSpline through the given points Notes The length of x. z : array_like 1-D sequences of data points (order is not important).interpolate. Reference . Default is 3. See Also: bisplrep. optional A threshold for determining the effective rank of an over-determined linear system of equations. tx. s : ﬂoat. ky=3. eps should have a value between 0 and 1. bbox=[None. z. the default is 1e-16. Parameters x.max(x. None. ky : ints. y.ty). ty.y[i])))**2. optional Positive 1-D sequence of weights. Release 0.0rc1 Methods __call__ ev get_coeffs get_knots get_residual integral class scipy. ty : array_like Strictly ordered 1-D sequences of knots coordinates.LSQBivariateSpline(x.

If task=1. y.9999999999999998e-17. w=None. Parameters x. yb.ones(len(x)). optional If task=0. optional By default xb = x. ye. xb=None. z : ndarray Rank-1 arrays of data points. Based on the routine SURFIT from FITPACK.7. ﬁnd knots and coefﬁcients for another value of the smoothing factor..]) bisplev(x. Interpolation (scipy. xb.0rc1 Methods __call__ ev get_coeffs get_knots get_residual integral Low-level interface to FITPACK functions: bisplrep(x. w : ndarray. s : ﬂoat. optional Rank-1 array of weights. dx. eps is not likely to need changing. If weights correspond to the inverse of the standard-deviation of the errors in z. bisplrep must have been previously called with task=0 or task=1. compute a B-spline representation of the surface. optional End points of approximation interval in y.min(). task : int. quiet=1) Find a bivariate B-spline representation of a surface. s. ye : ﬂoat. xe=x. ye=None. xe=None. Third order (kx=ky=3) is recommended. full_output=0. yb=None. w. xe.m+sqrt(2*m)) where m=len(x). optional A non-negative smoothing factor. kx=3.SciPy Reference Guide. optional End points of approximation interval in x. eps=9. y. tx. yb.max(). kx. ye = y. z[. . y. If task=-1.max(). ﬁnd coefﬁcients for a given set of knots tx. task=0. tx=None. By default w=np.. ty : ndarray.interpolate) 255 . s=None. nyest=None. xb.10. ky <= 5). xe : ﬂoat. Release 0. kx. ﬁnd knots in x and y and coefﬁcients for a given smoothing factor. eps : ﬂoat. nxest=None.y).min(). z. then a good s-value should be found in the range (m-sqrt(2*m). z[i]) representing a surface z=f(x. optional The degrees of the spline (1 <= kx. scipy. ty=None. y.bisplrep(x. ky : int. By default yb=y. Given a set of data points (x[i].interpolate. s. 4. y[i]. ty. dy]) Find a bivariate B-spline representation of a surface. ky=3. tck[. Evaluate a bivariate B-spline and its derivatives. optional A threshold for determining the effective rank of an over-determined linear system of equations (0 < eps < 1).

tck : tuple A sequence of length 5 returned by bisplrep containing the knot locations. [R10] scipy. If ier in [1. kx. the coefﬁcients. ky] containing the knots (tx. nyest : int.2*ky+3). ier : int An integer ﬂag about splrep success. Returns tck : array_like A list [tx. return an array or just a ﬂoat if either x or y or both are ﬂoats. max(ky+sqrt(m/2). Release 0. and the degree of the spline: [tx. References [R8]. tck. nxest. optional Over-estimates of the total number of knots.2*kx+3). Return a rank-2 array of spline function values (or spline derivative values) at points given by the cross-product of the rank-1 arrays x and y. c.2. Reference . optional Non-zero to suppress printing of messages. Success is indicated if ier<=0. UnivariateSpline. ty. ier. See Also: splprep. c.bisplev(x. optional If None nyest = 256 Chapter 4. optional Non-zero to return optional outputs.SciPy Reference Guide. ky]. msg : str A message corresponding to the integer ﬂag. In special cases. dy=0) Evaluate a bivariate B-spline and its derivatives. ty. splrep.interpolate. y : ndarray Rank-1 arrays specifying the domain over which to evaluate the spline or its derivative. dy : int. kx.0rc1 Rank-1 arrays of the knots of the spline for task=-1 full_output : int. dx. Otherwise an error is raised. then nxest = max(kx+sqrt(m/2). [R9]. y. dx=0. quiet : int. ty) and coefﬁcients (c) of the bivariate B-spline representation of the surface along with the degree of the spline. splint.3] an error occurred but was not raised. splev. Based on BISPEV from FITPACK.10. BivariateSpline Notes See bisplev to evaluate the value of the B-spline given its tck representation. Parameters x. fp : ndarray The weighted sum of squared residuals of the spline approximation. sproot.

w). x : scalar The point at which the polynomial is to be evaluated. Should accept a vector of x values. w) Return a Lagrange interpolating polynomial. i. References [R5]. scipy.lagrange(x. Release 0. splev. BivariateSpline Notes See bisplrep to generate the tck representation.0rc1 The orders of the partial derivatives in x and y respectively. x. Return a Lagrange interpolating polynomial. x. sproot. Do not expect to be able to use more than about 20 points even if they are chosen optimally.interpolate.interpolate) 257 .e. w) approximate_taylor_polynomial(f. splint. Given two 1-D arrays x and w. Estimate the Taylor polynomial of f at x by polynomial ﬁtting. See Also: splprep. [R6]. scale. .approximate_taylor_polynomial(f. Parameters x : array_like x represents the x-coordinates of a set of datapoints. Warning: This implementation is numerically unstable. splrep.SciPy Reference Guide. degree : int The degree of the Taylor polynomial scale : scalar The width of the interval to use to evaluate the Taylor polynomial. Function values spread over a range this wide are used to ﬁt the polynomial. f(x).7.) scipy. Interpolation (scipy. [R7] 4.10. degree. UnivariateSpline.. degree. Must be chosen carefully.interpolate. returns the Lagrange interpolating polynomial through the points (x. 4. order=None) Estimate the Taylor polynomial of f at x by polynomial ﬁtting. Parameters f : callable The function whose Taylor polynomial is sought. w : array_like w represents the y-coordinates of a set of datapoints.. Returns vals : ndarray The B-spline or its derivative evaluated over the set formed by the cross-product of x and y.7.5 Additional tools lagrange(x.

See Also
scipy.ndimage.map_coordinates, scipy.signal.bspline,
scipy.signal.gauss_spline, scipy.signal.qspline1d, scipy.signal.cspline1d,
scipy.signal.qspline1d_eval, scipy.signal.cspline1d_eval,
scipy.signal.qspline2d, scipy.signal.cspline2d,
scipy.signal.spline_filter, scipy.signal.resample

4.8 Input and output (scipy.io)

SciPy has many modules, classes, and functions available to read data
from and write data to a variety of file formats.

See Also
numpy-reference.routines.io (in Numpy)

4.8.1 MATLAB® files

loadmat(file_name[, mdict, appendmat])       Load MATLAB file
savemat(file_name, mdict[, appendmat, ...])  Save a dictionary of names and arrays into a MATLAB-style .mat file.

scipy.io.loadmat(file_name, mdict=None, appendmat=True, **kwargs)
    Load MATLAB file

    Parameters
    file_name : str
        Name of the mat file (do not need .mat extension if
        appendmat==True). Can also pass open file-like object.
    mdict : dict, optional
        Dictionary in which to insert matfile variables.
    appendmat : bool, optional
        True to append the .mat extension to the end of the given
        filename, if not already present.
    byte_order : str or None, optional
        None by default, implying byte order guessed from mat file.
        Otherwise can be one of ('native', '=', 'little', '<', 'BIG', '>').

    mat_dtype : bool, optional
        If True, return arrays in same dtype as would be loaded into
        MATLAB (instead of the dtype with which they are saved).
    squeeze_me : bool, optional
        Whether to squeeze unit matrix dimensions or not.
    chars_as_strings : bool, optional
        Whether to convert char arrays to string arrays.
    matlab_compatible : bool, optional
        Returns matrices as would be loaded by MATLAB (implies
        squeeze_me=False, chars_as_strings=False, mat_dtype=True,
        struct_as_record=True).
    struct_as_record : bool, optional
        Whether to load MATLAB structs as numpy record arrays, or as
        old-style numpy arrays with dtype=object. Setting this flag to
        False replicates the behavior of scipy version 0.7.x (returning
        numpy object arrays). The default setting is True, because it
        allows easier round-trip load and save of MATLAB files.
    variable_names : None or sequence
        If None (the default) - read all variables in file. Otherwise
        variable_names should be a sequence of strings, giving names of
        the matlab variables to read from the file. The reader will skip
        any variable with a name not in this sequence, possibly saving
        some read processing.

    Returns
    mat_dict : dict
        dictionary with variable names as keys, and loaded matrices as
        values

    Notes
    v4 (Level 1.0), v6 and v7 to 7.2 matfiles are supported.

    You will need an HDF5 python library to read matlab 7.3 format mat
    files. Because scipy does not supply one, we do not implement the
    HDF5 / 7.3 interface here.

scipy.io.savemat(file_name, mdict, appendmat=True, format='5',
long_field_names=False, do_compression=False, oned_as=None)
    Save a dictionary of names and arrays into a MATLAB-style .mat file.

    This saves the array objects in the given dictionary to a
    MATLAB-style .mat file.

    Parameters
    file_name : str or file-like object
        Name of the .mat file (.mat extension not needed if
        appendmat == True). Can also pass open file-like object.
    mdict : dict
        Dictionary from which to save matfile variables.
    appendmat : bool, optional
        True (the default) to append the .mat extension to the end of
        the given filename, if not already present.
    format : {'5', '4'}, optional
        '5' (the default) for MATLAB 5 and up (to 7.2), '4' for
        MATLAB 4 .mat files.
    long_field_names : bool, optional
        False (the default) - maximum field name length in a structure
        is 31 characters which is the documented maximum length.
        True - maximum field name length in a structure is 63 characters
        which works for MATLAB 7.6+.
    do_compression : bool, optional
        Whether or not to compress matrices on write. Default is False.
    oned_as : {'column', 'row', None}, optional
        If 'column', write 1-D numpy arrays as column vectors. If 'row',
        write 1-D numpy arrays as row vectors. If None (the default),
        the behavior depends on the value of format (see Notes below).

    See Also
    mio4.MatFile4Writer, mio5.MatFile5Writer

    Notes
    If format == '4', mio4.MatFile4Writer is called, which sets oned_as
    to 'row' if it had been None. If format == '5',
    mio5.MatFile5Writer is called, which sets oned_as to 'column' if it
    had been None, but first it executes:

    warnings.warn("Using oned_as default value ('column')" +
                  " This will change to 'row' in future versions",
                  FutureWarning, stacklevel=2)

    without being more specific as to precisely when the change will
    take place.
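A round-trip sketch for savemat and loadmat (the file name and array are illustrative; oned_as is passed explicitly to sidestep the default-value warning described in the Notes):

```python
import os
import tempfile
import numpy as np
from scipy.io import savemat, loadmat

# Save a 1-D array to a MATLAB 5 format file, then load it back.
fname = os.path.join(tempfile.mkdtemp(), 'demo.mat')
savemat(fname, {'vec': np.arange(4)}, oned_as='row')

contents = loadmat(fname)
# MATLAB has no 1-D arrays: with oned_as='row' the vector comes
# back as a (1, 4) row vector under its original variable name.
vec = contents['vec']
```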

4.8.2 IDL® files

readsav(file_name[, idict, python_dict, ...])   Read an IDL .sav file

scipy.io.readsav(file_name, idict=None, python_dict=False,
uncompressed_file_name=None, verbose=False)
    Read an IDL .sav file

    Parameters
    file_name : str
        Name of the IDL save file.
    idict : dict, optional
        Dictionary in which to insert .sav file variables.
    python_dict : bool, optional
        By default, the object return is not a Python dictionary, but a
        case-insensitive dictionary with item, attribute, and call
        access to variables. To get a standard Python dictionary, set
        this option to True.
    uncompressed_file_name : str, optional
        This option only has an effect for .sav files written with the
        /compress option. If a file name is specified, compressed .sav
        files are uncompressed to this file. Otherwise, readsav will use
        the tempfile module to determine a temporary filename
        automatically, and will remove the temporary file upon
        successfully reading it in.
    verbose : bool, optional
        Whether to print out information about the save file, including
        the records read, and available variables.

    Returns
    idl_dict : AttrDict or dict
        If python_dict is set to False (default), this function returns
        a case-insensitive dictionary with item, attribute, and call
        access to variables. If python_dict is set to True, this
        function returns a Python dictionary with all variable names in
        lowercase. If idict was specified, then variables are written to
        the dictionary specified, and the updated dictionary is
        returned.

4.8.3 Matrix Market files

mminfo(source)                               Queries the contents of the Matrix Market file 'filename' to extract size and storage information.
mmread(source)                               Reads the contents of a Matrix Market file 'filename' into a matrix.
mmwrite(target, a[, comment, field, precision])   Writes the sparse or dense matrix A to a Matrix Market formatted file.

scipy.io.mminfo(source)
    Queries the contents of the Matrix Market file 'filename' to extract
    size and storage information.

    Parameters
    source : file
        Matrix Market filename (extension .mtx) or open file object

    Returns
    rows, cols : int
        Number of matrix rows and columns
    entries : int
        Number of non-zero entries of a sparse matrix or rows*cols for
        a dense matrix
    format : {'coordinate', 'array'}
    field : {'real', 'complex', 'pattern', 'integer'}
    symm : {'general', 'symmetric', 'skew-symmetric', 'hermitian'}

scipy.io.mmread(source)
    Reads the contents of a Matrix Market file 'filename' into a matrix.

    Parameters
    source : file
        Matrix Market filename (extensions .mtx, .mtz.gz) or open file
        object.

    Returns
    a :
        Sparse or full matrix
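A round-trip sketch combining mminfo and mmread with mmwrite (documented next in this section); the file name and the identity matrix are illustrative:

```python
import os
import tempfile
import numpy as np
from scipy import sparse
from scipy.io import mmwrite, mmread, mminfo

# Write a 3x3 sparse identity matrix in coordinate format, then query
# and re-read it.
fname = os.path.join(tempfile.mkdtemp(), 'demo.mtx')
a = sparse.identity(3, format='coo')
mmwrite(fname, a)

info = mminfo(fname)      # (rows, cols, entries, format, field, symm)
b = mmread(fname)         # comes back as a sparse matrix
```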

scipy.io.mmwrite(target, a, comment='', field=None, precision=None)
    Writes the sparse or dense matrix A to a Matrix Market formatted
    file.

    Parameters
    target : file
        Matrix Market filename (extension .mtx) or open file object
    a : array like
        Sparse or full matrix
    comment : str
        comments to be prepended to the Matrix Market file
    field : {'real', 'complex', 'pattern', 'integer'}, optional
    precision : int, optional
        Number of digits to display for real or complex values.

4.8.4 Other

save_as_module([file_name, data])   Save the dictionary "data" into a module and shelf named save.

scipy.io.save_as_module(file_name=None, data=None)
    Save the dictionary "data" into a module and shelf named save.

    Parameters
    file_name : str, optional
        File name of the module to save.
    data : dict, optional
        The dictionary to store in the module.

4.8.5 Wav sound files (scipy.io.wavfile)

read(file)                  Return the sample rate (in samples/sec) and data from a WAV file
write(filename, rate, data) Write a numpy array as a WAV file

scipy.io.wavfile.read(file)
    Return the sample rate (in samples/sec) and data from a WAV file

    Parameters
    file : file
        Input wav file.

    Returns
    rate : int
        Sample rate of wav file
    data : numpy array
        Data read from wav file

    Notes
    - The file can be an open file or a filename.
    - The returned sample rate is a Python integer.
    - The data is returned as a numpy array with a data-type determined
      from the file.
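A round-trip sketch for read and its companion write (the sample rate, frequency, and amplitude are illustrative):

```python
import os
import tempfile
import numpy as np
from scipy.io import wavfile

# One second of a 440 Hz sine wave as 16-bit mono PCM.
rate = 8000
t = np.arange(rate) / float(rate)
data = (0.5 * np.sin(2 * np.pi * 440 * t) * 32767).astype(np.int16)

fname = os.path.join(tempfile.mkdtemp(), 'tone.wav')
wavfile.write(fname, rate, data)

# Reading it back recovers the sample rate and the int16 samples;
# the bits-per-sample were determined by the array's data-type.
rate_in, data_in = wavfile.read(fname)
```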

scipy.io.wavfile.write(filename, rate, data)
    Write a numpy array as a WAV file

    Parameters
    filename : file
        The name of the file to write (will be over-written).
    rate : int
        The sample rate (in samples/sec).
    data : ndarray
        A 1-D or 2-D numpy array of integer data-type.

    Notes
    - Writes a simple uncompressed WAV file.
    - The bits-per-sample will be determined by the data-type.
    - To write multiple-channels, use a 2-D array of shape
      (Nsamples, Nchannels).

4.8.6 Arff files (scipy.io.arff)

loadarff(f)   Read an arff file.

scipy.io.arff.loadarff(f)
    Read an arff file.

    The data is returned as a record array, which can be accessed much
    like a dictionary of numpy arrays. For example, if one of the
    attributes is called 'pressure', then its first 10 data points can
    be accessed from the data record array like so:
    data['pressure'][0:10]

    Parameters
    f : file-like or str
        File-like object to read from, or filename to open.

    Returns
    data : record array
        The data of the arff file, accessible by attribute names.
    meta : MetaData
        Contains information about the arff file such as name and type
        of attributes, the relation (name of the dataset), etc.

    Raises
    ParseArffError :
        This is raised if the given file is not ARFF-formatted.
    NotImplementedError :
        The ARFF file has an attribute which is not supported yet.

    Notes
    This function should be able to read most arff files. Not
    implemented functionality include:
    - date type attributes
    - string type attributes

    It can read files with numeric and nominal attributes. It cannot
    read files with sparse data ({} in the file). However, this function
    can read files with missing data (? in the file), representing the
    data points as NaNs.
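A minimal sketch of loadarff; the ARFF content below (one numeric and one nominal attribute) is illustrative:

```python
import os
import tempfile
from scipy.io import arff

# A tiny hand-written ARFF file.
text = """@relation demo
@attribute width numeric
@attribute class {a,b}
@data
1.0,a
2.0,b
"""
fname = os.path.join(tempfile.mkdtemp(), 'demo.arff')
with open(fname, 'w') as fh:
    fh.write(text)

# loadarff returns the record array plus a MetaData object describing
# the attributes.
data, meta = arff.loadarff(fname)
widths = data['width']   # attribute access, like a dict of arrays
```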

4.8.7 Netcdf (scipy.io.netcdf)

netcdf_file(filename[, mode, mmap, version])            A file object for NetCDF data.
netcdf_variable(data, typecode, size, shape, ...)       A data object for the netcdf module.

class scipy.io.netcdf.netcdf_file(filename, mode='r', mmap=None, version=1)
    A file object for NetCDF data.

    A netcdf_file object has two standard attributes: dimensions and
    variables. The values of both are dictionaries, mapping dimension
    names to their associated lengths and variable names to variables,
    respectively. Application programs should never modify these
    dictionaries.

    Global file attributes are created by assigning to an attribute of
    the netcdf_file object. All other attributes correspond to global
    attributes defined in the NetCDF file.

    Parameters
    filename : string or file-like
        string -> filename
    mode : {'r', 'w'}, optional
        read-write mode, default is 'r'
    mmap : None or bool, optional
        Whether to mmap filename when reading. Default is True when
        filename is a file name, False when filename is a file-like
        object.
    version : {1, 2}, optional
        version of netcdf to read / write, where 1 means Classic format
        and 2 means 64-bit offset format. Default is 1. See here for
        more info.

    Methods
    close, createDimension, createVariable, flush, sync

class scipy.io.netcdf.netcdf_variable(data, typecode, size, shape,
dimensions, attributes=None)
    A data object for the netcdf module.

    netcdf_variable objects are constructed by calling the method
    netcdf_file.createVariable on the netcdf_file object.
    netcdf_variable objects behave much like array objects defined in
    numpy, except that their data resides in a file. Data is read by
    indexing and written by assigning to an indexed subset; the entire
    array can be accessed by the index [:] or (for scalars) by using
    the methods getValue and assignValue. netcdf_variable objects also
    have attribute shape with the same meaning as for arrays, but the
    shape cannot be modified. There is another read-only attribute
    dimensions, whose value is the tuple of dimension names.

    All other attributes correspond to variable attributes defined in
    the NetCDF file. Variable attributes are created by assigning to an
    attribute of the netcdf_variable object.

    Parameters
    data : array_like
        The data array that holds the values for the variable.
        Typically, this is initialized as empty, but with the proper
        shape.
    typecode : dtype character code
        Desired data-type for the data array.
    size : int
        Desired element size for the data array.
    shape : sequence of ints
        The shape of the array. This should match the lengths of the
        variable's dimensions.
    dimensions : sequence of strings
        The names of the dimensions used by the variable. Must be in
        the same order of the dimension lengths given by shape.
    attributes : dict, optional
        Attribute values (any type) keyed by string names.

    Attributes
    dimensions : list of str
        List of names of dimensions used by the variable object.
    isrec, shape

    Methods
    assignValue, getValue, itemsize, typecode

4.9 Linear algebra (scipy.linalg)

Linear algebra functions.

See Also
numpy.linalg for more linear algebra functions. Note that although
scipy.linalg imports most of them, identically named functions from
scipy.linalg may offer more or slightly differing functionality.

4.9.1 Basics

inv(a[, overwrite_a])                          Compute the inverse of a matrix.
solve(a, b[, sym_pos, lower, overwrite_a, ...]) Solve the equation a x = b for x
solve_banded(l_and_u, ab, b[, overwrite_ab, ...]) Solve the equation a x = b for x, assuming a is banded matrix.
solveh_banded(ab, b[, overwrite_ab, ...])      Solve equation a x = b, a is Hermitian positive-definite banded matrix.
solve_triangular(a, b[, trans, lower, ...])    Solve the equation a x = b for x, assuming a is a triangular matrix.
det(a[, overwrite_a])                          Compute the determinant of a matrix
norm(a[, ord])                                 Matrix or vector norm.
lstsq(a, b[, cond, overwrite_a, overwrite_b])  Compute least-squares solution to equation Ax = b.
pinv(a[, cond, rcond])                         Compute the (Moore-Penrose) pseudo-inverse of a matrix.
pinv2(a[, cond, rcond])                        Compute the (Moore-Penrose) pseudo-inverse of a matrix.
kron(a, b)                                     Kronecker product of a and b.
tril(m[, k])                                   Construct a copy of a matrix with elements above the k-th diagonal zeroed.
triu(m[, k])                                   Construct a copy of a matrix with elements below the k-th diagonal zeroed.

scipy.linalg.inv(a, overwrite_a=False)
    Compute the inverse of a matrix.

    Parameters
    a : array_like
        Square matrix to be inverted.
    overwrite_a : bool, optional
        Discard data in a (may improve performance). Default is False.

    Returns
    ainv : ndarray
        Inverse of the matrix a.

    Raises
    LinAlgError :
        If a is singular.
    ValueError :
        If a is not square, or not 2-dimensional.

    Examples
    >>> a = np.array([[1., 2.], [3., 4.]])
    >>> sp.linalg.inv(a)
    array([[-2. ,  1. ],
           [ 1.5, -0.5]])
    >>> np.dot(a, sp.linalg.inv(a))
    array([[ 1.,  0.],
           [ 0.,  1.]])

scipy.linalg.solve(a, b, sym_pos=False, lower=False, overwrite_a=False,
overwrite_b=False, debug=False)
    Solve the equation a x = b for x

    Parameters
    a : array, shape (M, M)
    b : array, shape (M,) or (M, N)
    sym_pos : boolean
        Assume a is symmetric and positive definite
    lower : boolean
        Use only data contained in the lower triangle of a, if sym_pos
        is true. Default is to use upper triangle.
    overwrite_a : boolean
        Allow overwriting data in a (may enhance performance)
    overwrite_b : boolean
        Allow overwriting data in b (may enhance performance)

    Returns
    x : array, shape (M,) or (M, N) depending on b
        Solution to the system a x = b

    Raises LinAlgError if a is singular

scipy.linalg.solve_banded((l, u), ab, b, overwrite_ab=False,
overwrite_b=False, debug=False)
    Solve the equation a x = b for x, assuming a is banded matrix.

    The matrix a is stored in ab using the matrix diagonal ordered form:

        ab[u + i - j, j] == a[i, j]

    Example of ab (shape of a is (6,6), u=1, l=2):

        *    a01  a12  a23  a34  a45
        a00  a11  a22  a33  a44  a55
        a10  a21  a32  a43  a54  *
        a20  a31  a42  a53  *    *

    Parameters
    (l, u) : (integer, integer)
        Number of non-zero lower and upper diagonals
    ab : array, shape (l+u+1, M)
        Banded matrix
    b : array, shape (M,) or (M, K)
        Right-hand side
    overwrite_ab : boolean
        Discard data in ab (may enhance performance)
    overwrite_b : boolean
        Discard data in b (may enhance performance)

    Returns
    x : array, shape (M,) or (M, K)
        The solution to the system a x = b
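A small sketch exercising solve and solve_banded on the same 2x2 system (the matrix and right-hand side are illustrative); the banded storage follows the diagonal ordered form ab[u + i - j, j] == a[i, j]:

```python
import numpy as np
from scipy import linalg

# Dense solve of a x = b.
a = np.array([[3.0, 1.0],
              [1.0, 2.0]])
b = np.array([9.0, 8.0])
x = linalg.solve(a, b)   # solves 3x+y=9, x+2y=8

# The same matrix viewed as banded with l = u = 1, in diagonal
# ordered form; entries marked unused in the layout are set to 0.
ab = np.array([[0.0, 1.0],    # superdiagonal (first entry unused)
               [3.0, 2.0],    # main diagonal
               [1.0, 0.0]])   # subdiagonal (last entry unused)
xb = linalg.solve_banded((1, 1), ab, b)
```

Both calls should return the same solution vector.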

scipy.linalg.solveh_banded(ab, b, overwrite_ab=False, overwrite_b=False,
lower=False)
    Solve equation a x = b. a is Hermitian positive-definite banded
    matrix.

    The matrix a is stored in ab either in lower diagonal or upper
    diagonal ordered form:

        ab[u + i - j, j] == a[i, j]   (if upper form; i <= j)
        ab[    i - j, j] == a[i, j]   (if lower form; i >= j)

    Example of ab (shape of a is (6,6), u=2):

    upper form:
        *    *    a02  a13  a24  a35
        *    a01  a12  a23  a34  a45
        a00  a11  a22  a33  a44  a55

    lower form:
        a00  a11  a22  a33  a44  a55
        a10  a21  a32  a43  a54  *
        a20  a31  a42  a53  *    *

    Cells marked with * are not used.

    Parameters
    ab : array, shape (u + 1, M)
        Banded matrix
    b : array, shape (M,) or (M, K)
        Right-hand side
    overwrite_ab : boolean
        Discard data in ab (may enhance performance)
    overwrite_b : boolean
        Discard data in b (may enhance performance)
    lower : boolean
        Is the matrix in the lower form. (Default is upper form)

    Returns
    x : array, shape (M,) or (M, K)
        The solution to the system a x = b

scipy.linalg.solve_triangular(a, b, trans=0, lower=False,
unit_diagonal=False, overwrite_b=False, debug=False)
    Solve the equation a x = b for x, assuming a is a triangular matrix.

    Parameters
    a : array, shape (M, M)
    b : array, shape (M,) or (M, N)
    lower : boolean
        Use only data contained in the lower triangle of a. Default is
        to use upper triangle.
    trans : {0, 1, 2, 'N', 'T', 'C'}
        Type of system to solve:

        trans       system
        0 or 'N'    a x = b
        1 or 'T'    a^T x = b
        2 or 'C'    a^H x = b
    unit_diagonal : boolean
        If True, diagonal elements of a are assumed to be 1 and will
        not be referenced.
    overwrite_b : boolean
        Allow overwriting data in b (may enhance performance)

    Returns
    x : array, shape (M,) or (M, N) depending on b
        Solution to the system a x = b

    Raises
    LinAlgError :
        If a is singular

    Notes
    New in version 0.9.

scipy.linalg.det(a, overwrite_a=False)
    Compute the determinant of a matrix

    Parameters
    a : array, shape (M, M)

    Returns
    det : float or complex
        Determinant of a

    Notes
    The determinant is computed via LU factorization, LAPACK routine
    z/dgetrf.

scipy.linalg.norm(a, ord=None)
    Matrix or vector norm.

    This function is able to return one of seven different matrix norms,
    or one of an infinite number of vector norms (described below),
    depending on the value of the ord parameter.

    Parameters
    x : array_like, shape (M,) or (M, N)
        Input array.
    ord : {non-zero int, inf, -inf, 'fro'}, optional
        Order of the norm (see table under Notes). inf means numpy's
        inf object.

    Returns
    n : float
        Norm of the matrix or vector.

    Notes
    For values of ord <= 0, the result is, strictly speaking, not a
    mathematical 'norm', but it may still be useful for various
    numerical purposes.

    The following norms can be calculated:

    ord      norm for matrices                norm for vectors
    None     Frobenius norm                   2-norm
    'fro'    Frobenius norm                   --
    inf      max(sum(abs(x), axis=1))         max(abs(x))
    -inf     min(sum(abs(x), axis=1))         min(abs(x))
    0        --                               sum(x != 0)
    1        max(sum(abs(x), axis=0))         as below
    -1       min(sum(abs(x), axis=0))         as below
    2        2-norm (largest sing. value)     as below
    -2       smallest singular value          as below
    other    --                               sum(abs(x)**ord)**(1./ord)

    The Frobenius norm is given by [R32]:

        ||A||_F = [sum_{i,j} abs(a_{i,j})^2]^(1/2)

    References
    [R32]

    Examples
    >>> from numpy import linalg as LA
    >>> a = np.arange(9) - 4
    >>> a
    array([-4, -3, -2, -1,  0,  1,  2,  3,  4])
    >>> b = a.reshape((3, 3))
    >>> b
    array([[-4, -3, -2],
           [-1,  0,  1],
           [ 2,  3,  4]])
    >>> LA.norm(a)
    7.745966692414834
    >>> LA.norm(b)
    7.745966692414834
    >>> LA.norm(b, 'fro')
    7.745966692414834
    >>> LA.norm(a, np.inf)
    4
    >>> LA.norm(b, np.inf)
    9
    >>> LA.norm(a, -np.inf)
    0
    >>> LA.norm(b, -np.inf)
    2
    >>> LA.norm(a, 1)
    20
    >>> LA.norm(b, 1)
    7
    >>> LA.norm(a, -1)
    -4.6566128774142013e-010
    >>> LA.norm(b, -1)
    6
    >>> LA.norm(a, 2)
    7.745966692414834
    >>> LA.norm(b, 2)
    7.3484692283495345
    >>> LA.norm(a, -2)
    nan
    >>> LA.norm(b, -2)
    1.8570331885190563e-016
    >>> LA.norm(a, 3)
    5.8480354764257312
    >>> LA.norm(a, -3)
    nan

scipy.linalg.lstsq(a, b, cond=None, overwrite_a=False, overwrite_b=False)
    Compute least-squares solution to equation Ax = b.

    Compute a vector x such that the 2-norm |b - A x| is minimized.

    Parameters
    a : array, shape (M, N)
        Left hand side matrix (2-D array).
    b : array, shape (M,) or (M, K)
        Right hand side matrix or vector (1-D or 2-D array).
    cond : float, optional
        Cutoff for 'small' singular values; used to determine effective
        rank of a. Singular values smaller than
        cond * largest_singular_value are considered zero.
    overwrite_a : bool, optional
        Discard data in a (may enhance performance). Default is False.
    overwrite_b : bool, optional
        Discard data in b (may enhance performance). Default is False.

    Returns
    x : array, shape (N,) or (N, K) depending on shape of b
        Least-squares solution.
    residues : ndarray, shape () or (1,) or (K,)
        Sums of residues, squared 2-norm for each column in b - a x.
        If rank of matrix a is < N or > M this is an empty array. If b
        was 1-D, this is an (1,) shape array, otherwise the shape is
        (K,).
    rank : int
        Effective rank of matrix a.
    s : array, shape (min(M,N),)
        Singular values of a. The condition number of a is
        abs(s[0]/s[-1]).

    Raises
    LinAlgError :
        If computation does not converge.

    See Also
    optimize.nnls : linear least squares with non-negativity constraint
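A small sketch of lstsq fitting a straight line y = m*t + c (the data points, generated without noise, are illustrative; with exact data the recovered coefficients match the generating slope and intercept):

```python
import numpy as np
from scipy import linalg

# Exact data on the line y = 2t + 1.
t = np.array([0.0, 1.0, 2.0, 3.0])
y = 2.0 * t + 1.0

# Design matrix with columns [t, 1]; lstsq minimizes |y - A x| in
# the 2-norm and also returns residues, effective rank, and the
# singular values of A.
A = np.vstack([t, np.ones_like(t)]).T
coef, resid, rank, sv = linalg.lstsq(A, y)
```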

scipy.linalg.pinv(a, cond=None, rcond=None)
    Compute the (Moore-Penrose) pseudo-inverse of a matrix.

    Calculate a generalized inverse of a matrix using a least-squares
    solver.

    Parameters
    a : array, shape (M, N)
        Matrix to be pseudo-inverted
    cond, rcond : float, optional
        Cutoff for 'small' singular values in the least-squares solver.
        Singular values smaller than rcond*largest_singular_value are
        considered zero. If None or -1, suitable machine precision is
        used.

    Returns
    B : array, shape (N, M)

    Raises LinAlgError if computation does not converge

    Examples
    >>> from numpy import *
    >>> a = random.randn(9, 6)
    >>> B = linalg.pinv(a)
    >>> allclose(a, dot(a, dot(B, a)))
    True
    >>> allclose(B, dot(B, dot(a, B)))
    True

scipy.linalg.pinv2(a, cond=None, rcond=None)
    Compute the (Moore-Penrose) pseudo-inverse of a matrix.

    Calculate a generalized inverse of a matrix using its singular-value
    decomposition and including all 'large' singular values.

    Parameters
    a : array, shape (M, N)
        Matrix to be pseudo-inverted
    cond, rcond : float or None
        Cutoff for 'small' singular values. Singular values smaller
        than rcond*largest_singular_value are considered zero. If None
        or -1, suitable machine precision is used.

    Returns
    B : array, shape (N, M)

    Raises LinAlgError if SVD computation does not converge

    Examples
    >>> from numpy import *
    >>> a = random.randn(9, 6)
    >>> B = linalg.pinv2(a)
    >>> allclose(a, dot(a, dot(B, a)))
    True
    >>> allclose(B, dot(B, dot(a, B)))
    True

scipy.linalg.kron(a, b)
    Kronecker product of a and b.

    The result is the block matrix:

        a[0,0]*b    a[0,1]*b   ...  a[0,-1]*b
        a[1,0]*b    a[1,1]*b   ...  a[1,-1]*b
        ...
        a[-1,0]*b   a[-1,1]*b  ...  a[-1,-1]*b

    Parameters
    a : array, shape (M, N)
    b : array, shape (P, Q)

    Returns
    A : array, shape (M*P, N*Q)
        Kronecker product of a and b

    Examples
    >>> from scipy import kron, array
    >>> kron(array([[1, 2], [3, 4]]), array([[1, 1, 1]]))
    array([[1, 1, 1, 2, 2, 2],
           [3, 3, 3, 4, 4, 4]])

scipy.linalg.tril(m, k=0)
    Construct a copy of a matrix with elements above the k-th diagonal
    zeroed.

    Parameters
    m : array
        Matrix whose elements to return
    k : integer
        Diagonal above which to zero elements. k == 0 is the main
        diagonal, k < 0 subdiagonal and k > 0 superdiagonal.

    Returns
    A : array, shape m.shape, dtype m.dtype

    Examples
    >>> from scipy.linalg import tril
    >>> tril([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], -1)
    array([[ 0,  0,  0],
           [ 4,  0,  0],
           [ 7,  8,  0],
           [10, 11, 12]])

6].eig(a. [ 0.i] where .]) eigvalsh(a[. shape (M. lower. k < 0 subdiagonal and k > 0 superdiagonal. Reference .]) eig_banded(a_band[. 5.. shape m. lower. left : boolean Whether to calculate and return left eigenvectors 274 Chapter 4. dtype m.11.9].. overwrite_a. b.8. right. left.i] a.SciPy Reference Guide.linalg import tril >>> triu([[1. -1) array([[ 1. Find eigenvalues w and right or left eigenvectors of a general matrix: a vr[:.i] = w[i]. [ 0.linalg. b.[7. lower. 8. right=True.. . b=None. overwrite_a.6].5.[10.triu(m. overwrite_a=False.i] = w[i] b vr[:. M) Right-hand side matrix in a generalized eigenvalue problem. k == 0 is the main diagonal. M) A complex or real matrix whose eigenvalues and eigenvectors will be computed.0rc1 scipy.]) eigvals_banded(a_band[. 9]. Parameters a : array. Release 0. shape (M. Solve real symmetric or complex hermitian band matrix eigenvalue problem.H is the Hermitean conjugation. left=False.H vl[:.linalg. k=0) Construct a copy of a matrix with elements below the k-th diagonal zeroed. If omitted. Compute eigenvalues from an ordinary or generalized eigenvalue problem. overwrite_b=False) Solve an ordinary or generalized eigenvalue problem of a square matrix. b. overwrite_a]) eigh(a[. identity matrix is assumed. 3]. Parameters m : array Matrix whose elements to return k : integer Diagonal below which to zero elements.dtype Examples >>> from scipy. 0.. b.3]. . scipy..... .]) eigvals(a[.H vl[:.10. 2.. .2. . [ 4.12]].2 Eigenvalue Problems eig(a[.conj() b. Solve an ordinary or generalized eigenvalue problem for a complex Solve an ordinary or generalized eigenvalue problem for a complex Solve real symmetric or complex hermitian band matrix eigenvalue problem.[4.]) Solve an ordinary or generalized eigenvalue problem of a square matrix. eigvals_only. eigvals_only. lower.shape.9. b : array.. 12]]) 4. Returns A : array.

scipy.linalg.eigvals(a, b=None, overwrite_a=False)
    Compute eigenvalues from an ordinary or generalized eigenvalue
    problem.

    Find eigenvalues of a general matrix:

        a vr[:,i] = w[i] b vr[:,i]

    Parameters
    a : array, shape (M, M)
        A complex or real matrix whose eigenvalues and eigenvectors
        will be computed.
    b : array, shape (M, M), optional
        Right-hand side matrix in a generalized eigenvalue problem. If
        omitted, identity matrix is assumed.
    overwrite_a : boolean
        Whether to overwrite data in a (may improve performance)

    Returns
    w : double or complex array, shape (M,)
        The eigenvalues, each repeated according to its multiplicity,
        but not in any specific order.

    Raises LinAlgError if eigenvalue computation does not converge

    See Also
    eigvalsh : eigenvalues of symmetric or Hermitian arrays
    eig : eigenvalues and right eigenvectors of general arrays
    eigh : eigenvalues and eigenvectors of symmetric/Hermitian arrays

linalg. (Default: both are calculated) turbo : boolean Use divide and conquer algorithm (faster but expensive in memory.i] v[i. Reference .i] overwrite_a : boolean Whether to overwrite data in a (may improve performance) overwrite_b : boolean 276 Chapter 4. b=None.conj() b v[:. If omitted. eigvals_only=False. turbo=True. lower : boolean Whether the pertinent array data is taken from the lower or upper triangle of a. M) A complex Hermitian or real symmetric deﬁnite positive matrix in.i] = w[i] v[i.eigh(a. where b is positive deﬁnite: a v[:. shape (M. shape (M. hi) Indexes of the smallest and largest (in ascending order) eigenvalues and corresponding eigenvectors to be returned: 0 <= lo < hi <= M-1.i] = w[i] v[:.:]. Release 0.SciPy Reference Guide.i] type = 2: a b v[:. b : array. M) A complex Hermitian or real symmetric matrix whose eigenvalues and eigenvectors will be computed. overwrite_b=False.i] = w[i] b v[:.conj() a v[:. If omitted.0rc1 See Also: eigvalsh eigenvalues of symmetric or Hemitiean arrays eig eigenvalues and right eigenvectors of general arrays eigh eigenvalues and eigenvectors of symmetric/Hermitean arrays. Find eigenvalues w and optionally eigenvectors v of matrix a. overwrite_a=False. identity matrix is assumed. (Default: lower) eigvals_only : boolean Whether to calculate only eigenvalues and no eigenvectors.:]. eigvals=None. scipy.i] = w[i] b v[:. type: integer : Speciﬁes the problem type to be solved: type = 1: a v[:. all eigenvalues and eigenvectors are returned.i] = w[i] v[:. lower=True. type=1) Solve an ordinary or generalized eigenvalue problem for a complex Hermitian or real symmetric matrix.10.i] = 1 Parameters a : array.i] type = 3: b a v[:. only for generalized eigenvalue problem and if eigvals=None) eigvals : tuple (lo.

scipy.linalg.eigvalsh(a, b=None, lower=True, overwrite_a=False,
overwrite_b=False, turbo=True, eigvals=None, type=1)
    Solve an ordinary or generalized eigenvalue problem for a complex
    Hermitian or real symmetric matrix.

    Find eigenvalues w of matrix a, where b is positive definite:

        a v[:,i] = w[i] b v[:,i]
        v[i,:].conj() a v[:,i] = w[i]
        v[i,:].conj() b v[:,i] = 1

    Parameters
    a : array, shape (M, M)
        A complex Hermitian or real symmetric matrix whose eigenvalues
        and eigenvectors will be computed.
    b : array, shape (M, M), optional
        A complex Hermitian or real symmetric definite positive matrix.
        If omitted, identity matrix is assumed.
    lower : boolean
        Whether the pertinent array data is taken from the lower or
        upper triangle of a. (Default: lower)
    turbo : boolean
        Use divide and conquer algorithm (faster but expensive in
        memory, only for generalized eigenvalue problem and if
        eigvals=None)
    eigvals : tuple (lo, hi)
        Indexes of the smallest and largest (in ascending order)
        eigenvalues and corresponding eigenvectors to be returned:
        0 <= lo < hi <= M-1. If omitted, all eigenvalues and
        eigenvectors are returned.
    type : integer
        Specifies the problem type to be solved:

        type = 1: a   v[:,i] = w[i] b v[:,i]
        type = 2: a b v[:,i] = w[i]   v[:,i]
        type = 3: b a v[:,i] = w[i]   v[:,i]
    overwrite_a : boolean
        Whether to overwrite data in a (may improve performance)
    overwrite_b : boolean
        Whether to overwrite data in b (may improve performance)

    Returns
    w : real array, shape (N,)
        The N (1<=N<=M) selected eigenvalues, in ascending order, each
        repeated according to its multiplicity.

    Raises LinAlgError if eigenvalue computation does not converge, an
    error occurred, or b matrix is not definite positive. Note that if
    input matrices are not symmetric or hermitian, no error is reported
    but results will be wrong.

    See Also
    eigvals : eigenvalues of general arrays
    eigh : eigenvalues and right eigenvectors for symmetric/Hermitian
           arrays
    eig : eigenvalues and right eigenvectors for non-symmetric arrays

    type : integer
        Specifies the problem type to be solved:
            type = 1: a   v[:,i] = w[i] b v[:,i]
            type = 2: a b v[:,i] = w[i]   v[:,i]
            type = 3: b a v[:,i] = w[i]   v[:,i]
    overwrite_a : boolean
        Whether to overwrite data in a (may improve performance)
    overwrite_b : boolean
        Whether to overwrite data in b (may improve performance)

    Returns
    w : real array, shape (N,)
        The N (1 <= N <= M) selected eigenvalues, in ascending order, each repeated according to its multiplicity.

    Raises LinAlgError if eigenvalue computation does not converge, an error occurred, or the b matrix is not positive definite. Note that if the input matrices are not symmetric or Hermitian, no error is reported, but the results will be wrong.

    See Also:
    eigvals
        eigenvalues of general arrays
    eigh
        eigenvalues and right eigenvectors for symmetric/Hermitian arrays
    eig
        eigenvalues and right eigenvectors for non-symmetric arrays

scipy.linalg.eig_banded(a_band, lower=False, eigvals_only=False, overwrite_a_band=False, select='a', select_range=None, max_ev=0)
    Solve a real symmetric or complex Hermitian band matrix eigenvalue problem.

    Find eigenvalues w and optionally right eigenvectors v of a:

        a v[:,i] = w[i] v[:,i]
        v.H v    = identity

    The matrix a is stored in ab either in lower diagonal or upper diagonal ordered form:

        ab[u + i - j, j] == a[i, j]   (if upper form; i <= j)
        ab[i - j, j]     == a[i, j]   (if lower form; i >= j)

    Example of ab (shape of a is (6,6), u=2):

        upper form:
        *   *   a02 a13 a24 a35
        *   a01 a12 a23 a34 a45
        a00 a11 a22 a33 a44 a55

        lower form:

        a00 a11 a22 a33 a44 a55
        a10 a21 a32 a43 a54 *
        a20 a31 a42 a53 *   *

    Cells marked with * are not used.

    Parameters
    a_band : array, shape (u+1, M)
        Banded matrix whose eigenvalues to calculate
    lower : boolean
        Is the matrix in the lower form. (Default is upper form)
    eigvals_only : boolean
        Compute only the eigenvalues and no eigenvectors. (Default: calculate also eigenvectors)
    overwrite_a_band : boolean
        Discard data in a_band (may enhance performance)
    select : {'a', 'v', 'i'}
        Which eigenvalues to calculate:
            'a'   All eigenvalues
            'v'   Eigenvalues in the interval (min, max]
            'i'   Eigenvalues with indices min <= i <= max
    select_range : (min, max)
        Range of selected eigenvalues
    max_ev : integer
        For select == 'v', the maximum number of eigenvalues expected. For other values of select, it has no meaning. In doubt, leave this parameter untouched.

    Returns
    w : array, shape (M,)
        The eigenvalues, in ascending order, each repeated according to its multiplicity.
    v : double or complex double array, shape (M, M)
        The normalized eigenvector corresponding to the eigenvalue w[i] is the column v[:,i].

    Raises LinAlgError if eigenvalue computation does not converge.

scipy.linalg.eigvals_banded(a_band, lower=False, overwrite_a_band=False, select='a', select_range=None)
    Solve a real symmetric or complex Hermitian band matrix eigenvalue problem.

    Find eigenvalues w of a:

        a v[:,i] = w[i] v[:,i]
        v.H v    = identity

    The matrix a is stored in ab either in lower diagonal or upper diagonal ordered form:

        ab[u + i - j, j] == a[i, j]   (if upper form; i <= j)
        ab[i - j, j]     == a[i, j]   (if lower form; i >= j)

    Example of ab (shape of a is (6,6), u=2):

        upper form:
        *   *   a02 a13 a24 a35
        *   a01 a12 a23 a34 a45
        a00 a11 a22 a33 a44 a55

        lower form:
        a00 a11 a22 a33 a44 a55
        a10 a21 a32 a43 a54 *
        a20 a31 a42 a53 *   *

    Cells marked with * are not used.

    Parameters
    a_band : array, shape (u+1, M)
        Banded matrix whose eigenvalues to calculate
    lower : boolean
        Is the matrix in the lower form. (Default is upper form)
    overwrite_a_band : boolean
        Discard data in a_band (may enhance performance)
    select : {'a', 'v', 'i'}
        Which eigenvalues to calculate:
            'a'   All eigenvalues
            'v'   Eigenvalues in the interval (min, max]
            'i'   Eigenvalues with indices min <= i <= max
    select_range : (min, max)
        Range of selected eigenvalues

    Returns
    w : array, shape (M,)
        The eigenvalues, in ascending order, each repeated according to its multiplicity.

    Raises LinAlgError if eigenvalue computation does not converge.

    See Also:
    eig_banded
        eigenvalues and right eigenvectors for symmetric/Hermitian band matrices
    eigvals
        eigenvalues of general arrays
    eigh
        eigenvalues and right eigenvectors for symmetric/Hermitian arrays
    eig
        eigenvalues and right eigenvectors for non-symmetric arrays
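A small hedged sketch of the banded storage described above, using an arbitrary symmetric tridiagonal matrix (u = 1) and checking eig_banded against the dense solver:

```python
import numpy as np
from scipy.linalg import eig_banded, eigh

# Upper diagonal ordered form for a 4x4 tridiagonal matrix (u = 1):
# row 0 holds the superdiagonal shifted right (its first cell is unused),
# row 1 holds the main diagonal.
ab = np.array([[0., 1., 1., 1.],
               [5., 5., 5., 5.]])
w, v = eig_banded(ab)

# Rebuild the dense matrix and compare with the dense symmetric solver.
a = np.diag([5.] * 4) + np.diag([1.] * 3, 1) + np.diag([1.] * 3, -1)
assert np.allclose(w, eigh(a)[0])
assert np.allclose(a.dot(v), v * w)
```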

4.9.3 Decompositions

lu(a[, permute_l, overwrite_a])                   Compute pivoted LU decomposition of a matrix.
lu_factor(a[, overwrite_a])                       Compute pivoted LU decomposition of a matrix.
lu_solve(lu_and_piv, b[, trans, overwrite_b])     Solve an equation system, a x = b, given the LU factorization of a.
svd(a[, full_matrices, compute_uv, overwrite_a])  Singular Value Decomposition.
svdvals(a[, overwrite_a])                         Compute singular values of a matrix.
diagsvd(s, M, N)                                  Construct the sigma matrix in SVD from singular values and size M, N.
orth(A)                                           Construct an orthonormal basis for the range of A using SVD.
cholesky(a[, lower, overwrite_a])                 Compute the Cholesky decomposition of a matrix.
cholesky_banded(ab[, overwrite_ab, lower])        Cholesky decompose a banded Hermitian positive-definite matrix.
cho_factor(a[, lower, overwrite_a])               Compute the Cholesky decomposition of a matrix, to use in cho_solve.
cho_solve(c_and_lower, b[, overwrite_b])          Solve the linear equations A x = b, given the Cholesky factorization of A.
cho_solve_banded(cb_and_lower, b[, overwrite_b])  Solve the linear equations A x = b, given the Cholesky factorization of a banded A.
qr(a[, overwrite_a, lwork, mode, pivoting])       Compute QR decomposition of a matrix.
schur(a[, output, lwork, overwrite_a, sort])      Compute Schur decomposition of a matrix.
rsf2csf(T, Z)                                     Convert real Schur form to complex Schur form.
hessenberg(a[, calc_q, overwrite_a])              Compute Hessenberg form of a matrix.

scipy.linalg.lu(a, permute_l=False, overwrite_a=False)
    Compute pivoted LU decomposition of a matrix.

    The decomposition is:

        A = P L U

    where P is a permutation matrix, L lower triangular with unit diagonal elements, and U upper triangular.

    Parameters
    a : array, shape (M, N)
        Array to decompose
    permute_l : boolean
        Perform the multiplication P*L (Default: do not permute)
    overwrite_a : boolean
        Whether to overwrite data in a (may improve performance)

    Returns
    (If permute_l == False)
    p : array, shape (M, M)
        Permutation matrix
    l : array, shape (M, K)
        Lower triangular or trapezoidal matrix with unit diagonal, K = min(M, N)
    u : array, shape (K, N)
        Upper triangular or trapezoidal matrix
pivoting]) schur(a[. trans. overwrite_a. Parameters a : array. sort]) rsf2csf(T.lu(a. N) u : array. full_matrices. and U upper triangular. Z) hessenberg(a[. b[.

piv : array. and L in its lower triangle. overwrite_a=False) Compute pivoted LU decomposition of a matrix. scipy. N) Upper triangular or trapezoidal matrix Notes This is a LU factorization routine written for Scipy. Reference . M) Matrix to decompose overwrite_a : boolean Whether to overwrite data in A (may increase performance) Returns lu : array. 1. 2} 282 Chapter 4. Parameters a : array. The decomposition is: A = P L U where P is a permutation matrix. piv). The unit diagonal elements of L are not stored. K = min(M. shape (M. N) Matrix containing U in its upper triangle.linalg. shape (N. b.SciPy Reference Guide.) Pivot indices representing the permutation matrix P: row i of matrix was interchanged with row piv[i]. overwrite_b=False) Solve an equation system. trans=0. shape (K. and U upper triangular. scipy.0rc1 (If permute_l == True) : pl : array. as given by lu_factor b : array Right-hand side trans : {0.lu_solve((lu. L lower triangular with unit diagonal elements. given the LU factorization of a Parameters (lu. piv) : Factorization of the coefﬁcient matrix a. See Also: lu_solve solve an equation system using the LU factorization of a matrix Notes This is a wrapper to the *GETRF routines from LAPACK.lu_factor(a. shape (M. Release 0. K) Permuted L matrix. N) u : array.10. a x = b. shape (N.linalg.

Release 0.N) or (K.M) or (M.K). Vh are shaped (M. full_matrices=True. N) Vh: array. (N. the shapes are (M. sorted so that s[i] >= s[i+1].) : The singular values. N) Matrix to decompose full_matrices : boolean If true. only s is returned. overwrite_a=False) Singular Value Decomposition.linalg) 283 . compute_uv=True.svd(a.N) depending on full_matrices : For compute_uv = False. U. Linear algebra (scipy. K = min(M. given the vector s system ax=b a^T x = b a^H x = b 4. shape (M. : Raises LinAlgError if SVD computation does not converge : See Also: svdvals return singular values of a matrix diagsvd return the Sigma matrix.K) depending on full_matrices : s: array. shape (K.SciPy Reference Guide. shape (N.9. Factorizes the matrix a into two unitary matrices U and Vh and an 1d-array s of singular values (real.10. (K. Vh in addition to s (Default: true) overwrite_a : boolean Whether data in a is overwritten (may improve performance) Returns U: array.N) If false.M).linalg. shape (M.N) compute_uv : boolean Whether to compute also U.N) where K = min(M.0rc1 Type of system to solve: trans 0 1 2 Returns x : array Solution to the system See Also: lu_factor LU factorize a matrix scipy. nonnegative) such that a == U S Vh if S is an suitably shaped matrix of zeros whose main diagonal is s. Parameters a : array.

compute_uv=False) >>> allclose(s. 6) + 1j*random.shape ((9.randn(9. N) Construct the sigma matrix in SVD from singular values and size M.randn(9.svdvals(a. (6. N) Matrix to decompose overwrite_a : boolean Whether data in a is overwritten (may improve performance) Returns s: array.svd(a.10. Parameters s : array. Vh.) : The singular values. 6). given the vector s scipy.N. M.svd(a. Reference .SciPy Reference Guide. dot(U. allclose. linalg.shape ((9. 9). 6) >>> allclose(a. N) Raises LinAlgError if SVD computation does not converge : See Also: svd return the full singular value decomposition of a matrix diagsvd return the Sigma matrix. shape (K. shape (M. 6. dot >>> a = random. Vh = linalg. full_matrices=False) >>> U. Vh = linalg. shape (M. sorted so that s[i] >= s[i+1]. (6. Release 0. 6) >>> U. Vh.) Singular values M : integer N : integer Size of the matrix whose singular values are s Returns S : array.diagsvd(s. s2) True scipy. Vh))) True >>> s2 = linalg. 6).linalg. Parameters a : array. K = min(M.shape.shape.svd(a) >>> U. (6. s.) or (N. s. s.)) >>> S = linalg. s.linalg. dot(S.shape. (6. 6).shape.diagsvd(s. shape (M. overwrite_a=False) Compute singular values of a matrix.0rc1 Examples >>> from scipy import random. N) The S-matrix in the singular value decomposition 284 Chapter 4.)) >>> U.

9. :lm:‘A = L L^*‘ or :lm:‘A = U^* U‘ of a Hermitian positive-deﬁnite matrix :lm:‘A‘. i >= j) 4.j.+0. j] == a[i. Returns the Cholesky decomposition. [ 0.linalg. M) Matrix to be decomposed lower : boolean Whether to compute the upper or lower triangular Cholesky factorization (Default: upper-triangular) overwrite_a : boolean Whether to overwrite data in a (may improve performance) Returns c : array. Release 0. Parameters a : array. i <= j) ab[ i . shape (M.j].linalg. N) Returns Q : array. L. [ 0.-2j]. j] == a[i. M) Upper.[2j.orth(A) Construct an orthonormal basis for the range of A using SVD Parameters A : array.SciPy Reference Guide. overwrite_a=False) Compute the Cholesky decomposition of a matrix.+0. shape (M. K = effective rank of A.cholesky_banded(ab.j.j] (if lower form.+2. shape (M. 5.linalg. lower=True) >>> L array([[ 1.cholesky(a.+2. Linear algebra (scipy.j]]) >>> dot(L.conj()) array([[ 1.+0. lower=False.j.-2. dot >>> a = array([[1. K) Orthonormal basis for the range of A. 0.j. shape (M.j]]) scipy.or lower-triangular Cholesky factor of A Raises LinAlgError if decomposition fails : Examples >>> from scipy import array. lower=False) Cholesky decompose a banded Hermitian positive-deﬁnite matrix The matrix a is stored in ab either in lower diagonal or upper diagonal ordered form: ab[u + i .0rc1 scipy.j] (if upper form. 0.+0.10. 1.j.j].j. overwrite_ab=False.+0.linalg) 285 . as determined by automatic cutoff See Also: svd Singular value decomposition of a matrix scipy.T.5]]) >>> L = linalg.cholesky(a. linalg.

shape (M. Reference . A = L L* or A = U* U of a Hermitian positivedeﬁnite matrix a. shape (M. overwrite_a=False) Compute the Cholesky decomposition of a matrix. Release 0. to use in cho_solve Returns a matrix containing the Cholesky decomposition. (Default is upper form) Returns c : array. use the function cholesky instead. shape (u + 1. M) Matrix to be decomposed lower : boolean Whether to compute the upper or lower triangular Cholesky factorization (Default: upper-triangular) overwrite_a : boolean Whether to overwrite data in a (may improve performance) Returns c : array.10.SciPy Reference Guide. If you need to zero these entries.cho_factor(a. M) Cholesky factorization of a. M) Matrix whose upper or lower triangle contains the Cholesky factor of a. The return value can be directly used as the ﬁrst parameter to cho_solve.0rc1 Example of ab (shape of a is (6.6). in the same banded format as ab scipy. lower : boolean Flag indicating whether the factor is in the lower or upper triangle 286 Chapter 4. Warning: The returned matrix also contains random data in the entries not used by the Cholesky decomposition. u=2): upper form: a02 a13 a24 a35 * * a01 a12 a23 a34 a45 * a00 a11 a22 a33 a44 a55 lower form: a00 a11 a22 a33 a44 a55 a10 a21 a32 a43 a54 * a20 a31 a42 a53 * * Parameters ab : array. Other parts of the matrix contain random data. Parameters a : array. M) Banded matrix overwrite_ab : boolean Discard data in ab (may enhance performance) lower : boolean Is the matrix in the lower form.linalg. lower=False. shape (u+1.

Parameters (c. as given by cholesky_banded. lower must be the same value that was given to cholesky_banded.linalg. lower) : tuple. b. as given by cho_factor b : array Right-hand side Returns x : array The solution to the system A x = b See Also: cho_factor Cholesky factorization of a matrix scipy.linalg) 287 . Returns x : array The solution to the system A x = b See Also: cholesky_banded Cholesky factorization of a banded matrix 4. b : array Right-hand side overwrite_b : bool If True.10. (array. overwrite_b=False) Solve the linear equations A x = b. given the Cholesky factorization of A. overwrite_b=False) Solve the linear equations A x = b. Release 0.cho_solve_banded((cb. b. bool) Cholesky factorization of a. scipy.9. the function will overwrite the values in b. (array. given the Cholesky factorization of A. Parameters (cb. lower). lower). Linear algebra (scipy. See Also: cho_solve Solve a linear set equations using the Cholesky factorization of a matrix.cho_solve((c. lower) : tuple. bool) cb is the Cholesky factorization of A.linalg.0rc1 Raises LinAlgError : Raised if decomposition fails.SciPy Reference Guide.

If None or -1. an optimal size is computed.N). r)) True 288 Chapter 4. linalg.qr(a) >>> allclose(a.shape[1]. allclose >>> a = random. N) instead of (M. diag. P : integer ndarray Of shape (N. pivoting=False) Compute QR decomposition of a matrix. see Notes). and zgeqp3.SciPy Reference Guide. all. ‘economic’} Determines what information is to be returned: either both Q and R (‘full’.8. K) for mode=’economic’.qr(a. shape (M. or (K. M).randn(9. Calculate the decomposition :lm:‘A = Q R‘ where Q is unitary/orthogonal and R upper triangular. If pivoting. Parameters a : array.0. 6) >>> q. N) Matrix to be decomposed overwrite_a : bool.0rc1 Notes New in version 0. scipy. zgeqrf. overwrite_a=False. Returns Q : double or complex ndarray Of shape (M. lwork >= a. Examples >>> from scipy import random. N). Raises LinAlgError : Raised if decomposition fails Notes This is an interface to the LAPACK routines dgeqrf. the shapes of Q and R are (M. with K=min(M. optional Work array size. Release 0. but where P is chosen such that the diagonal of R is non-increasing. If mode=economic. or (M. only R (‘r’) or both Q and R but computed in economy-size (‘economic’. lwork=None. Not returned if mode=’r’. N) for mode=’economic’. optional Whether data in a is overwritten (may improve performance) lwork : int. mode : {‘full’. zungqr.M) and (M.) for pivoting=True. N). K) and (K.10. R : double or complex ndarray Of shape (M. K = min(M. dgeqp3. dot.N). default). Not returned if pivoting=False. r = linalg. Reference . optional Whether or not factorization should include pivoting for rank-revealing qr decomposition. dorgqr. dot(q. mode=’full’. pivoting : bool.linalg. ‘r’. compute the decomposition :lm:‘A P = Q R‘ as above.

mode=’r’) >>> allclose(r. output=’real’.shape ((9. 9).SciPy Reference Guide.0) Defaults to None (no sorting). 9). mode=’economic’) >>> q3.shape.shape ((9. dot(q4. 6). overwrite_a=False. lwork=None. lwork : integer Work array size. 6)) >>> r2 = linalg. M) Matrix to decompose output : {‘real’.shape ((9. given a eigenvalue. Release 0. (6. pivoting=True) >>> q5. r3.real < 0. 6). 4.9.shape. (9. r5.shape ((9.linalg.linalg) 289 .10. r2) True >>> q3.qr(a. r4. ‘ouc’} Speciﬁes whether the upper eigenvalues should be sorted.0) ‘iuc’ Inside the unit circle (x*x. r4)) True >>> q4. (6.schur(a.qr(a. p4. r.shape. 6)) >>> q4. If None or -1. In the quasi-triangular form. p5 = linalg. returns a boolean denoting whether the eigenvalue should be sorted to the top-left (True). (9. shape (M. overwrite_a : boolean Whether to overwrite data in a (may improve performance) sort : {None. p4 = linalg.qr(a. sort=None) Compute Schur decomposition of a matrix. p5. quasi-upper triangular. r5.)) >>> q5. 6). p4]. callable.real > 0. pivoting=True) >>> d = abs(diag(r4)) >>> all(d[1:] <= d[:-1]) True >>> allclose(a[:.qr(a. (6.shape. ‘lhp’. ‘iuc’.0rc1 >>> q. Parameters a : array.shape. Linear algebra (scipy. it is automatically computed.0) ‘rhp’ Right-hand plane (x. ‘rhp’. 6).conjugate() <= 1. (6.)) scipy. ‘complex’} Construct the real or complex Schur decomposition (for real matrices). mode=’economic’.0) ‘ouc’ Outside the unit circle (x*x. 2x2 blocks describing complex-valued eigenvalue pairs may extrude from the diagonal. The Schur decomposition is A = Z T Z^H where Z is unitary and T is either upper-triangular. string parameters may be used: ‘lhp’ Left-hand plane (x. Alternatively.shape. r4. r3 = linalg. A callable may be passed that. or for real Schur decomposition (output=’real’).conjugate() > 1.

roundoff errors caused the leading eigenvalues to no longer satisfy the sorting condition See Also: rsf2csf Convert real Schur form to complex Schur form scipy.10. the eigenvalues could not be reordered due to a failure to separate eigenvalues. M) Schur form of A. If eigenvalue sorting was requested. It is real-valued for the real Schur decomposition. It is real-valued for the real Schur decomposition.rsf2csf(T. M) Schur transformation matrix corresponding to the complex form See Also: schur Schur decompose a matrix 290 Chapter 4. shape (M. shape (M. M) An unitary Schur transformation matrix for A. shape (M. Z : array.linalg. M) Real Schur form of the original matrix Z : array. Z) Convert real Schur form to complex Schur form. shape (M. sdim : integer If and only if sorting was requested. Reference . Raises LinAlgError : Error raised under three conditions: 1. If eigenvalue sorting was requested. M) Complex Schur form of the original matrix Z : array. Parameters T : array. shape (M. The algorithm failed due to a failure of the QR algorithm to compute all eigenvalues 2. usually because of poor conditioning 3.0rc1 Returns T : array. shape (M. Release 0. Convert a quasi-diagonal real-valued Schur form to the upper triangular complex-valued Schur form.SciPy Reference Guide. a third return value will contain the number of eigenvalues satisfying the sort condition. M) Schur transformation matrix Returns T : array.

Evaluate a matrix function speciﬁed by a callable.M) Matrix to be exponentiated q : integer Order of the Pade approximation Returns expA : array.linalg. Compute the matrix sine. disp]) cosm(A) sinm(A) tanm(A) coshm(A) sinhm(A) tanhm(A) signm(a[. Compute the matrix exponential using Taylor series. shape (M. shape (M. q=7) Compute the matrix exponential using Pade approximation. The Hessenberg decomposition is A = Q H Q^H where Q is unitary/orthogonal and H has only zero elements below the ﬁrst subdiagonal.10.9. Compute the hyperbolic matrix cosine.M) Matrix to bring into Hessenberg form calc_q : boolean Whether to compute the transformation matrix overwrite_a : boolean Whether to ovewrite data in a (may improve performance) Returns H : array. Compute matrix logarithm. q]) expm2(A) expm3(A[. Matrix square root.SciPy Reference Guide. Release 0. scipy. shape(M.M) 4.t. shape (M.linalg) 291 . Compute the matrix tangent.0rc1 scipy.M) Unitary/orthogonal similarity transformation matrix s.linalg. q]) logm(A[.9. Parameters A : array. calc_q=False. overwrite_a=False) Compute Hessenberg form of a matrix. Parameters a : array.expm(A. Linear algebra (scipy. Compute the hyperbolic matrix tangent. disp]) Compute the matrix exponential using Pade approximation.M) Hessenberg form of A (If calc_q == True) : Q : array. disp]) sqrtm(A[.4 Matrix Functions expm(A[. disp]) funm(A.hessenberg(a. Compute the hyperbolic matrix sine. Compute the matrix cosine. func[. Matrix sign function. Compute the matrix exponential using eigenvalue decomposition. shape(M. A = Q H Q^H 4.

0rc1 Matrix exponential of A scipy.M) Matrix to be exponentiated q : integer Order of the Taylor series Returns expA : array.M) Matrix exponential of A scipy. The matrix logarithm is the inverse of expm: expm(logm(A)) == A Parameters A : array. shape(M.expm3(A.M) Matrix to be exponentiated Returns expA : array. Parameters A : array. shape(M. (Default: True) Returns logA : array.logm(A.10.M) Matrix logarithm of A (if disp == False) : errest : ﬂoat 1-norm of the estimated error. Release 0. shape(M. shape(M. shape(M. shape(M.SciPy Reference Guide.M) 292 Chapter 4. disp=True) Compute matrix logarithm. shape(M.cosm(A) Compute the matrix cosine.M) Matrix exponential of A scipy. Parameters A : array.expm2(A) Compute the matrix exponential using eigenvalue decomposition.linalg.linalg. q=20) Compute the matrix exponential using Taylor series. ||err||_1 / ||A||_1 scipy.M) Matrix whose logarithm to evaluate disp : boolean Print warning if error in the result is estimated large instead of returning estimated error.linalg. This routine uses expm to compute the matrix exponentials. Reference .linalg. Parameters A : array.

sinm(A) Compute the matrix sine. shape(M. This routine uses expm to compute the matrix exponentials. This routine uses expm to compute the matrix exponentials. shape(M. Parameters A : array.M) 4.sinhm(A) Compute the hyperbolic matrix sine. Parameters A : array.M) Returns sinhA : array.M) Returns tanA : array.linalg.M) Hyperbolic matrix cosine of A scipy.0rc1 Returns cosA : array. Linear algebra (scipy. shape(M.M) Matrix cosine of A scipy.tanhm(A) Compute the hyperbolic matrix tangent.M) Returns coshA : array.9. shape(M.linalg.linalg.M) Matrix tangent of A scipy. This routine uses expm to compute the matrix exponentials. Release 0. shape(M.M) Hyperbolic matrix sine of A scipy.SciPy Reference Guide.10. This routine uses expm to compute the matrix exponentials. This routine uses expm to compute the matrix exponentials. shape(M.tanm(A) Compute the matrix tangent. Parameters A : array.linalg) 293 . shape(M. Parameters A : array. shape(M.M) Returns sinA : array. Parameters A : array.coshm(A) Compute the hyperbolic matrix cosine. shape(M. shape(M.linalg.M) Matrix cosine of A scipy.linalg.

Parameters A : array. shape(M. [1.linalg import signm. Parameters A : array.M) Matrix at which to evaluate the sign function disp : boolean Print warning if error in the result is estimated large instead of returning estimated error. (Default: True) Returns sgnA : array.1]] >>> eigvals(a) array([ 4. Extension of the scalar sign(x) to matrices. [1.10.SciPy Reference Guide.1]. shape(M.+0. eigvals >>> a = [[1.63667176+0.sqrtm(A. ||err||_F / ||A||_F 294 Chapter 4.+0.2.j.76155718+0.linalg. disp=True) Matrix square root. shape(M.M) Value of the sign function at A (if disp == False) : errest : ﬂoat Frobenius norm of the estimated error. ||err||_1 / ||A||_1 Examples >>> from scipy. Release 0.M) Hyperbolic matrix tangent of A scipy.j.+0. 1.0rc1 Returns tanhA : array.j]) scipy.12488542+0. shape(M. -0.1.3]. (Default: True) Returns sgnA : array. 1.linalg. disp=True) Matrix sign function. Reference .M) Matrix whose square root to evaluate disp : boolean Print warning if error in the result is estimated large instead of returning estimated error. 0.2.j.j.j]) >>> eigvals(signm(a)) array([-1.M) Value of the sign function at A (if disp == False) : errest : ﬂoat 1-norm of the estimated error.signm(a. shape(M.

k. shape(M.funm(A. Linear algebra (scipy.SciPy Reference Guide. C. B.9. func. 0. Must be vectorized (eg.9. Construct a Hadamard matrix. scipy. up to 2-D 4. [0.M) Matrix at which to evaluate the function func : callable Callable object that evaluates a scalar function f. Construct a Hankel matrix. Release 0. M. exact]) leslie(f. Create a companion matrix. [0. shape(M. dtype]) hankel(c[.linalg. Construct a circulant matrix.M) Value of the matrix function speciﬁed by func evaluated at A (if disp == False) : errest : ﬂoat 1-norm of the estimated error.linalg) 295 . 0]. Construct (N.. Create a Hilbert matrix of order n. disp : boolean Print warning if error in the result is estimated large instead of returning estimated error. The function f is an extension of the scalar-valued function func to matrices. Parameters A : array.linalg. Given the inputs A.10. Higham scipy. . r]) hilbert(n) invhilbert(n[. Returns the value of matrix-valued function f at A. ||err||_1 / ||A||_1 4.5 Special Matrices block_diag(*arrs) circulant(c) companion(a) hadamard(n[. 0. s) toeplitz(c[. : array_like. M) matrix ﬁlled with ones at and below the k-th diagonal.. Create a Leslie matrix. using vectorize). Compute the inverse of the Hilbert matrix of order n. r]) tri(N[. the output will have these arrays arranged on the diagonal: [[A. Construct a Toeplitz matrix. B and C. disp=True) Evaluate a matrix function speciﬁed by a callable. 0]. (Default: True) Returns fA : array. B.block_diag(*arrs) Create a block diagonal matrix from provided arrays. C]] Parameters A. dtype]) Create a block diagonal matrix from provided arrays.0rc1 Notes Uses algorithm by Nicholas J.

0. . 0. D has the same dtype as A.. 7. [[4..0... 1]] >>> B = [[3.]. shape (len(c).10.linalg. [ 0.. See Also: toeplitz Toeplitz matrix hankel Hankel matrix Notes New in version 0. B. C.circulant(c) Construct a circulant matrix.8. 0]. 6. 2]..linalg import circulant >>> circulant([1. 0... 4.. 5.. Parameters c : array_like 1-D array. 2. 3. 3]. [6. len(c)) A circulant matrix whose ﬁrst column is c... 0. B. Returns D : ndarray Array with A.]]) scipy. 5]..]. 8]] >>> C = [[7]] >>> block_diag(A. Returns A : array.. [6. Release 0. [2. 296 Chapter 4.0. 3.. .]. 5]. A 1-D array or array_like sequence of length n‘is treated as a 2-D array with shape ‘‘(1. 0. C) [[1 0 0 0 0 0] [0 1 0 0 0 0] [0 0 3 4 5 0] [0 0 6 7 8 0] [0 0 0 0 0 7]] >>> block_diag(1. Notes If all the input arrays are square. on the diagonal. Examples >>> from scipy.0rc1 Input arrays. Examples >>> A = [[1. 0.. 4. [ 0...SciPy Reference Guide.. [ 0. 0. 2. array([[ 1. . [0. 0.. the output is known as a block diagonal matrix. Reference .. 0. 7]]) 0.n)‘. the ﬁrst column of the matrix. 3]) array([[1.. 7.

linalg.hadamard(n. The ﬁrst row of c is -a[1:]/a[0].]. [ 1. n must be a power of 2. Create the companion matrix [R29] associated with the polynomial whose coefﬁcients are given in a. -31. 0. Parameters n : int The order of the matrix.linalg) 297 . 0. 4..0. The data-type of the array is the same as the data-type of 1.SciPy Reference Guide.companion(a) Create a companion matrix.0rc1 [2.. hadamard(n) constructs an n-by-n Hadamard matrix. -10. 30. Parameters a : array_like 1-D array of polynomial coefﬁcients. dtype : numpy dtype The data type of the array to be constructed. [ 0. n must be a power of 2.linalg.linalg import companion >>> companion([1.. 0. and the ﬁrst sub-diagonal is all ones. Notes New in version 0. where n is the length of a.]]) scipy.0*a[0]. Returns c : ndarray A square array of shape (n-1. Linear algebra (scipy.9. Release 0. 1]]) scipy.size < 2.8. 2. 1.. 3]. n) The Hadamard matrix. 31. [3. The length of a must be at least two. -30]) array([[ 10. n-1). References [R29] Examples >>> from scipy. b) a. Returns H : ndarray with shape (n. Raises ValueError : If any of the following are true: a) a.10.ndim != 1. c) a[0] == 0. dtype=<type ‘int’>) Construct a Hadamard matrix. 1.. using Sylvester’s construction.].. and a[0] must not be zero.

-0. [4. 7.3. 1. -1]. 7.7. r=None) Construct a Hankel matrix. [ 1. Returns A : array. Whatever the actual shape of c.linalg import hankel >>> hankel([1.0rc1 Notes New in version 0. it will be converted to a 1-D array.10. If None. -1. 99]. -1. See Also: toeplitz Toeplitz matrix circulant circulant matrix Examples >>> from scipy.hankel(c.j].dtype. The Hankel matrix has constant anti-diagonals. Parameters c : array_like First column of the matrix. -1]. [3.j. 8. 1. [99. 4. 7.hilbert(n) Create a Hilbert matrix of order n. shape (len(c). 7. 1. -1.SciPy Reference Guide. len(r)) The Hankel matrix.j. 0]. [4.8. r : array_like. r[1:]]. If r is not given.7.8. 17. [ 1. 4. 7]. 1D Last row of the matrix.j]]) >>> hadamard(4) array([[ 1.+0. -1. 3. 99. [ 1. Whatever the actual shape of r.9]) array([[1. 298 Chapter 4.0. [ 1. Examples >>> hadamard(2. dtype=complex) array([[ 1.linalg. [17. Reference . 0]]) >>> hankel([1.+0. Dtype is the same as (c[0] + r[0]). 1. r[0] is ignored. 9]]) scipy. the last row of the returned matrix is [c[-1]. 2. then r = zeros_like(c) is assumed. 7]. 8].2. 7. it will be converted to a 1-D array. 4. 1].+0. r = zeros_like(c) is assumed. 0. 3. Release 0. 99]) array([[ 1. 17. -1.4]. with c as its ﬁrst column and r as its last row. [2.linalg. 1. 1]]) scipy.

Returns h : ndarray with shape (n. 240.. 1680]. exact=True) array([[ 16.9.linalg. -2700. 1200.. [ -140.. -4200. -4200. -4200]. 6480. 1680. 0. . If True.. Parameters n : int The order of the Hilbert matrix. exact : bool If False. the array is exact integer array. [ 0.invhilbert(n.33333333.ﬂoat64 is exact is False. [ 240.33333333]. 0. the data type of the array that is returned in np. 2800]]. 0. n) The Hilber matrix. -120.7] 4. 2800.. -140.int64. Examples >>> invhilbert(4) array([[ 16. To represent the exact inverse when n > 14. In the latter case. exact=False) Compute the inverse of the Hilbert matrix of order n. and the array is an approximation of the inverse. -2700. Linear algebra (scipy. -2700. [ -120.. 1680. Parameters n : int The size of the array to create. 1680.]..25 ]. -2700. Examples >>> hilbert(3) array([[ 1.j] = 1 / (i + j + 1). 0..10. n) The data type of the array is np.25 .linalg) 299 .]. -140]. Release 0. If exact is True. 240.. For n <= 14.. Notes New in version 0.0rc1 Returns the n by n array with entries h[i..int64 (for n <= 14) or object (for n > 14). 0. 0.10. [ -140.]. [ 0. Notes New in version 0. the exact inverse is returned as an array with data type np.5 . 6480. dtype=int64) >>> invhilbert(16)[7. 1200. the objects in the array will be long integers. the returned array is an object array of long integers.0. Returns invh : ndarray with shape (n.ﬂoat64. -4200.]]) >>> invhilbert(4. -120.SciPy Reference Guide.10.2 ]]) scipy.33333333.0.5 . the data type is either np. [ 240. [ -120..

age-structured population growth [R30] [R31].0rc1 4. exact=True)[7. 0.linalg. 1. 0. . 0. has to be 1-D.leslie(f. if c[0] is real. 0. 0. . the result is a Hermitian matrix. and the ﬁrst sub-diagonal.8.8. The Leslie matrix is used to model discrete-time. return the associated Leslie matrix.0. which is f. . 0. Parameters c : array_like First column of the matrix. .2475099528537506e+19 >>> invhilbert(16. 0. in this case. s : array_like The “survival” coefﬁcients. array([[ 0. r = conjugate(c) is assumed.2. 2. which give the number of offspring per-capita produced by each age class. If None. 0. has to be 1-D. Notes New in version 0. with c as its ﬁrst column and r as its ﬁrst row.8. it will be converted to a 1-D array. ]. and it must be at least 1.1 “survival coefﬁcients”. r == conjugate(c) is assumed.7. [0. s) Create a Leslie matrix. 0.0. The Toeplitz matrix has constant diagonals. r[0] is ignored. [ 0. Whatever the actual shape of c. The data-type of the array will be the data-type of f[0]+s[0].1. If r is not given. [R31] Examples >>> leslie([0. 2.7] 42475099528537378560L scipy. [ 0.linalg. 1. 0. 0. . Given the length n array of fecundity coefﬁcients f and the length n-1 array of survival coefﬁcents s. where n is the length of f. ]. Returns L : ndarray Returns a 2-D ndarray of shape (n.SciPy Reference Guide.1]. two sets of parameters deﬁne a Leslie matrix: the n “fecundity coefﬁcients”. Reference . The length of s must be one less than the length of f. 0. r : array_like First row of the matrix. which is s.2. [ 0. The array is zero except for the ﬁrst row. r=None) Construct a Toeplitz matrix.7]) 0.1. References [R30].1]. . and the n . ]]) scipy. .0. Parameters f : array_like The “fecundity” coefﬁcients. . the ﬁrst row of the 300 Chapter 4. Release 0. which give the per-capita survival rate of each age class.toeplitz(c.10. n). In a population with n age classes.

Whatever the actual shape of r, it will be converted to a 1-D array.

Returns
A : array, shape (len(c), len(r))
    The Toeplitz matrix. Dtype is the same as (c[0] + r[0]).dtype.

See Also
circulant : circulant matrix
hankel : Hankel matrix

Notes
The behavior when c or r is a scalar, or when c is complex and r is None, was
changed in version 0.8.0. The behavior in previous versions was undocumented and
is no longer supported.

Examples
>>> from scipy.linalg import toeplitz
>>> toeplitz([1, 2, 3], [1, 4, 5, 6])
array([[1, 4, 5, 6],
       [2, 1, 4, 5],
       [3, 2, 1, 4]])
>>> toeplitz([1.0, 2+3j, 4-1j])
array([[ 1.+0.j,  2.-3.j,  4.+1.j],
       [ 2.+3.j,  1.+0.j,  2.-3.j],
       [ 4.-1.j,  2.+3.j,  1.+0.j]])

scipy.linalg.tri(N, M=None, k=0, dtype=None)
Construct (N, M) matrix filled with ones at and below the k-th diagonal.

The matrix has A[i, j] == 1 for i <= j + k.

Parameters
N : integer
    The size of the first dimension of the matrix.
M : integer or None
    The size of the second dimension of the matrix. If M is None, M = N is
    assumed.
k : integer
    Number of subdiagonal below which matrix is filled with ones. k = 0 is the
    main diagonal, k < 0 subdiagonal and k > 0 superdiagonal.
dtype : dtype
    Data type of the matrix.

Returns
A : array, shape (N, M)
    Matrix filled with ones at and below the k-th diagonal.
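The constant-diagonal structure can be built directly from the indices: entry (i, j) depends only on i - j, drawing from c on and below the main diagonal and from r above it. A minimal numpy sketch with the same column/row convention as described above (an illustration, not the scipy.linalg source):

```python
import numpy as np

def toeplitz_like(c, r=None):
    """Toeplitz matrix with first column c and first row [c[0], r[1:]];
    r = conjugate(c) when omitted (cf. scipy.linalg.toeplitz)."""
    c = np.asarray(c).ravel()
    r = np.conjugate(c) if r is None else np.asarray(r).ravel()
    i, j = np.indices((len(c), len(r)))
    # vals holds [r[-1], ..., r[1], c[0], c[1], ..., c[-1]] so that the
    # shifted index (i - j) picks r above the diagonal and c on/below it.
    vals = np.concatenate((r[-1:0:-1], c))
    return vals[i - j + len(r) - 1]

A = toeplitz_like([1, 2, 3], [1, 4, 5, 6])
B = toeplitz_like([1.0, 2 + 3j])   # r omitted: Hermitian since c[0] is real
```

With r omitted and c[0] real, B[0, 1] is the conjugate of B[1, 0], matching the Hermitian case documented above.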

Examples
>>> from scipy.linalg import tri
>>> tri(3, 5, 2, dtype=int)
array([[1, 1, 1, 0, 0],
       [1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1]])
>>> tri(3, 5, -1, dtype=int)
array([[0, 0, 0, 0, 0],
       [1, 0, 0, 0, 0],
       [1, 1, 0, 0, 0]])

4.10 Maximum entropy models (scipy.maxentropy)

Warning: This module is deprecated in scipy 0.10, and will be removed in 0.11. Do
not use this module in your new code. For questions about this deprecation, please
ask on the scipy-dev mailing list.

4.10.1 Package content

Models:

model([f, samplespace])                   A maximum-entropy (exponential-form) model on a discrete sample space.
bigmodel()                                A maximum-entropy (exponential-form) model on a large sample space.
basemodel()                               A base class providing generic functionality for both small and large maximum entropy models.
conditionalmodel(F, counts, numcontexts)  A conditional maximum-entropy (exponential-form) model p(x|w) on a discrete sample space.

class scipy.maxentropy.model(f=None, samplespace=None)
A maximum-entropy (exponential-form) model on a discrete sample space.

Methods
beginlogging, clearcache, crossentropy, dual, endlogging, entropydual,
expectations, fit, grad, log, lognormconst, logparams, logpmf, normconst, pmf,
pmf_function, probdist, reset, setcallback, setfeaturesandsamplespace, setparams,
setsmooth

class scipy.maxentropy.bigmodel
A maximum-entropy (exponential-form) model on a large sample space.

The model expectations are not computed exactly (by summing or integrating over a
sample space) but approximately (by Monte Carlo estimation). Approximation is
necessary when the sample space is too large to sum or integrate over in practice,
like a continuous sample space in more than about 4 dimensions or a large discrete
space like all possible sentences in a natural language.

Approximating the expectations by sampling requires an instrumental distribution
that should be close to the model for fast convergence. The tails should be fatter
than the model.

Methods
beginlogging, clearcache, crossentropy, dual, endlogging, entropydual, estimate,
expectations, fit, grad, log, lognormconst, logparams, logpdf, normconst, pdf,
pdf_function, resample, reset, setcallback, setparams, setsampleFgen, setsmooth,
settestsamples, stochapprox, test([label, verbose, extra_argv, ...]) (run tests
for module using nose)

class scipy.maxentropy.basemodel
A base class providing generic functionality for both small and large maximum
entropy models. Cannot be instantiated.

Methods
beginlogging, clearcache, crossentropy, dual, endlogging, entropydual, fit, grad,
log, logparams, normconst, reset, setcallback, setparams, setsmooth

class scipy.maxentropy.conditionalmodel(F, counts, numcontexts)
A conditional maximum-entropy (exponential-form) model p(x|w) on a discrete
sample space.

This is useful for classification problems: given the context w, what is the
probability of each class x? The form of such a model is:

p(x | w) = exp(theta . f(w, x)) / Z(w, theta)

where Z(w, theta) is a normalization term equal to:

Z(w, theta) = sum_x exp(theta . f(w, x))

The sum is over all classes x in the set Y, which must be supplied to the
constructor as the parameter 'samplespace'.

Such a model form arises from maximizing the entropy of a conditional model
p(x | w) subject to the constraints:

K_i = E f_i(W, X)

where the expectation is with respect to the distribution:

q(w) p(x | w)

where q(w) is the empirical probability mass function derived from observations
of the context w in a training set. Normally the vector K = {K_i} of expectations
is set equal to the expectation of f_i(w, x) with respect to the empirical
distribution.

This method minimizes the Lagrangian dual L of the entropy, which is defined for
conditional models as:

L(theta) = sum_w q(w) log Z(w, theta) - sum_{w,x} q(w, x) [theta . f(w, x)]

Note that both sums are only over the training set {w, x}, not the entire sample
space, since q(w, x) = 0 for all w, x not in the training set.

The partial derivatives of L are:

dL / dtheta_i = K_i - E f_i(X, Y)

where the expectation is as defined above.

Methods
beginlogging, clearcache, crossentropy, dual, endlogging, entropydual,
expectations, fit, grad, log, lognormconst, logparams, logpmf, normconst, pmf,
pmf_function, probdist, reset, setcallback, setfeaturesandsamplespace, setparams,
setsmooth
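The model form above is simply a per-context softmax over the scores theta . f(w, x). A toy numpy sketch (the feature table and parameter values here are invented purely for illustration, and this uses no scipy.maxentropy code):

```python
import numpy as np

# Toy conditional model: 2 contexts w, 3 classes x, 2 features.
# F[w, x] is the feature vector f(w, x); theta are the model parameters.
F = np.array([[[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
              [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]])
theta = np.array([0.5, -0.2])

scores = F @ theta                  # theta . f(w, x) for every (w, x)
Z = np.exp(scores).sum(axis=1)      # Z(w, theta) = sum_x exp(theta . f(w, x))
p = np.exp(scores) / Z[:, None]     # p(x | w); each row sums to one
```

Because Z(w, theta) normalizes within each context, every row of p is a proper probability distribution over the classes.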

Utilities:

arrayexp(x)                  Returns the elementwise antilog of the real array x.
arrayexpcomplex
columnmeans(A)               This is a wrapper for general dense or sparse dot products.
columnvariances(A)           This is a wrapper for general dense or sparse dot products.
densefeaturematrix
densefeatures
dotprod
flatten(a)                   Flattens the sparse matrix or dense array/matrix 'a' into a 1-dimensional array.
innerprod(A, v)              This is a wrapper around general dense or sparse dot products.
innerprodtranspose(A, v)     This is a wrapper around general dense or sparse dot products.
logsumexp(a)                 Compute the log of the sum of exponentials of input elements.
logsumexp_naive
robustlog
rowmeans
sample_wr
sparsefeaturematrix(f, sample[, format])  Returns an (m x n) sparse matrix of non-zero evaluations of the functions f_1,...,f_m in the list f at the points x_1,...,x_n in the sequence 'sample'.
sparsefeatures

scipy.maxentropy.arrayexp(x)
Returns the elementwise antilog of the real array x.

We try to exponentiate with numpy.exp() and, if that fails, with python's
math.exp(). numpy.exp() is about 10 times faster but throws an OverflowError
exception for numerical underflow (e.g. exp(-800)), whereas python's math.exp()
just returns zero, which is much more helpful.

scipy.maxentropy.columnmeans(A)
This is a wrapper for general dense or sparse dot products.

It is only necessary as a common interface for supporting ndarray, scipy
spmatrix, and PySparse arrays.

Returns a dense (1 x n) vector with the column averages of A, which can be an
(m x n) sparse or dense matrix.

>>> a = numpy.array([[1, 2], [3, 4]], 'd')
>>> columnmeans(a)
array([ 2.,  3.])

scipy.maxentropy.columnvariances(A)
This is a wrapper for general dense or sparse dot products.

It is not necessary except as a common interface for supporting ndarray, scipy
spmatrix, and PySparse arrays.

Returns a dense (1 x n) vector with unbiased estimators for the column variances
for each column of the (m x n) sparse or dense matrix A. (The normalization is by
(m - 1).)

>>> a = numpy.array([[1, 2], [3, 4]], 'd')
>>> columnvariances(a)
array([ 2.,  2.])

scipy.maxentropy.flatten(a)
Flattens the sparse matrix or dense array/matrix 'a' into a 1-dimensional array.

scipy.maxentropy.innerprod(A, v)
This is a wrapper around general dense or sparse dot products.

It is not necessary except as a common interface for supporting ndarray, scipy
spmatrix, and PySparse arrays.

Returns the inner product of the (m x n) dense or sparse matrix A with the
n-element dense array v. This is a wrapper for A.dot(v) for dense arrays and
spmatrix objects, and for A.matvec(v, result) for PySparse matrices.

scipy.maxentropy.innerprodtranspose(A, v)
This is a wrapper around general dense or sparse dot products.

It is not necessary except as a common interface for supporting ndarray, scipy
spmatrix, and PySparse arrays.

Computes A^T V, where A is a dense or sparse matrix and V is a numpy array. If A
is sparse, V must be a rank-1 array, not a matrix. This function is efficient for
large matrices A. This is a wrapper for u.T.dot(v) for dense arrays and spmatrix
objects, and for u.matvec_transp(v, result) for PySparse matrices.

scipy.maxentropy.logsumexp(a)
Compute the log of the sum of exponentials of input elements.

Parameters
a : array_like
    Input array.

Returns
res : ndarray
    The result, np.log(np.sum(np.exp(a))), calculated in a numerically more
    stable way.

See Also
numpy.logaddexp, numpy.logaddexp2

Notes
Numpy has a logaddexp function which is very similar to logsumexp.

scipy.maxentropy.sparsefeaturematrix(f, sample, format='csc_matrix')
Returns an (m x n) sparse matrix of non-zero evaluations of the scalar or vector
functions f_1,...,f_m in the list f at the points x_1,...,x_n in the sequence
'sample'.

If format='ll_mat', the PySparse module (or a symlink to it) must be available in
the Python site-packages/ directory. A trimmed-down version, patched for NumPy
compatibility, is available in the SciPy sandbox/pysparse directory.

4.10.2 Usage information

Contains two classes for fitting maximum entropy models (also known as
"exponential family" models) subject to linear constraints on the expectations of
arbitrary feature statistics. One class, "model", is for small discrete sample
spaces, using explicit summation. The other, "bigmodel", is for sample spaces
that are either continuous (and perhaps high-dimensional) or discrete but too
large to sum over, and uses importance sampling and conditional Monte Carlo
methods.

The maximum entropy model has exponential form

p(x) = exp(theta^T f(x)) / Z(theta)

with a real parameter vector theta of the same length as the feature statistic
f(x). For more background, see, for example, Cover and Thomas (1991), Elements of
Information Theory.

See the file bergerexample.py for a walk-through of how to use these routines
when the sample space is small enough to be enumerated.
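The "numerically more stable way" mentioned for logsumexp is the standard max-shift trick: factoring the largest element out of the sum keeps the exponentials from overflowing. A pure-Python sketch of the idea (not the scipy.maxentropy source):

```python
import math

def logsumexp(a):
    """log(sum(exp(a))) computed without overflow, by shifting every
    element by the maximum before exponentiating."""
    m = max(a)
    return m + math.log(sum(math.exp(x - m) for x in a))

# exp(1000) overflows a float, but the shifted form is fine:
print(logsumexp([1000.0, 1000.0]))   # 1000 + log(2) = 1000.6931...
```

The naive np.log(np.sum(np.exp(a))) would return inf for this input, which is exactly why the stable form matters when theta . f(x) scores get large.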

See bergerexamplesimulated.py for a similar walk-through using simulation.

4.11 Miscellaneous routines (scipy.misc)

Various utilities that don't have another home.

Note that the Python Imaging Library (PIL) is not a dependency of SciPy and
therefore the pilutil module is not available on systems that don't have PIL
installed.

bytescale(data[, cmin, cmax, high, low])      Byte scales an array (image).
central_diff_weights(Np[, ndiv])              Return weights for an Np-point central derivative of order ndiv.
comb(N, k[, exact])                           The number of combinations of N things taken k at a time.
derivative(func, x0[, dx, n, args, order])    Find the n-th derivative of a function at point x0.
factorial(n[, exact])                         The factorial function, n! = special.gamma(n+1).
factorial2(n[, exact])                        Double factorial.
factorialk(n, k[, exact])                     n(!!...!) = multifactorial of order k.
fromimage(im[, flatten])                      Return a copy of a PIL image as a numpy array.
imfilter(arr, ftype)                          Simple filtering of an image.
imread(name[, flatten])                       Read an image file from a filename.
imresize(arr, size[, interp, mode])           Resize an image.
imrotate(arr, angle[, interp])                Rotate an image counter-clockwise by angle degrees.
imsave(name, arr)                             Save an array as an image.
imshow(arr)                                   Simple showing of an image through an external viewer.
info([object, maxwidth, output, toplevel])    Get help information for a function, class, or module.
lena()                                        Get classic image processing example image, Lena, at 8-bit grayscale bit-depth, 512 x 512 size.
pade(an, m)                                   Given Taylor series coefficients in an, return a Pade approximation as the ratio of two polynomials.
radon(arr[, theta])
toimage(arr[, high, low, cmin, cmax, pal, ...])  Takes a numpy array and returns a PIL image.

scipy.misc.bytescale(data, cmin=None, cmax=None, high=255, low=0)
Byte scales an array (image).

Parameters
data : ndarray
    PIL image data array.
cmin : scalar
    Bias scaling of small values. Default is data.min().
cmax : scalar
    Bias scaling of large values. Default is data.max().
high : scalar
    Scale max value to high.
low : scalar
    Scale min value to low.

Returns
img_array : ndarray
    Bytescaled array.
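The scaling behavior described by these parameters is an affine map from [cmin, cmax] onto [low, high] followed by a cast to uint8. A hedged numpy sketch of that behavior (an illustration of the documented semantics, not the scipy.misc source):

```python
import numpy as np

def bytescale_sketch(data, cmin=None, cmax=None, high=255, low=0):
    """Linearly map [cmin, cmax] onto [low, high] and cast to uint8."""
    cmin = data.min() if cmin is None else cmin
    cmax = data.max() if cmax is None else cmax
    scale = (high - low) / float(cmax - cmin)
    return ((data - cmin) * scale + low).astype(np.uint8)

img = np.array([[10.0, 20.0], [30.0, 50.0]])
out = bytescale_sketch(img)
# By default the minimum maps to 0 and the maximum to 255.
```

Passing explicit cmin/cmax instead biases the scaling, as the cmin=0, cmax=255 example further below shows.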

Examples
>>> img = array([[ 91.06794177,   3.39058326,  84.4221549 ],
                 [ 73.88003259,  80.91433048,   4.88878881],
                 [ 51.53875334,  34.45808177,  27.5873488 ]])
>>> bytescale(img)
array([[255,   0, 236],
       [205, 225,   4],
       [140,  90,  70]], dtype=uint8)
>>> bytescale(img, high=200, low=100)
array([[200, 100, 192],
       [180, 188, 102],
       [155, 135, 128]], dtype=uint8)
>>> bytescale(img, cmin=0, cmax=255)
array([[91,  3, 84],
       [74, 81,  5],
       [52, 34, 28]], dtype=uint8)

scipy.misc.central_diff_weights(Np, ndiv=1)
Return weights for an Np-point central derivative of order ndiv assuming
equally-spaced function points.

If weights are in the vector w, then the derivative is

w[0] * f(x - h0*dx) + ... + w[-1] * f(x + h0*dx)

Notes
Can be inaccurate for large number of points.

scipy.misc.comb(N, k, exact=0)
The number of combinations of N things taken k at a time.

This is often expressed as "N choose k".

Parameters
N : int, array
    Number of things.
k : int, array
    Number of elements taken.
exact : int, optional
    If exact is 0, then floating point precision is used, otherwise exact long
    integer is computed.

Returns
val : int, array
    The total number of combinations.

Notes
•Array arguments accepted only for exact=0 case.
•If k > N, N < 0, or k < 0, then a 0 is returned.

Examples
>>> k = np.array([3, 4])
>>> n = np.array([10, 10])
>>> sc.comb(n, k, exact=False)
array([ 120.,  210.])
>>> sc.comb(10, 3, exact=True)
120L

scipy.misc.derivative(func, x0, dx=1.0, n=1, args=(), order=3)
Find the n-th derivative of a function at point x0.

Given a function, use a central difference formula with spacing dx to compute the
n-th derivative at x0.

Parameters
func : function
    Input function.
x0 : float
    The point at which the n-th derivative is found.
dx : int, optional
    Spacing.
n : int, optional
    Order of the derivative. Default is 1.
args : tuple, optional
    Arguments
order : int, optional
    Number of points to use, must be odd.

Notes
Decreasing the step size too small can result in round-off error.

Examples
>>> def x2(x):
...     return x*x
...
>>> derivative(x2, 2)
4.0

scipy.misc.factorial(n, exact=0)
The factorial function, n! = special.gamma(n+1).

If exact is 0, then floating point precision is used, otherwise exact long
integer is computed.

•Array argument accepted only for exact=0 case.
•If n < 0, the return value is 0.

Parameters
n : int or array_like of ints
    Calculate n!. Arrays are only supported with exact set to False. If n < 0,
    the return value is 0.
exact : bool, optional
    The result can be approximated rapidly using the gamma-formula above. If
    exact is set to True, calculate the answer exactly using integer arithmetic.
    Default is False.

Returns
nf : float or int
    Factorial of n, as an integer or a float depending on exact.
the return value is 0.SciPy Reference Guide. Default is False. Default is 1. dx : int. If exact is 0. Examples >>> def x2(x): . n! = special.0. Reference . args=(). optional Spacing.misc. args : tuple. exact=0) The factorial function. order=3) Find the n-th derivative of a function at point x0. optional Order of the derivative. If n < 0. Arrays are only supported with exact set to False..0rc1 scipy.. Release 0. as an integer or a ﬂoat depending on exact. the return value is 0. return x*x ..derivative(func. n=1. optional The result can be approximated rapidly using the gamma-formula above. calculate the answer exactly using integer arithmetic. otherwise exact long integer is computed.misc. must be odd.factorial(n. Parameters n : int or array_like of ints Calculate n!. Parameters func : function Input function.gamma(n+1). Returns nf : ﬂoat or int Factorial of n. Notes Decreasing the step size too small can result in round-off error. 310 Chapter 4. optional Number of points to use. then ﬂoating point precision is used.. Given a function. optional Arguments order : int. use a central difference formula with spacing dx to compute the n-th derivative at x0. x0. •Array argument accepted only for exact=0 case. If exact is set to True. exact : bool. •If n<0.0 scipy. n : int. >>> derivative(x2. 2) 4. dx=1. x0 : ﬂoat The point at which nth derivative is found.10.

Release 0. Arrays are only supported with exact set to False.0rc1 Examples >>> arr = np.misc.factorial(arr. i. exact=1) n(!!. optional The result can be approximated rapidly using the gamma-formula above (default). exact : bool.array([3. calculate the answer exactly using integer arithmetic. If n < 0. It can be approximated n odd n even n!! = special. If exact is set to True. 120.5]) >>> sc. Miscellaneous routines (scipy. array_like Calculate multifactorial. exact=True) 105L scipy. the return value is 0. Examples >>> factorial2(7. k.!) = multifactorial of order k k times Parameters n : int. If n < 0..11. the return value is 0. 7!! numerically as: = 7 * 5 * 3 * 1. Returns nff : ﬂoat or int Double factorial of n. exact=False) array([ 6. 24.. as an int or a ﬂoat depending on exact. exact=False) array(105.misc. exact : bool.factorial2(n.00000000000001) >>> factorial2(7. calculate the answer exactly using integer arithmetic. exact=False) Double factorial.SciPy Reference Guide. Raises NotImplementedError : Raises when exact is False 4.factorial(5.]) >>> sc.. Arrays are only supported with exact set to False.factorialk(n. exact=True) 120L scipy. optional If exact is set to True. This is the factorial with every second value skipped.4..e..misc) 311 . Returns val : int Multi factorial of n.gamma(n/2+1)*2**((m+1)/2)/sqrt(pi) = 2**(n/2) * (n/2)! Parameters n : int or array_like Calculate n!!.10.

imfilter(arr.factorialk(5. 1. exact=True) 120L >>> sc. Returns fromimage : ndarray The different colour bands/channels are stored in the third dimension. Release 0. If the ﬁlter you are trying to apply is unsupported.fromimage(im. scipy.misc. ‘contour’. . Parameters im : PIL image Input image.0rc1 Examples >>> sc. ‘emboss’. ‘smooth_more’. ftype) Simple ﬁltering of an image. ‘edge_enhance_more’. scipy. ﬂatten=0) Return a copy of a PIL image as a numpy array. Raises ValueError : Unknown ﬁlter type. ﬂatten : bool. Parameters arr : ndarray The array of Image in which the ﬁlter is to be applied.misc. optional If True. exact=True) 10L scipy. ftype : str The ﬁlter that has to be applied. Returns imread : ndarray The array obtained by reading image from ﬁle name.imread(name. ‘detail’. ‘smooth’. ‘sharpen’. such that a grey-image is MxN. 3. Returns imﬁlter : ndarray The array with ﬁlter applied. an RGB-image MxNx3 and an RGBA-image MxNx4. Legal values are: ‘blur’. ﬂattens the color layers into a single gray-scale layer. ‘ﬁnd_edges’.10. 312 Chapter 4.SciPy Reference Guide. ﬂatten : bool If true. ‘edge_enhance’.misc. convert the output to grey-scale. Parameters name : str The ﬁle name to be read.factorialk(5. ﬂatten=0) Read an image ﬁle from a ﬁlename. Reference .

‘L’. ﬂoat or tuple • int . Returns imresize : ndarray The resized array of image. interp : str. Notes Interpolation methods can be: • ‘nearest’ : for nearest neighbor • ‘bilinear’ : for bilinear • ‘cubic’ : cubic • ‘bicubic’ : for bicubic 4. ‘bicubic’ or ‘cubic’). • tuple .11. • ﬂoat .SciPy Reference Guide. mode : str The PIL image mode (‘P’. Parameters arr : nd_array The array of image to be resized. scipy. scipy. etc. ‘bilinear’. Release 0. optional Interpolation Returns imrotate : nd_array The rotated array of image.misc. Parameters arr : nd_array Input array of image to be rotated.0rc1 Notes The image is ﬂattened by calling convert(‘F’) on the resulting image object. Miscellaneous routines (scipy. mode=None) Resize an image.Size of the output image. interp=’bilinear’. size.).Fraction of current size. size : int. angle. angle : ﬂoat The angle of rotation. interp : str Interpolation to use for re-sizing (‘nearest’.imrotate(arr.10. interp=’bilinear’) Rotate an image counter-clockwise by angle degrees.imresize(arr.Percentage of current size.misc.misc) 313 .

scipy.misc.imsave(name, arr)
Save an array as an image.

Parameters
filename : str
    Output filename.
image : ndarray, MxN or MxNx3 or MxNx4
    Array containing image values. If the shape is MxN, the array represents a
    grey-level image. Shape MxNx3 stores the red, green and blue bands along the
    last dimension. An alpha layer may be included, specified as the last colour
    band of an MxNx4 array.

Examples
Construct an array of gradient intensity values and save to file:

>>> x = np.zeros((255, 255), dtype=np.uint8)
>>> x[:] = np.arange(255)
>>> imsave('/tmp/gradient.png', x)

Construct an array with three colour bands (R, G, B) and store to file:

>>> rgb = np.zeros((255, 255, 3), dtype=np.uint8)
>>> rgb[..., 0] = np.arange(255)
>>> rgb[..., 1] = 55
>>> rgb[..., 2] = 1 - np.arange(255)
>>> imsave('/tmp/rgb_gradient.png', rgb)

scipy.misc.imshow(arr)
Simple showing of an image through an external viewer.

Uses the image viewer specified by the environment variable
SCIPY_PIL_IMAGE_VIEWER, or if that is not defined then see, to view a temporary
file generated from array data.

Parameters
arr : ndarray
    Array of image data to show.

Returns
None

Examples
>>> a = np.tile(np.arange(255), (255, 1))
>>> from scipy import misc
>>> misc.pilutil.imshow(a)

scipy.misc.info(object=None, maxwidth=76, output=<open file '<stdout>', mode 'w' at 0x323070>, toplevel='scipy')
Get help information for a function, class, or module.

Parameters
object : object or str, optional
    Input object or name to get information about. If object is a numpy object,
    its docstring is given. If it is a string, available modules are searched for
    matching objects. If None, information about info itself is returned.
maxwidth : int, optional

    Printing width.
output : file like object, optional
    File like object that the output is written to, default is stdout. The object
    has to be opened in 'w' or 'a' mode.
toplevel : str, optional
    Start search at this level.

See Also
source, lookfor

Notes
When used interactively with an object, np.info(obj) is equivalent to help(obj)
on the Python prompt or obj? on the IPython prompt.

Examples
>>> np.info(np.polyval)
polyval(p, x)
Evaluate the polynomial p at x.
...

When using a string for object it is possible to get multiple results.

>>> np.info('fft')
*** Found in numpy ***
Core FFT routines
...
*** Found in numpy.fft ***
fft(a, n=None, axis=-1)
...
*** Repeat reference found in numpy.fft.fftpack ***
*** Total of 3 references found. ***

scipy.misc.lena()
Get classic image processing example image, Lena, at 8-bit grayscale bit-depth,
512 x 512 size.

Parameters
None

Returns
lena : ndarray
    Lena image

Examples
>>> import scipy.misc
>>> lena = scipy.misc.lena()
>>> lena.shape
(512, 512)
>>> lena.max()
245
>>> lena.dtype
dtype('int32')
mode=None. the channel_axis argument tells which dimension of the array holds the channel data. Reference .show() 0 100 200 300 400 500 0 100 200 300 400 500 scipy. low=0. channel_axis=None) Takes a numpy array and returns a PIL image. the pal keyword.pade(an. For 2-D arrays.radon(arr.SciPy Reference Guide.toimage(arr. scipy. high=255. For 3-D arrays if one of the dimensions is 3.misc.imshow(lena) plt. cmax=None. the mode is ‘RGB’ by default or ‘YCbCr’ if selected.pyplot as plt plt. The mode of the PIL image depends on the array shape. pal=None. return a Pade approximation to the function as the ratio of two polynomials p / q where the order of q is m. 4. m) Given Taylor series coefﬁcients in an.misc. theta=None) scipy.12 Multi-dimensional image processing (scipy. if pal is a valid (N. 316 Chapter 4.ndimage) This package contains various functions for multi-dimensional image processing. unless mode is given as ‘F’ or ‘I’ in which case a ﬂoat and/or integer array is made For 3-D arrays.3) byte-array giving the RGB values (from 0 to 255) then mode=’P’.10. cmin=None. otherwise mode=’L’.0rc1 >>> >>> >>> >>> import matplotlib. Release 0.gray() plt.misc. and the mode keyword. if the The numpy array must be either 2 dimensional or 3 dimensional.

]) generic_filter1d(input..]) derivatives..ndimage. output.. output. axis. cval]) uniform_filter(input[. Calculate a one-dimensional correlation along the given axis. maximum_filter(input[. Calculates a multi-dimensional minimum ﬁlter. ..]) median_filter(input[. One-dimensional Gaussian ﬁlter.. cval]) Calculate a multidimensional laplace ﬁlter using an estimation for the second derivative based on differences. Calculate a Prewitt ﬁlter. size.]) sobel(input[..]) derivatives.]) percentile_filter(input. .. sigma[. .. Calculates a multi-dimensional percentile ﬁlter. weights[. footprint.. Calculate a multidimensional gradient magnitude using gaussian sigma[... Calculate a one-dimensional ﬁlter along the given axis. weights. Multi-dimensional Gaussian ﬁlter.filters. order. output=None.]) scipy. Calculate a multidimensional laplace ﬁlter using gaussian second output. sigma[... 4.]) correlate1d(input. size.12.]) minimum_filter1d(input.. rank[.]) convolve1d(input. generic_laplace(input. mode. . size. laplace(input[. mode. output. Calculate a gradient magnitude using the provided function for the derivative) gradient... size. size. axis. output. Calculate a one-dimensional uniform ﬁlter along the given axis. function.]) prewitt(input[. .]) maximum_filter1d(input. generic_filter(input. ... axis. . Calculates a multi-dimensional ﬁlter using the given function.. . function[. weights[. Multi-dimensional correlation.]) minimum_filter(input[.. output. output..]) derivative function... ﬁlter_size) generic_gradient_magnitude(input. mode. . Multi-dimensional image processing (scipy. . . sigma[. output.]) correlate(input. size. cval]) rank_filter(input.10. . cval=0. percentile[. output.0.. axis.ndimage. Calculate a one-dimensional convolution along the given axis. mode. size[. .]) gaussian_filter1d(input. weights[. footprint.. size[.0rc1 4. .]) gaussian_gradient_magnitude(input. mode.. weights[. .SciPy Reference Guide. . axis.. gaussian_laplace(input.]) gaussian_filter(input.. mode. 
size. footprint. . . Calculate a one-dimensional minimum ﬁlter along the given axis. axis.convolve(input. footprint.ndimage) 317 .1 Filters scipy.]) uniform_filter1d(input. Calculates a multi-dimensional median ﬁlter.. Calculate a multidimensional laplace ﬁlter using the provided second derivative2[. Multi-dimensional convolution.12. axis.filters convolve(input. .. Calculate a Sobel ﬁlter. Release 0... Multi-dimensional uniform ﬁlter. Calculates a multi-dimensional maximum ﬁlter. mode=’reﬂect’.. Calculate a one-dimensional maximum ﬁlter along the given axis..... origin=0) Multi-dimensional convolution. size[. axis. Calculates a multi-dimensional rank ﬁlter..

scipy.ndimage.filters.convolve(input, weights, output=None, mode='reflect',
cval=0.0, origin=0)
Multi-dimensional convolution.

The array is convolved with the given kernel.

Parameters
input : array_like
    Input array to filter.
weights : array_like
    Array of weights, same number of dimensions as input.
output : ndarray, optional
    The output parameter passes an array in which to store the filter output.
mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled. For
    'constant' mode, values beyond borders are set to be cval. Default is
    'reflect'.
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0.
origin : scalar, optional
    The origin parameter controls the placement of the filter. Default is 0.

Returns
result : ndarray
    The result of convolution of input with weights.

See Also
correlate : Correlate an image with a kernel.

Notes
Each value in result is C_i = sum_j I_{i+j-k} W_j, where W is the weights kernel,
j is the n-D spatial index over W, I is the input and k is the coordinate of the
center of W, specified by origin in the input parameters.

Examples
Perhaps the simplest case to understand is mode='constant', cval=0.0, because in
this case borders (i.e., where the weights kernel, centered on any one value,
extends beyond an edge of input) are treated as if the input were padded with
cval.

>>> a = np.array([[1, 2, 0, 0],
...               [5, 3, 0, 4],
...               [0, 0, 0, 7],
...               [9, 3, 0, 0]])
>>> k = np.array([[1, 1, 1], [1, 1, 0], [1, 0, 0]])
>>> from scipy import ndimage
>>> ndimage.convolve(a, k, mode='constant', cval=0.0)
array([[11, 10,  7,  4],
       [10,  3, 11, 11],
       [15, 12, 14,  7],
       [12,  3,  7,  0]])

Setting cval=1.0 is equivalent to padding the outer edge of input with 1.0's (and
then extracting only the original region of the result).

>>> ndimage.convolve(a, k, mode='constant', cval=1.0)
array([[13, 11,  8,  7],
       [11,  3, 11, 14],
       [16, 12, 14, 10],
       [15,  6, 10,  5]])
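The padding description can be checked directly: constant-mode convolution is equivalent to summing the flipped kernel over a cval-padded copy of the input. A small numpy sketch of the formula C_i = sum_j I_{i+j-k} W_j for a 2-D, odd-sized kernel centered at its middle (independent of scipy):

```python
import numpy as np

def convolve2d_constant(a, k, cval=0.0):
    """2-D convolution with mode='constant': pad with cval, then sum each
    input neighborhood against the flipped kernel."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(a.astype(float), ((ph, ph), (pw, pw)),
                    mode='constant', constant_values=cval)
    kflip = k[::-1, ::-1]   # convolution flips the kernel (vs. correlation)
    out = np.zeros(a.shape, dtype=float)
    for i in range(a.shape[0]):
        for j in range(a.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * kflip).sum()
    return out

a = np.array([[1, 2, 0, 0],
              [5, 3, 0, 4],
              [0, 0, 0, 7],
              [9, 3, 0, 0]])
k = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]])
out = convolve2d_constant(a, k)   # matches ndimage.convolve(a, k, mode='constant')
```

The kernel flip is what distinguishes convolve from correlate: correlating with k gives the same result as convolving with k reversed along every axis.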

With mode='reflect' (the default), outer values are reflected at the edge of
input to fill in missing values.

>>> b = np.array([[2, 0, 0], [1, 0, 0], [0, 0, 0]])
>>> k = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0]])
>>> ndimage.convolve(b, k, mode='reflect')
array([[5, 0, 0],
       [3, 0, 0],
       [1, 0, 0]])

This includes diagonally at the corners.

>>> k = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
>>> ndimage.convolve(b, k)
array([[4, 2, 0],
       [3, 2, 0],
       [1, 1, 0]])

With mode='nearest', the single nearest value in to an edge in input is repeated
as many times as needed to match the overlapping weights.

>>> c = np.array([[2, 0, 1], [1, 0, 0], [0, 0, 0]])
>>> k = np.array([[0, 1, 0], [0, 1, 0], [0, 1, 0], [0, 1, 0]])
>>> ndimage.convolve(c, k, mode='nearest')
array([[7, 0, 3],
       [5, 0, 2],
       [3, 0, 1]])

scipy.ndimage.filters.convolve1d(input, weights, axis=-1, output=None,
mode='reflect', cval=0.0, origin=0)
Calculate a one-dimensional convolution along the given axis.

The lines of the array along the given axis are convolved with the given weights.

Parameters
input : array-like
    input array to filter
weights : ndarray
    one-dimensional sequence of numbers
axis : integer, optional
    axis of input along which to calculate. Default is -1
output : array, optional
    The output parameter passes an array in which to store the filter output.

mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled, where cval
    is the value when mode is equal to 'constant'. Default is 'reflect'
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0
origin : scalar, optional
    The origin parameter controls the placement of the filter. Default 0

scipy.ndimage.filters.correlate(input, weights, output=None, mode='reflect',
cval=0.0, origin=0)
Multi-dimensional correlation.

The array is correlated with the given kernel.

Parameters
input : array-like
    input array to filter
weights : ndarray
    array of weights, same number of dimensions as input
output : array, optional
    The output parameter passes an array in which to store the filter output.
mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled, where cval
    is the value when mode is equal to 'constant'. Default is 'reflect'
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0
origin : scalar, optional
    The origin parameter controls the placement of the filter. Default 0

See Also
convolve : Convolve an image with a kernel.

scipy.ndimage.filters.correlate1d(input, weights, axis=-1, output=None,
mode='reflect', cval=0.0, origin=0)
Calculate a one-dimensional correlation along the given axis.

The lines of the array along the given axis are correlated with the given
weights.

Parameters
input : array-like
    input array to filter
weights : array
    one-dimensional sequence of numbers
axis : integer, optional
    axis of input along which to calculate. Default is -1

output : array, optional
    The output parameter passes an array in which to store the filter output.
mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled, where cval
    is the value when mode is equal to 'constant'. Default is 'reflect'
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0
origin : scalar, optional
    The origin parameter controls the placement of the filter. Default 0

scipy.ndimage.filters.gaussian_filter(input, sigma, order=0, output=None,
mode='reflect', cval=0.0)
Multi-dimensional Gaussian filter.

Parameters
input : array-like
    input array to filter
sigma : scalar or sequence of scalars
    standard deviation for Gaussian kernel. The standard deviations of the
    Gaussian filter are given for each axis as a sequence, or as a single number,
    in which case it is equal for all axes.
order : {0, 1, 2, 3} or sequence from same set, optional
    The order of the filter along each axis is given as a sequence of integers,
    or as a single number. An order of 0 corresponds to convolution with a
    Gaussian kernel. An order of 1, 2, or 3 corresponds to convolution with the
    first, second or third derivatives of a Gaussian. Higher order derivatives
    are not implemented
output : array, optional
    The output parameter passes an array in which to store the filter output.
mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled, where cval
    is the value when mode is equal to 'constant'. Default is 'reflect'
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0

Notes
The multi-dimensional filter is implemented as a sequence of one-dimensional
convolution filters. The intermediate arrays are stored in the same data type as
the output. Therefore, for output types with a limited precision, the results may
be imprecise because intermediate results may be stored with insufficient
precision.

scipy.ndimage.filters.gaussian_filter1d(input, sigma, axis=-1, order=0,
output=None, mode='reflect', cval=0.0)
One-dimensional Gaussian filter.

Parameters
input : array-like
    input array to filter

mode : {‘reﬂect’.0) Calculate a multidimensional laplace ﬁlter using gaussian second derivatives. optional The mode parameter determines how the array borders are handled.0 scipy.’nearest’. An order of 1. Parameters input : array-like input array to ﬁlter sigma : scalar or sequence of scalars The standard deviations of the Gaussian ﬁlter are given for each axis as a sequence. Default is ‘reﬂect’ cval : scalar. sigma.’mirror’. Default is 0.SciPy Reference Guide. ‘wrap’}. ‘wrap’}. mode=’reﬂect’. where cval is the value when mode is equal to ‘constant’.10. mode : {‘reﬂect’. output=None. or as a single number..gaussian_laplace(input. Reference . output : array.gaussian_gradient_magnitude(input. 322 Chapter 4. sigma.0) Calculate a multidimensional gradient magnitude using gaussian derivatives.ndimage.ndimage. second or third derivatives of a Gaussian. output=None. optional Value to ﬁll past edges of input if mode is ‘constant’.’constant’. optional Value to ﬁll past edges of input if mode is ‘constant’.’constant’. where cval is the value when mode is equal to ‘constant’. in which case it is equal for all axes. cval=0. or as a single number. optional axis of input along which to calculate. in which case it is equal for all axes. optional The output parameter passes an array in which to store the ﬁlter output. Default is ‘reﬂect’ cval : scalar. Release 0. 1. 2. optional The mode parameter determines how the array borders are handled. Default is 0.0rc1 sigma : scalar standard deviation for Gaussian kernel axis : integer. mode=’reﬂect’.. 3}.0 scipy. cval=0. Higher order derivatives are not implemented output : array.’nearest’.’mirror’.filters. 2. optional The output parameter passes an array in which to store the ﬁlter output.filters. optional An order of 0 corresponds to convolution with a Gaussian kernel. or 3 corresponds to convolution with the ﬁrst. 
Parameters input : array-like input array to ﬁlter sigma : scalar or sequence of scalars The standard deviations of the Gaussian ﬁlter are given for each axis as a sequence. Default is -1 order : {0.

where cval is the value when mode is equal to ‘constant’.10. ‘wrap’}. optional Sequence of extra positional arguments to pass to passed function extra_keywords : dict. at every element position.m)). Default is ‘reﬂect’ cval : scalar. origin=0. optional The mode parameter determines how the array borders are handled.m) is equivalent to footprint=np. mode : {‘reﬂect’. Default 0 : extra_arguments : sequence. We adjust size to the number of dimensions of the input array.generic_filter(input.’constant’. ‘wrap’}.12.SciPy Reference Guide.0 origin : scalar.10.’nearest’.’mirror’. below footprint : array. optional The mode parameter determines how the array borders are handled. Thus size=(n. mode : {‘reﬂect’. footprint is a boolean array that speciﬁes (implicitly) a shape. cval=0.’constant’. Parameters input : array-like input array to ﬁlter function : callable function to apply at each element size : scalar or tuple. Default is 0.filters. optional dict of extra keyword arguments to pass to passed function 4. Default is 0. but also which of the elements within this shape will get passed to the ﬁlter function. Release 0.ndimage. size=None. optional The ‘‘origin‘‘ parameter controls the placement of the ﬁlter. if the input array is shape (10.ones((n. mode=’reﬂect’. then the actual size used is (2. Multi-dimensional image processing (scipy.2. output : array. optional See footprint. At each element the provided function is called.10).0. optional Value to ﬁll past edges of input if mode is ‘constant’.0rc1 output : array. so that. optional Value to ﬁll past edges of input if mode is ‘constant’. footprint=None. optional The output parameter passes an array in which to store the ﬁlter output. and size is 2. to deﬁne the input to the ﬁlter function. function. extra_arguments=().’mirror’. size gives the shape that is taken from the input array.0 scipy. where cval is the value when mode is equal to ‘constant’. 
extra_keywords=None) Calculates a multi-dimensional ﬁlter using the given function. optional The output parameter passes an array in which to store the ﬁlter output. output=None. optional Either size or footprint must be deﬁned. The input values within the ﬁlter footprint at that element are passed to the function as a 1D array of double values.2). Default is ‘reﬂect’ cval : scalar.ndimage) 323 .’nearest’.

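The separability described in the gaussian_filter Notes can be checked directly (a quick sketch, not part of the original docstrings): smoothing with the N-dimensional filter gives the same result as applying the one-dimensional filter once per axis.

```python
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(0)
a = rng.rand(16, 16)

# Full 2-D Gaussian smoothing in one call...
smoothed = ndimage.gaussian_filter(a, sigma=2)

# ...equals two 1-D passes, one along each axis.
by_axes = ndimage.gaussian_filter1d(
    ndimage.gaussian_filter1d(a, sigma=2, axis=0), sigma=2, axis=1)

print(np.allclose(smoothed, by_axes))  # True
```

Because the input here is float64, the intermediate-precision caveat from the Notes does not apply; with an integer output type the two results could differ.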
scipy.ndimage.filters.generic_filter1d(input, function, filter_size, axis=-1, output=None, mode='reflect', cval=0.0, origin=0, extra_arguments=(), extra_keywords=None)
    Calculate a one-dimensional filter along the given axis.

    generic_filter1d iterates over the lines of the array, calling the given function at each line. The arguments of the line are the input line, and the output line. The input and output lines are 1D double arrays. The input line is extended appropriately according to the filter size and origin. The output line must be modified in-place with the result.

    Parameters
    input : array-like
        input array to filter
    function : callable
        function to apply along given axis
    filter_size : scalar
        length of the filter
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.
    extra_arguments : sequence, optional
        Sequence of extra positional arguments to pass to passed function
    extra_keywords : dict, optional
        dict of extra keyword arguments to pass to passed function

scipy.ndimage.filters.generic_gradient_magnitude(input, derivative, output=None, mode='reflect', cval=0.0, extra_arguments=(), extra_keywords=None)
    Calculate a gradient magnitude using the provided function for the gradient.

    Parameters
    input : array-like
        input array to filter
    derivative : callable
        Callable with the following signature:

            derivative(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)

        See extra_arguments, extra_keywords below. derivative can assume that input and output are ndarrays. Note that the output from derivative is modified inplace; be careful to copy important inputs before returning them.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    extra_arguments : sequence, optional
        Sequence of extra positional arguments to pass to passed function
    extra_keywords : dict, optional
        dict of extra keyword arguments to pass to passed function

scipy.ndimage.filters.generic_laplace(input, derivative2, output=None, mode='reflect', cval=0.0, extra_arguments=(), extra_keywords=None)
    Calculate a multidimensional laplace filter using the provided second derivative function.

    Parameters
    input : array-like
        input array to filter
    derivative2 : callable
        Callable with the following signature:

            derivative2(input, axis, output, mode, cval, *extra_arguments, **extra_keywords)

        See extra_arguments, extra_keywords below.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    extra_arguments : sequence, optional
        Sequence of extra positional arguments to pass to passed function
    extra_keywords : dict, optional
        dict of extra keyword arguments to pass to passed function

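To illustrate generic_filter (a sketch, not part of the original docstrings): the callable receives each footprint's values as a flat 1-D array, so passing np.median with a 3x3 size reproduces the dedicated median filter exactly.

```python
import numpy as np
from scipy import ndimage

a = np.arange(25.).reshape(5, 5)

# generic_filter hands each 3x3 neighbourhood to np.median as a flat
# 1-D double array; both calls use the default mode='reflect'.
via_generic = ndimage.generic_filter(a, np.median, size=3)
via_median = ndimage.median_filter(a, size=3)

print(np.allclose(via_generic, via_median))  # True
```

The dedicated filter is far faster; generic_filter is for neighbourhood operations that have no built-in equivalent.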
scipy.ndimage.filters.laplace(input, output=None, mode='reflect', cval=0.0)
    Calculate a multidimensional laplace filter using an estimation for the second derivative based on differences.

    Parameters
    input : array-like
        input array to filter
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.

scipy.ndimage.filters.maximum_filter(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculates a multi-dimensional maximum filter.

    Parameters
    input : array-like
        input array to filter
    size : scalar or tuple, optional
        See footprint, below.
    footprint : array, optional
        Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2).
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.maximum_filter1d(input, size, axis=-1, output=None, mode='reflect', cval=0.0, origin=0)
    Calculate a one-dimensional maximum filter along the given axis.

    The lines of the array along the given axis are filtered with a maximum filter of given size.

    Parameters
    input : array-like
        input array to filter
    size : int
        length along which to calculate 1D maximum
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.median_filter(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculates a multi-dimensional median filter.

    Parameters
    input : array-like
        input array to filter
    size : scalar or tuple, optional
        See footprint, below.
    footprint : array, optional
        Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2).
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

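A worked maximum_filter example (a sketch, not from the original docstrings) shows both the window semantics and the default 'reflect' border handling: each output element is the maximum over a 3x3 window centred on it, with edge rows and columns mirrored outward.

```python
import numpy as np
from scipy import ndimage

a = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])

# size=3 means a 3x3 window; out-of-bounds positions reflect back
# into the array (mode='reflect'), e.g. the window at (0, 0) sees
# only rows 0-1 and columns 0-1, so its maximum is 5.
result = ndimage.maximum_filter(a, size=3)
print(result)
# [[5 6 6]
#  [8 9 9]
#  [8 9 9]]
```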
scipy.ndimage.filters.minimum_filter(input, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculates a multi-dimensional minimum filter.

    Parameters
    input : array-like
        input array to filter
    size : scalar or tuple, optional
        See footprint, below.
    footprint : array, optional
        Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2).
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.minimum_filter1d(input, size, axis=-1, output=None, mode='reflect', cval=0.0, origin=0)
    Calculate a one-dimensional minimum filter along the given axis.

    The lines of the array along the given axis are filtered with a minimum filter of given size.

    Parameters
    input : array-like
        input array to filter
    size : int
        length along which to calculate 1D minimum
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.percentile_filter(input, percentile, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculates a multi-dimensional percentile filter.

    Parameters
    input : array-like
        input array to filter
    percentile : scalar
        The percentile parameter may be less than zero, i.e., percentile = -20 equals percentile = 80.
    size : scalar or tuple, optional
        See footprint, below.
    footprint : array, optional
        Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2).
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.prewitt(input, axis=-1, output=None, mode='reflect', cval=0.0)
    Calculate a Prewitt filter.

    Parameters
    input : array-like
        input array to filter
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.

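Two identities worth checking in a quick sketch (not part of the original docstrings): a negative percentile counts from the top, as the percentile parameter documentation states, and a minimum filter is a maximum filter applied to the negated input.

```python
import numpy as np
from scipy import ndimage

a = np.arange(36.).reshape(6, 6)

# Documented: percentile = -20 equals percentile = 80.
p_neg = ndimage.percentile_filter(a, percentile=-20, size=3)
p_pos = ndimage.percentile_filter(a, percentile=80, size=3)
print(np.allclose(p_neg, p_pos))  # True

# Min/max duality: min(window) == -max(-window).
min_direct = ndimage.minimum_filter(a, size=3)
min_via_max = -ndimage.maximum_filter(-a, size=3)
print(np.allclose(min_direct, min_via_max))  # True
```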
scipy.ndimage.filters.rank_filter(input, rank, size=None, footprint=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculates a multi-dimensional rank filter.

    Parameters
    input : array-like
        input array to filter
    rank : integer
        The rank parameter may be less than zero, i.e., rank = -1 indicates the largest element.
    size : scalar or tuple, optional
        See footprint, below.
    footprint : array, optional
        Either size or footprint must be defined. size gives the shape that is taken from the input array, at every element position, to define the input to the filter function. footprint is a boolean array that specifies (implicitly) a shape, but also which of the elements within this shape will get passed to the filter function. Thus size=(n,m) is equivalent to footprint=np.ones((n,m)). We adjust size to the number of dimensions of the input array, so that, if the input array is shape (10,10,10), and size is 2, then the actual size used is (2,2,2).
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

scipy.ndimage.filters.sobel(input, axis=-1, output=None, mode='reflect', cval=0.0)
    Calculate a Sobel filter.

    Parameters
    input : array-like
        input array to filter
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.

scipy.ndimage.filters.uniform_filter(input, size=3, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional uniform filter.

    Parameters
    input : array-like
        input array to filter
    size : int or sequence of ints
        The sizes of the uniform filter are given for each axis as a sequence, or as a single number, in which case the size is equal for all axes.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

    Notes
    The multi-dimensional filter is implemented as a sequence of one-dimensional uniform filters. The intermediate arrays are stored in the same data type as the output. Therefore, for output types with a limited precision, the results may be imprecise because intermediate results may be stored with insufficient precision.

scipy.ndimage.filters.uniform_filter1d(input, size, axis=-1, output=None, mode='reflect', cval=0.0, origin=0)
    Calculate a one-dimensional uniform filter along the given axis.

    The lines of the array along the given axis are filtered with a uniform filter of given size.

    Parameters
    input : array-like
        input array to filter
    size : integer
        length of uniform filter
    axis : integer, optional
        axis of input along which to calculate. Default is -1.
    output : array, optional
        The output parameter passes an array in which to store the filter output.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

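As the rank parameter documentation suggests, the endpoints of rank_filter reproduce the minimum and maximum filters; a quick sketch (not part of the original docstrings):

```python
import numpy as np
from scipy import ndimage

rng = np.random.RandomState(42)
a = rng.rand(8, 8)

# rank 0 picks the smallest element of each 3x3 window,
# rank -1 picks the largest.
lo = ndimage.rank_filter(a, rank=0, size=3)
hi = ndimage.rank_filter(a, rank=-1, size=3)

print(np.allclose(lo, ndimage.minimum_filter(a, size=3)))  # True
print(np.allclose(hi, ndimage.maximum_filter(a, size=3)))  # True
```

Intermediate ranks give order statistics in between, e.g. rank=4 of a 3x3 window is the median.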
4.12.2 Fourier filters

fourier_ellipsoid(input, size[, n, axis, output])    Multi-dimensional ellipsoid fourier filter.
fourier_gaussian(input, sigma[, n, axis, output])    Multi-dimensional Gaussian fourier filter.
fourier_shift(input, shift[, n, axis, output])       Multi-dimensional fourier shift filter.
fourier_uniform(input, size[, n, axis, output])      Multi-dimensional uniform fourier filter.

scipy.ndimage.fourier.fourier_ellipsoid(input, size, n=-1, axis=-1, output=None)
    Multi-dimensional ellipsoid fourier filter.

    The array is multiplied with the fourier transform of an ellipsoid of given sizes.

    Parameters
    input : array_like
        The input array.
    size : float or sequence
        The size of the box used for filtering. If a float, size is the same for all axes. If a sequence, size has to contain one value for each axis.
    n : int, optional
        If n is negative (default), then the input is assumed to be the result of a complex fft. If n is larger than or equal to zero, the input is assumed to be the result of a real fft, and n gives the length of the array before transformation along the real transform direction.
    axis : int, optional
        The axis of the real transform.
    output : ndarray, optional
        If given, the result of filtering the input is placed in this array. None is returned in this case.

    Returns
    return_value : ndarray or None
        The filtered input. If output is given as a parameter, None is returned.

    Notes
    This function is implemented for arrays of rank 1, 2, or 3.

scipy.ndimage.fourier.fourier_gaussian(input, sigma, n=-1, axis=-1, output=None)
    Multi-dimensional Gaussian fourier filter.

    The array is multiplied with the fourier transform of a Gaussian kernel.

    Parameters
    input : array_like
        The input array.
    sigma : float or sequence
        The sigma of the Gaussian kernel. If a float, sigma is the same for all axes. If a sequence, sigma has to contain one value for each axis.
    n : int, optional
        If n is negative (default), then the input is assumed to be the result of a complex fft. If n is larger than or equal to zero, the input is assumed to be the result of a real fft, and n gives the length of the array before transformation along the real transform direction.
    axis : int, optional
        The axis of the real transform.
    output : ndarray, optional
        If given, the result of filtering the input is placed in this array. None is returned in this case.

    Returns
    return_value : ndarray or None
        The filtered input. If output is given as a parameter, None is returned.

scipy.ndimage.fourier.fourier_shift(input, shift, n=-1, axis=-1, output=None)
    Multi-dimensional fourier shift filter.

    The array is multiplied with the fourier transform of a shift operation.

    Parameters
    input : array_like
        The input array.
    shift : float or sequence
        The shift to apply along each axis. If a float, shift is the same for all axes. If a sequence, shift has to contain one value for each axis.
    n : int, optional
        If n is negative (default), then the input is assumed to be the result of a complex fft. If n is larger than or equal to zero, the input is assumed to be the result of a real fft, and n gives the length of the array before transformation along the real transform direction.
    axis : int, optional
        The axis of the real transform.
    output : ndarray, optional
        If given, the result of shifting the input is placed in this array. None is returned in this case.

    Returns
    return_value : ndarray or None
        The shifted input. If output is given as a parameter, None is returned.

scipy.ndimage.fourier.fourier_uniform(input, size, n=-1, axis=-1, output=None)
    Multi-dimensional uniform fourier filter.

    The array is multiplied with the fourier transform of a box of given size.

    Parameters
    input : array_like
        The input array.
    size : float or sequence
        The size of the box used for filtering. If a float, size is the same for all axes. If a sequence, size has to contain one value for each axis.
    n : int, optional
        If n is negative (default), then the input is assumed to be the result of a complex fft. If n is larger than or equal to zero, the input is assumed to be the result of a real fft, and n gives the length of the array before transformation along the real transform direction.
    axis : int, optional
        The axis of the real transform.
    output : ndarray, optional
        If given, the result of filtering the input is placed in this array. None is returned in this case.

    Returns
    return_value : ndarray or None
        The filtered input. If output is given as a parameter, None is returned.

4.12.3 Interpolation

affine_transform(input, matrix[, offset, ...])       Apply an affine transformation.
geometric_transform(input, mapping[, ...])           Apply an arbitrary geometric transform.
map_coordinates(input, coordinates[, ...])           Map the input array to new coordinates by interpolation.
rotate(input, angle[, axes, reshape, ...])           Rotate an array.
shift(input, shift[, output, order, ...])            Shift an array.
spline_filter(input[, order, output])                Multi-dimensional spline filter.
spline_filter1d(input[, order, axis, output])        Calculates a one-dimensional spline filter along the given axis.
zoom(input, zoom[, output, order, ...])              Zoom an array.

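The fourier_* functions operate on an already-transformed array: take the FFT, multiply by the filter's transfer function, and transform back. A quick sketch (not part of the original docstrings) using fourier_shift, where an integer shift amounts to a circular shift of the original data:

```python
import numpy as np
from scipy import ndimage

a = np.arange(8.0)

# fourier_shift works in the frequency domain, so the input is the
# (complex) FFT of the data, and the result must be inverse-transformed.
spectrum = np.fft.fft(a)
shifted = np.fft.ifft(ndimage.fourier_shift(spectrum, shift=2)).real

print(np.allclose(shifted, np.roll(a, 2)))  # True
```

Non-integer shifts are where the Fourier approach pays off: they produce band-limited sub-sample interpolation rather than the spline interpolation used by scipy.ndimage.shift.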
scipy.ndimage.interpolation.affine_transform(input, matrix, offset=0.0, output_shape=None, output=None, order=3, mode='constant', cval=0.0, prefilter=True)
    Apply an affine transformation.

    The given matrix and offset are used to find for each point in the output the corresponding coordinates in the input by an affine transformation. The value of the input at those coordinates is determined by spline interpolation of the requested order. Points outside the boundaries of the input are filled according to the given mode.

    Parameters
    input : ndarray
        The input array.
    matrix : ndarray
        The matrix must be two-dimensional or can also be given as a one-dimensional sequence or array. In the latter case, it is assumed that the matrix is diagonal. A more efficient algorithm is then applied that exploits the separability of the problem.
    offset : float or sequence, optional
        The offset into the array where the transform is applied. If a float, offset is the same for each axis. If a sequence, offset should contain one value for each axis.
    output_shape : tuple of ints, optional
        Shape tuple.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array.
    order : int, optional
        The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
    mode : str, optional
        Points outside the boundaries of the input are filled according to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). Default is 'constant'.
    cval : scalar, optional
        Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.
    prefilter : bool, optional
        The parameter prefilter determines if the input is pre-filtered with spline_filter before interpolation (necessary for spline interpolation of order > 1). If False, it is assumed that the input is already filtered. Default is True.

    Returns
    return_value : ndarray or None
        The transformed input. If output is given as a parameter, None is returned.

scipy.ndimage.interpolation.geometric_transform(input, mapping, output_shape=None, output=None, order=3, mode='constant', cval=0.0, prefilter=True, extra_arguments=(), extra_keywords={})
    Apply an arbitrary geometric transform.

    The given mapping function is used to find, for each point in the output, the corresponding coordinates in the input. The value of the input at those coordinates is determined by spline interpolation of the requested order. Points outside the boundaries of the input are filled according to the given mode.

    Parameters
    input : array_like
        The input array.
    mapping : callable
        A callable object that accepts a tuple of length equal to the output array rank, and returns the corresponding input coordinates as a tuple of length equal to the input array rank.
    output_shape : tuple of ints
        Shape tuple.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array.
    order : int, optional
        The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
    mode : str, optional
        Points outside the boundaries of the input are filled according to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). Default is 'constant'.
    cval : scalar, optional
        Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.
    prefilter : bool, optional
        The parameter prefilter determines if the input is pre-filtered with spline_filter before interpolation (necessary for spline interpolation of order > 1). If False, it is assumed that the input is already filtered. Default is True.
    extra_arguments : tuple, optional
        Extra arguments passed to mapping.
    extra_keywords : dict, optional
        Extra keywords passed to mapping.

    Returns
    return_value : ndarray or None
        The filtered input. If output is given as a parameter, None is returned.

    See Also
    map_coordinates, affine_transform, spline_filter1d

    Examples
    >>> a = np.arange(12.).reshape((4, 3))
    >>> def shift_func(output_coords):
    ...     return (output_coords[0] - 0.5, output_coords[1] - 0.5)
    ...
    >>> sp.ndimage.geometric_transform(a, shift_func)
    array([[ 0.   ,  0.   ,  0.   ],
           [ 0.   ,  1.362,  2.738],
           [ 0.   ,  4.812,  6.187],
           [ 0.   ,  8.263,  9.637]])

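An affine_transform sketch (not part of the original docstrings) that makes the "output maps back to input" convention concrete: with an identity matrix and an offset of one row, output[i, j] = input[i + 1, j], and rows pulled from past the edge are filled with cval under the default mode='constant'. It also exercises the documented one-dimensional (diagonal) matrix shortcut.

```python
import numpy as np
from scipy import ndimage

a = np.arange(12.).reshape(4, 3)

# Identity matrix + offset (1, 0): every output row comes from the
# input row one below it; the last row falls outside and becomes cval.
result = ndimage.affine_transform(a, np.eye(2), offset=(1, 0), order=1)
print(result)
# [[  3.   4.   5.]
#  [  6.   7.   8.]
#  [  9.  10.  11.]
#  [  0.   0.   0.]]

# A 1-D matrix is treated as the diagonal, triggering the faster
# separable code path; the result is identical here.
same = ndimage.affine_transform(a, np.ones(2), offset=(1, 0), order=1)
print(np.allclose(result, same))  # True
```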
scipy.ndimage.interpolation.map_coordinates(input, coordinates, output=None, order=3, mode='constant', cval=0.0, prefilter=True)

    Map the input array to new coordinates by interpolation.

    The array of coordinates is used to find, for each point in the output, the corresponding coordinates in the input. The value of the input at those coordinates is determined by spline interpolation of the requested order.

    The shape of the output is derived from that of the coordinate array by dropping the first axis. The values of the array along the first axis are the coordinates in the input array at which the output value is found.

    Parameters
    input : ndarray
        The input array.
    coordinates : array_like
        The coordinates at which input is evaluated.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array.
    order : int, optional
        The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
    mode : str, optional
        Points outside the boundaries of the input are filled according to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). Default is 'constant'.
    cval : scalar, optional
        Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.
    prefilter : bool, optional
        The parameter prefilter determines if the input is pre-filtered with spline_filter before interpolation (necessary for spline interpolation of order > 1). If False, it is assumed that the input is already filtered. Default is True.

    Returns
    return_value : ndarray
        The result of transforming the input. The shape of the output is derived from that of coordinates by dropping the first axis.

    See Also
    spline_filter, geometric_transform, scipy.interpolate

    Examples
    >>> from scipy import ndimage
    >>> a = np.arange(12.).reshape((4, 3))
    >>> a
    array([[  0.,   1.,   2.],
           [  3.,   4.,   5.],
           [  6.,   7.,   8.],
           [  9.,  10.,  11.]])
    >>> ndimage.map_coordinates(a, [[0.5, 2], [0.5, 1]], order=1)
    [ 2.  7.]

    Above, the interpolated value of a[0.5, 0.5] gives output[0], while a[2, 1] is output[1].

    >>> inds = np.array([[0.5, 2], [0.5, 4]])
    >>> ndimage.map_coordinates(a, inds, order=1, cval=-33.3)
    array([  2. , -33.3])
    >>> ndimage.map_coordinates(a, inds, order=1, mode='nearest')
    array([ 2.,  8.])
    >>> ndimage.map_coordinates(a, inds, order=1, output=bool)
    array([ True, False], dtype=bool)

scipy.ndimage.interpolation.rotate(input, angle, axes=(1, 0), reshape=True, output=None, order=3, mode='constant', cval=0.0, prefilter=True)

    Rotate an array.

    The array is rotated in the plane defined by the two axes given by the axes parameter using spline interpolation of the requested order.

    Parameters
    input : ndarray
        The input array.
    angle : float
        The rotation angle in degrees.
    axes : tuple of 2 ints, optional
        The two axes that define the plane of rotation. Default is the first two axes.
    reshape : bool, optional
        If reshape is true, the output shape is adapted so that the input array is contained completely in the output. Default is True.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array.
    order : int, optional
        The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
    mode : str, optional
        Points outside the boundaries of the input are filled according to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). Default is 'constant'.
    cval : scalar, optional
        Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.
    prefilter : bool, optional
        The parameter prefilter determines if the input is pre-filtered with spline_filter before interpolation (necessary for spline interpolation of order > 1). If False, it is assumed that the input is already filtered. Default is True.

    Returns
    return_value : ndarray or None
        The rotated input. If output is given as a parameter, None is returned.
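A quarter-turn is a convenient sanity check for rotate, because every sample lands back on a grid point and the spline interpolation reproduces the values essentially exactly. A minimal sketch (not one of the docstring's own examples):

```python
import numpy as np
from scipy import ndimage

a = np.array([[1., 2.],
              [3., 4.]])

# Rotate by 90 degrees in the plane of the first two axes; for a square
# input and an exact quarter-turn the shape is unchanged and the four
# values are merely permuted.
r = ndimage.rotate(a, 90)
```

Rotating the result back by -90 degrees recovers the original array to floating-point tolerance.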

‘numpy. order=3. Default is 0.ndimage. it is assumed that the input is already ﬁltered. see spline_ﬁlter1d.10. output : ndarray or dtype.shift(input.ﬂoat64’>) Calculates a one-dimensional spline ﬁlter along the given axis.0rc1 scipy. The array is shifted using spline interpolation of the requested order. optional The array in which to place the output. order=3. Default is ‘constant’. order : int.0. cval=0. If a sequence. optional The parameter preﬁlter determines if the input is pre-ﬁltered with spline_ﬁlter before interpolation (necessary for spline interpolation of order > 1). ‘nearest’. For more details. shift : ﬂoat or sequence. Multi-dimensional image processing (scipy. Points outside the boundaries of the input are ﬁlled according to the given mode. optional Points outside the boundaries of the input are ﬁlled according to the given mode (‘constant’.ndimage) 339 . If False. The order has to be in the range 0-5. output=None. default is 3. See Also: spline_filter1d Notes The multi-dimensional ﬁlter is implemented as a sequence of one-dimensional spline ﬁlters.interpolation. If a ﬂoat. Default is True.SciPy Reference Guide. order=3. preﬁlter=True) Shift an array. shift should contain one value for each axis. None is returned. shift. ‘numpy. If output is given as a parameter. scipy. Parameters input : ndarray The input array. optional The shift along the axes. optional Value used for points outside the boundaries of the input if mode=’constant’.spline_filter(input.interpolation. output=<type output=<type 4. Returns return_value : ndarray or None The shifted input. mode : str.ndimage. axis=-1.12.ndimage. cval : scalar.ﬂoat64’>) Multi-dimensional spline ﬁlter.interpolation. Release 0. or the dtype of the returned array. the results may be imprecise because intermediate results may be stored with insufﬁcient precision. Therefore. optional The order of the spline interpolation. for output types with a limited precision.0 preﬁlter : bool. ‘reﬂect’ or ‘wrap’). 
shift is the same for each axis. scipy. mode=’constant’. The intermediate arrays are stored in the same data type as the output.spline_filter1d(input.
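The shift convention described above (content moves in the positive direction, vacated positions are filled with cval) can be sketched with a small runnable example, separate from the docstring:

```python
import numpy as np
from scipy import ndimage

a = np.array([0., 1., 2., 3., 4.])

# An integer shift with order=1 lands on grid points, so the values are
# exact: output[i] = input[i - 2], and positions shifted in from the
# boundary are filled with cval (0.0 under the default mode='constant').
s = ndimage.shift(a, 2, order=1)
```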

    The lines of the array along the given axis are filtered by a spline filter. The order of the spline must be >= 2 and <= 5.

    Parameters
    input : array_like
        The input array.
    order : int, optional
        The order of the spline, default is 3.
    axis : int, optional
        The axis along which the spline filter is applied. Default is the last axis.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array. Default is numpy.float64.

    Returns
    return_value : ndarray or None
        The filtered input. If output is given as a parameter, None is returned.

scipy.ndimage.interpolation.zoom(input, zoom, output=None, order=3, mode='constant', cval=0.0, prefilter=True)

    Zoom an array.

    The array is zoomed using spline interpolation of the requested order.

    Parameters
    input : ndarray
        The input array.
    zoom : float or sequence, optional
        The zoom factor along the axes. If a float, zoom is the same for each axis. If a sequence, zoom should contain one value for each axis.
    output : ndarray or dtype, optional
        The array in which to place the output, or the dtype of the returned array.
    order : int, optional
        The order of the spline interpolation, default is 3. The order has to be in the range 0-5.
    mode : str, optional
        Points outside the boundaries of the input are filled according to the given mode ('constant', 'nearest', 'reflect' or 'wrap'). Default is 'constant'.
    cval : scalar, optional
        Value used for points outside the boundaries of the input if mode='constant'. Default is 0.0.
    prefilter : bool, optional
        The parameter prefilter determines if the input is pre-filtered with spline_filter before interpolation (necessary for spline interpolation of order > 1). If False, it is assumed that the input is already filtered. Default is True.
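A minimal sketch of the zoom factor's effect on output shape (a runnable example added here for illustration, not from the original docstring):

```python
import numpy as np
from scipy import ndimage

a = np.array([[0., 1.],
              [2., 3.]])

# A scalar factor of 2 doubles the size along every axis: (2, 2) -> (4, 4).
# With order=1 the corner samples of the input are reproduced at the
# corners of the output.
z = ndimage.zoom(a, 2, order=1)
```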

    Returns
    return_value : ndarray or None
        The zoomed input. If output is given as a parameter, None is returned.

4.12.4 Measurements

scipy.ndimage.measurements

center_of_mass(input[, labels, index])             Calculate the center of mass of the values of an array at labels.
extrema(input[, labels, index])                    Calculate the minimums and maximums of the values of an array at labels, along with their positions.
find_objects(input[, max_label])                   Find objects in a labeled array.
histogram(input, min, max, bins[, labels, index])  Calculate the histogram of the values of an array, optionally at labels.
label(input[, structure, output])                  Label features in an array.
maximum(input[, labels, index])                    Calculate the maximum of the values of an array over labeled regions.
maximum_position(input[, labels, index])           Find the positions of the maximums of the values of an array at labels.
mean(input[, labels, index])                       Calculate the mean of the values of an array at labels.
minimum(input[, labels, index])                    Calculate the minimum of the values of an array over labeled regions.
minimum_position(input[, labels, index])           Find the positions of the minimums of the values of an array at labels.
standard_deviation(input[, labels, index])         Calculate the standard deviation of the values of an n-D image array, optionally at specified sub-regions.
sum(input[, labels, index])                        Calculate the sum of the values of the array.
variance(input[, labels, index])                   Calculate the variance of the values of an n-D image array, optionally at specified sub-regions.
watershed_ift(input, markers[, structure, output]) Apply watershed from markers using a iterative forest transform algorithm.

scipy.ndimage.measurements.center_of_mass(input, labels=None, index=None)

    Calculate the center of mass of the values of an array at labels.

    Parameters
    input : ndarray
        Data from which to calculate center-of-mass.
    labels : ndarray, optional
        Labels for objects in input, as generated by ndimage.label. Dimensions must be the same as input.
    index : int or sequence of ints, optional
        Labels for which to calculate centers-of-mass. If not specified, all labels greater than zero are used.

    Returns
    centerofmass : tuple, or list of tuples
        Co-ordinates of centers-of-masses.

    Examples
    >>> a = np.array(([0,0,0,0],
    ...               [0,1,1,0],
    ...               [0,1,1,0],
    ...               [0,1,1,0]))
    >>> from scipy import ndimage
    >>> ndimage.measurements.center_of_mass(a)
    (2.0, 1.5)

    Calculation of multiple objects in an image

    >>> b = np.array(([0,1,1,0],
    ...               [0,1,0,0],
    ...               [0,0,0,0],
    ...               [0,0,1,1],
    ...               [0,0,1,1]))
    >>> lbl = ndimage.label(b)[0]
    >>> ndimage.measurements.center_of_mass(b, lbl, [1,2])
    [(0.33333333333333331, 1.3333333333333333), (3.5, 2.5)]
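For a uniform object the center of mass is simply the geometric centre, which makes a quick runnable check easy (a sketch added here for illustration):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((5, 5))
a[1:4, 1:4] = 1.0          # a uniform 3x3 square of "mass"

# The centroid of a uniform block is its geometric centre, here (2, 2).
com = ndimage.center_of_mass(a)
```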

maximum_position.0. 342 Chapter 4. 4.center_of_mass(a) (2. optional Labels of features in input.measurements. [9.5) Calculation of multiple objects in an image >>> b = np.0. 1. lbl.0])) >>> from scipy import ndimage >>> ndimage. [0. 9.0rc1 [0. See Also: maximum. 4].0. 2. 3]).0].1. minimum_position. maximums : int or ndarray Values of minimums and maximums in each feature.0].0]. If None (default). must be same shape as input. array([5. (3. 0. 7].1.0. Returns minimums. 0)) Features to process can be speciﬁed using labels and index: >>> lbl.5)] scipy. [5.1]. minimum. 3.measurements.center_of_mass(b.0. [0.extrema(a) (0. nlbl = ndimage.10. all values where non-zero labels are used. 7.extrema(input. center_of_mass Examples >>> a = np. 2.array(([0. 1. max_positions : tuple or list of tuples Each tuple gives the n-D coordinates of the corresponding minimum or maximum. [0. min_positions. 0. optional Labels to include in output. index=None) Calculate the minimums and maximums of the values of an array at labels. 0.1. lbl. index : int or sequence of ints.2]) [(0. 9]).5.1. nlbl+1)) (array([1. labels : ndarray.arange(1.ndimage.1])) >>> lbl = ndimage. labels=None.SciPy Reference Guide. (3.1. [0. [0. 3. Release 0.1. 0]. [1.label(a) >>> ndimage. (0.array([[1. index=np.extrema(a. Parameters input : ndarray Nd-image data to process.3333333333333333). Reference . 2).0.1. 0]]) >>> from scipy import ndimage >>> ndimage. If not None. 0.measurements.label(b)[0] >>> ndimage.33333333333333331. along with their positions. 0.

[0. None)).12. 0. 2. (3. max_label : int. max_label=2) [(slice(2. 3. slice(0. [0.0rc1 [(0. (3. 5. 2. (3. [(1. 3. 2. center_of_mass Notes This function is very useful for isolating a volume of interest inside a 3-D array. bins. 0. 5. None)). max.SciPy Reference Guide. slice(0. None). 0. 0]]) >>> ndimage. dtype=np. 0. 1. non-zero labels are processed: >>> ndimage. 0. None] scipy. 5] = 3 >>> a array([[2. 3].6). max. If max_label is not given. 1. None).0.0).0.0.int) >>> a[2:4. None)). optionally at labels. 0. Parameters input : ndarray of ints Array containing objects deﬁned by different labels.measurements. 0.find_objects(input. 3.ndimage) 343 . If a number is missing.histogram(input. 0. 5. 2. 9. 1. labels=None. (slice(0.0)]) If no index is given. 0]. Returns object_slices : list of slices A list of slices. 0)) scipy. lbl) (1. 0. 0]. None). and bins. (0. 0. slice(2.find_objects(a) [(slice(2. Release 0. Slices correspond to the minimal parallelepiped that contains the object. 0. 5. optional Maximum label to be searched for in input.0)]. (slice(0.ndimage. 4. 2:4] = 1 >>> a[4. 0]. >>> ndimage. Labels and index can limit the scope of the histogram to speciﬁed sub-regions within the array. index=None) Calculate the histogram of the values of an array. [0. that cannot be “seen through”.ndimage.find_objects(a == 1. 0].10. Examples >>> a = np. None).0). 0. 4] = 1 >>> a[:2.extrema(a.measurements. 1. 2. max_label=0) Find objects in a labeled array. 5. Histogram calculates the frequency of values in an array within bins determined by min. None). 2. 0. 0. slice(2. (slice(0. [0. one for the extent of each labeled object.find_objects(a.0). 1. 5. None)).0). (2. 0).zeros((6. None))] >>> ndimage. [2. slice(2.0.0. 0. the positions of all objects are returned. :3] = 2 >>> a[0. Multi-dimensional image processing (scipy. 3. (1. 0. 0. None is returned instead of a slice. 1.0. max_label=2) [(slice(2. 0. min. See Also: label. 1.

    Parameters
    input : array_like
        Data for which to calculate histogram.
    min, max : int
        Minimum and maximum values of range of histogram bins.
    bins : int
        Number of bins.
    labels : array_like, optional
        Labels for objects in input. If not None, must be same shape as input.
    index : int or sequence of ints, optional
        Label or labels for which to calculate histogram. If None, all values where label is greater than zero are used.

    Returns
    hist : ndarray
        Histogram counts.

    Examples
    >>> a = np.array([[ 0.    ,  0.2146,  0.5962,  0.    ],
    ...               [ 0.    ,  0.7778,  0.    ,  0.    ],
    ...               [ 0.    ,  0.    ,  0.    ,  0.    ],
    ...               [ 0.    ,  0.    ,  0.7181,  0.2787]])
    >>> from scipy import ndimage
    >>> ndimage.histogram(a, 0, 1, 10)
    array([13,  0,  2,  1,  0,  1,  1,  2,  0,  0])

    With labels and no indices, non-zero elements are counted:

    >>> lbl, nlbl = ndimage.label(a)
    >>> ndimage.histogram(a, 0, 1, 10, lbl)
    array([0, 0, 2, 1, 0, 1, 1, 2, 0, 0])

    Indices can be used to count only certain objects:

    >>> ndimage.histogram(a, 0, 1, 10, lbl, 2)
    array([0, 0, 1, 0, 0, 0, 1, 0, 0, 0])

scipy.ndimage.measurements.label(input, structure=None, output=None)

    Label features in an array.

    Parameters
    input : array_like
        An array-like object to be labeled. Any non-zero values in input are counted as features and zero values are considered the background.
    structure : array_like, optional
        A structuring element that defines feature connections. structure must be symmetric. If no structuring element is provided, one is automatically generated with a squared connectivity equal to one. That is, for a 2-D input array, the default structuring element is:

        [[0,1,0],
         [1,1,1],
         [0,1,0]]
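The bin/label interaction of histogram above can be sketched with a tiny runnable example (illustrative only, using values chosen here rather than taken from the manual):

```python
import numpy as np
from scipy import ndimage

a = np.array([0.1, 0.1, 0.4, 0.8])
labels = np.array([1, 1, 2, 2])

# Two equal bins over [0, 1): values below 0.5 fall in bin 0.
h_all = ndimage.histogram(a, 0, 1, 2)

# Restricting to label 1 counts only the first two values.
h_lbl1 = ndimage.histogram(a, 0, 1, 2, labels, 1)
```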

    output : (None, data-type, array_like), optional
        If output is a data type, it specifies the type of the resulting labeled feature array. If output is an array-like object, then output will be updated with the labeled features from this function.

    Returns
    labeled_array : array_like
        An array-like object where each unique feature has a unique value.
    num_features : int
        How many objects were found.

    If output is None or a data type, this function returns a tuple, (labeled_array, num_features). If output is an array, then it will be updated with values in labeled_array and only num_features will be returned by this function.

    See Also
    find_objects : generate a list of slices for the labeled features (or objects); useful for finding features' position or dimensions

    Examples
    Create an image with some features, then label it using the default (cross-shaped) structuring element:

    >>> a = array([[0,0,1,1,0,0],
    ...            [0,0,0,1,0,0],
    ...            [1,1,0,0,1,0],
    ...            [0,0,0,1,0,0]])
    >>> labeled_array, num_features = label(a)

    Each of the 4 features are labeled with a different integer:

    >>> print num_features
    4
    >>> print labeled_array
    array([[0, 0, 1, 1, 0, 0],
           [0, 0, 0, 1, 0, 0],
           [2, 2, 0, 0, 3, 0],
           [0, 0, 0, 4, 0, 0]])

    Generate a structuring element that will consider features connected even if they touch diagonally:

    >>> s = generate_binary_structure(2,2)

    or,

    >>> s = [[1,1,1],
    ...      [1,1,1],
    ...      [1,1,1]]
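The effect of the two connectivities is easiest to see on a purely diagonal pattern; with the default cross-shaped element the diagonal pixels are separate features, with full connectivity they merge into one. A minimal runnable sketch (separate from the docstring's example):

```python
import numpy as np
from scipy import ndimage

a = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1]])

# Default cross-shaped structure: diagonal touches are NOT connections.
l1, n1 = ndimage.label(a)

# Full 3x3 structure (connectivity 2): diagonal neighbours connect.
s = ndimage.generate_binary_structure(2, 2)
l2, n2 = ndimage.label(a, structure=s)
```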

    Label the image using the new structuring element:

    >>> labeled_array, num_features = label(a, structure=s)

    Show the 2 labeled features (note that features 1, 3, and 4 from above are now considered a single feature):

    >>> print num_features
    2
    >>> print labeled_array
    array([[0, 0, 1, 1, 0, 0],
           [0, 0, 0, 1, 0, 0],
           [2, 2, 0, 0, 1, 0],
           [0, 0, 0, 1, 0, 0]])

scipy.ndimage.measurements.maximum(input, labels=None, index=None)

    Calculate the maximum of the values of an array over labeled regions.

    Parameters
    input : array_like
        Array_like of values. For each region specified by labels, the maximal values of input over the region is computed.
    labels : array_like, optional
        An array of integers marking different regions over which the maximum value of input is to be computed. labels must have the same shape as input. If labels is not specified, the maximum over the whole array is returned.
    index : array_like, optional
        A list of region labels that are taken into account for computing the maxima. If index is None, the maximum over all elements where labels is non-zero is returned.

    Returns
    output : float or list of floats
        List of maxima of input over the regions determined by labels and whose index is in index. If index or labels are not specified, a float is returned: the maximal value of input if labels is None, and the maximal value of elements where labels is greater than zero if index is None.

    See Also
    label, minimum, median, maximum_position, extrema, sum, mean, variance, standard_deviation

    Notes
    The function returns a Python list and not a Numpy array, use np.array to convert the list to an array.

    Examples
    >>> a = np.arange(16).reshape((4,4))
    >>> a
    array([[ 0,  1,  2,  3],
           [ 4,  5,  6,  7],
           [ 8,  9, 10, 11],
           [12, 13, 14, 15]])
    >>> labels = np.zeros_like(a)
    >>> labels[:2,:2] = 1
    >>> labels[2:, 1:3] = 2

    >>> labels
    array([[1, 1, 0, 0],
           [1, 1, 0, 0],
           [0, 2, 2, 0],
           [0, 2, 2, 0]])
    >>> from scipy import ndimage
    >>> ndimage.maximum(a)
    15.0
    >>> ndimage.maximum(a, labels=labels, index=[1,2])
    [5.0, 14.0]
    >>> ndimage.maximum(a, labels=labels)
    14.0

    >>> b = np.array([[1, 2, 0, 0],
    ...               [5, 3, 0, 4],
    ...               [0, 0, 0, 7],
    ...               [9, 3, 0, 0]])
    >>> labels, labels_nb = ndimage.label(b)
    >>> labels
    array([[1, 1, 0, 0],
           [1, 1, 0, 2],
           [0, 0, 0, 2],
           [3, 3, 0, 0]])
    >>> ndimage.maximum(b, labels=labels, index=np.arange(1, labels_nb + 1))
    [5.0, 7.0, 9.0]

scipy.ndimage.measurements.maximum_position(input, labels=None, index=None)

    Find the positions of the maximums of the values of an array at labels.

scipy.ndimage.measurements.mean(input, labels=None, index=None)

    Calculate the mean of the values of an array at labels.

    Parameters
    input : array_like
        Array on which to compute the mean of elements over distinct regions.
    labels : array_like, optional
        Array of labels of same shape, or broadcastable to the same shape as input. All elements sharing the same label form one region over which the mean of the elements is computed.
    index : int or sequence of ints, optional
        Labels of the objects over which the mean is to be computed. Default is None, in which case the mean for all values where label is greater than 0 is calculated.

    Returns
    out : list
        Sequence of same length as index, with the mean of the different regions labeled by the labels in index.

    See Also
    ndimage.variance, ndimage.standard_deviation, ndimage.minimum, ndimage.maximum, ndimage.sum, ndimage.label

    Examples
    >>> a = np.arange(25).reshape((5,5))
    >>> labels = np.zeros_like(a)
    >>> labels[3:5,3:5] = 1
    >>> index = np.unique(labels)
    >>> labels
    array([[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 1, 1],
           [0, 0, 0, 1, 1]])
    >>> index
    array([0, 1])
    >>> ndimage.mean(a, labels=labels, index=index)
    [10.285714285714286, 21.0]

scipy.ndimage.measurements.minimum(input, labels=None, index=None)

    Calculate the minimum of the values of an array over labeled regions.

    Parameters
    input : array_like
        Array_like of values. For each region specified by labels, the minimal values of input over the region is computed.
    labels : array_like, optional
        An array_like of integers marking different regions over which the minimum value of input is to be computed. labels must have the same shape as input. If labels is not specified, the minimum over the whole array is returned.
    index : array_like, optional
        A list of region labels that are taken into account for computing the minima. If index is None, the minimum over all elements where labels is non-zero is returned.

    Returns
    output : float or list of floats
        List of minima of input over the regions determined by labels and whose index is in index. If index or labels are not specified, a float is returned: the minimal value of input if labels is None, and the minimal value of elements where labels is greater than zero if index is None.

    See Also
    label, maximum, median, minimum_position, extrema, sum, mean, variance, standard_deviation

    Notes
    The function returns a Python list and not a Numpy array, use np.array to convert the list to an array.
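The labeled-statistics pattern shared by mean, minimum and maximum (a label array plus an index list selecting regions) can be sketched compactly; this is an illustrative example, not one of the manual's own:

```python
import numpy as np
from scipy import ndimage

a = np.arange(9).reshape(3, 3)
labels = np.array([[1, 1, 0],
                   [1, 1, 0],
                   [0, 0, 2]])

# Region 1 covers the values 0, 1, 3, 4 (mean 2.0); region 2 covers 8.
means = ndimage.mean(a, labels=labels, index=[1, 2])
```

The same labels/index arguments work identically for ndimage.minimum, ndimage.maximum and ndimage.sum.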

    Examples
    >>> a = np.array([[1, 2, 0, 0],
    ...               [5, 3, 0, 4],
    ...               [0, 0, 0, 7],
    ...               [9, 3, 0, 0]])
    >>> labels, labels_nb = ndimage.label(a)
    >>> labels
    array([[1, 1, 0, 0],
           [1, 1, 0, 2],
           [0, 0, 0, 2],
           [3, 3, 0, 0]])
    >>> ndimage.minimum(a, labels=labels, index=np.arange(1, labels_nb + 1))
    [1.0, 4.0, 3.0]
    >>> ndimage.minimum(a)
    0.0
    >>> ndimage.minimum(a, labels=labels)
    1.0

scipy.ndimage.measurements.minimum_position(input, labels=None, index=None)

    Find the positions of the minimums of the values of an array at labels.

scipy.ndimage.measurements.standard_deviation(input, labels=None, index=None)

    Calculate the standard deviation of the values of an n-D image array, optionally at specified sub-regions.

    Parameters
    input : array_like
        Nd-image data to process.
    labels : array_like, optional
        Labels to identify sub-regions in input. If not None, must be same shape as input.
    index : int or sequence of ints, optional
        Labels to include in output. If None (default), all values where labels is non-zero are used.

    Returns
    std : float or ndarray
        Values of standard deviation, for each sub-region if labels and index are specified.

    See Also
    label, variance, maximum, minimum, extrema

    Examples
    >>> a = np.array([[1, 2, 0, 0],
    ...               [5, 3, 0, 4],
    ...               [0, 0, 0, 7],
    ...               [9, 3, 0, 0]])
    >>> ndimage.standard_deviation(a)
    2.7585095613392387

    Features to process can be specified using labels and index:

    >>> lbl, nlbl = ndimage.label(a)
    >>> ndimage.standard_deviation(a, lbl, index=np.arange(1, nlbl+1))
    array([ 1.479,  1.5  ,  3.   ])

    If no index is given, non-zero labels are processed:

    >>> ndimage.standard_deviation(a, lbl)
    2.4874685927665499

scipy.ndimage.measurements.sum(input, labels=None, index=None)

    Calculate the sum of the values of the array.

    Parameters
    input : array_like
        Values of input inside the regions defined by labels are summed together.
    labels : array_like of ints, optional
        Assign labels to the values of the array. Has to have the same shape as input.
    index : scalar or array_like, optional
        A single label number or a sequence of label numbers of the objects to be measured.

    Returns
    output : list
        A list of the sums of the values of input inside the regions defined by labels.

    See Also
    mean, median

    Examples
    >>> input = [0,1,2,3]
    >>> labels = [1,1,2,2]
    >>> sum(input, labels, index=[1,2])
    [1.0, 5.0]

scipy.ndimage.measurements.variance(input, labels=None, index=None)

    Calculate the variance of the values of an n-D image array, optionally at specified sub-regions.

    Parameters
    input : array_like
        Nd-image data to process.
    labels : array_like, optional
        Labels defining sub-regions in input. If not None, must be same shape as input.
    index : int or sequence of ints, optional
        Labels to include in output. If None (default), all values where labels is non-zero are used.

    Returns
    vars : float or ndarray
        Values of variance, for each sub-region if labels and index are specified.

    See Also
    label, standard_deviation, maximum, minimum, extrema
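A quick runnable sketch of sum and variance (values chosen here for illustration; not from the manual's examples):

```python
import numpy as np
from scipy import ndimage

a = np.array([1., 2., 3., 4.])
labels = np.array([1, 1, 2, 2])

# Per-region sums: region 1 -> 1 + 2, region 2 -> 3 + 4.
sums = ndimage.sum(a, labels, index=[1, 2])

# Without labels, variance is taken over the whole array:
# mean 2.5, so var = ((1.5)^2 + (0.5)^2 + (0.5)^2 + (1.5)^2) / 4 = 1.25.
var_all = ndimage.variance(a)
```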

    Examples
    >>> a = np.array([[1, 2, 0, 0],
    ...               [5, 3, 0, 4],
    ...               [0, 0, 0, 7],
    ...               [9, 3, 0, 0]])
    >>> from scipy import ndimage
    >>> ndimage.variance(a)
    7.609375

    Features to process can be specified using labels and index:

    >>> lbl, nlbl = ndimage.label(a)
    >>> ndimage.variance(a, lbl, index=np.arange(1, nlbl+1))
    array([ 2.1875,  2.25  ,  9.    ])

    If no index is given, all non-zero labels are processed:

    >>> ndimage.variance(a, lbl)
    6.1875

scipy.ndimage.measurements.watershed_ift(input, markers, structure=None, output=None)

    Apply watershed from markers using a iterative forest transform algorithm.

    Negative markers are considered background markers which are processed after the other markers. A structuring element defining the connectivity of the object can be provided. If none is provided, an element is generated with a squared connectivity equal to one. An output array can optionally be provided.

4.12.5 Morphology

scipy.ndimage.morphology

binary_closing(input[, structure, ...])            Multi-dimensional binary closing with the given structuring element.
binary_dilation(input[, structure, ...])           Multi-dimensional binary dilation with the given structuring element.
binary_erosion(input[, structure, ...])            Multi-dimensional binary erosion with a given structuring element.
binary_fill_holes(input[, structure, ...])         Fill the holes in binary objects.
binary_hit_or_miss(input[, structure1, ...])       Multi-dimensional binary hit-or-miss transform.
binary_opening(input[, structure, ...])            Multi-dimensional binary opening with the given structuring element.
binary_propagation(input[, structure, mask, ...])  Multi-dimensional binary propagation with the given structuring element.
black_tophat(input[, size, footprint, ...])        Multi-dimensional black tophat filter.
distance_transform_bf(input[, metric, ...])        Distance transform function by a brute force algorithm.
distance_transform_cdt(input[, metric, ...])       Distance transform for chamfer type of transforms.
distance_transform_edt(input[, sampling, ...])     Exact euclidean distance transform.
generate_binary_structure(rank, connectivity)      Generate a binary structure for binary morphological operations.
grey_closing(input[, size, footprint, ...])        Multi-dimensional greyscale closing.
grey_dilation(input[, size, footprint, ...])       Calculate a greyscale dilation, using either a structuring element, or a footprint corresponding to a flat structuring element.
grey_erosion(input[, size, footprint, ...])        Calculate a greyscale erosion, using either a structuring element, or a footprint corresponding to a flat structuring element.
grey_opening(input[, size, footprint, ...])        Multi-dimensional greyscale opening.
iterate_structure(structure, iterations[, ...])    Iterate a structure by dilating it with itself.
morphological_gradient(input[, size, ...])         Multi-dimensional morphological gradient.
morphological_laplace(input[, size, ...])          Multi-dimensional morphological laplace.
white_tophat(input[, size, footprint, ...])        Multi-dimensional white tophat filter.

scipy.ndimage.morphology.binary_closing(input, structure=None, iterations=1, output=None, origin=0)

    Multi-dimensional binary closing with the given structuring element.

    The closing of an input image by a structuring element is the erosion of the dilation of the image by the structuring element.

    Parameters
    input : array_like
        Binary array_like to be closed. Non-zero (True) elements form the subset to be closed.
structure. . .. footprint.. size. using either a structuring element. . sampling. or a footprint. Fill the holes in binary objects. structure.. Multi-dimensional white tophat ﬁlter..]) distance_transform_bf(input[.. Iterate a structure by dilating it with itself. Multi-dimensional binary hit-or-miss transform. footprint.Distance transform function by a brute force algorithm.. .. . metric.]) binary_propagation(input[. Multi-dimensional greyscale opening. structure1.. Calculate a greyscale erosion.morphology. Calculate a greyscale dilation.]) binary_opening(input[...

by default). a[2. 0]]) >>> # Closing removes small holes >>> ndimage. 1. binary_erosion. 1.int) >>> a[1:-1. References [R33]. 0. diagonallyconnected elements are not considered neighbors). 1:-1] = 1.2] = 0 >>> a array([[0. See Also: grey_closing. 0. 0.12. Release 0.10.e. optional Structuring element used for the closing.5). binary_opening. only nearest neighbors are connected to the center. a new array is created. 0]. Multi-dimensional image processing (scipy. [R34] Examples >>> a = np. 1. 0. 0.. generate_binary_structure Notes Closing [R33] is a mathematical morphology operation [R34] that consists in the succession of a dilation and an erosion of the input with the same structuring element. 1. Together with opening (binary_opening). [0. 1. 1. 0]. 1. 0]. 1.int) binary_dilation.ndimage) 353 . 0]]) >>> # Closing is the erosion of the dilation of the input >>> ndimage. [0. By default. origin : int or tuple of ints. 0. 0. [0.int) array([[0. 0. 0. each operations is repeated until the result does not change anymore. 1. 1.binary_closing(a).SciPy Reference Guide. dtype=np. 0. closing can be used for noise removal. ﬂoat}. then the erosion step are each repeated iterations times (one. Returns out : ndarray of bools Closing of the input by the structuring element. optional Array of the same shape as input. If no structuring element is provided an element is generated with a square connectivity equal to one (i. optional Placement of the ﬁlter. 0].astype(np. output : ndarray. 0].astype(np. 1. [0. into which the output is placed.binary_dilation(a). 0]. 1. 0]. iterations : {int. 1. 1. 0. 1. Non-zero elements are considered True.0rc1 structure : array_like. If iterations is less than 1. [0.zeros((5. [0. 1. Closing therefore ﬁlls holes smaller than the structuring element. 4. 0. by default 0. 0. [0. [0. 1. 0]. optional The dilation step of the closing.

1. 1. Non-zero elements are considered True. Non-zero (True) elements form the subset to be dilated. structure : array_like. 0. 0]. 0. [1. 1. 0. 1.binary_closing(a). 0. 0. 1. 0. dtype=np. 0. 0]. 1. 1. 0. 1.0rc1 array([[0. 0. 0. 0]. 0]. 1. 0. 1. 1.morphology. 0. 0. 1. [0. mask=None. >>> ndimage. 1. 0. 1. optional 354 Chapter 4. 0. 1. 0. 1. 1. 0. 1. 0. [0. 0. 0. [0. Parameters input : array_like Binary array_like to be dilated. optional Structuring element used for the dilation. 0. 0. 0]. 0. 0.2))). 1. [0. 0. 0. 1. 1. 1. [0. 1. a[1:3. [0. border_value=0. [0. iterations=1. 0. 1.ones((2. 0. 1. iterations : {int. 1. 0. 1. 0. 0. 0. [0. 0]]) >>> ndimage. 0]]) >>> # In addition to removing holes.SciPy Reference Guide. 0]. 0.zeros((7. [0.binary_dilation(a)). [0. 1.3] = 0 >>> a array([[0. 0]. [0. 1. [0. 0]. 0. 1. Release 0. [0. 1. 0. 0. [0. 1. [0. 1. 1]. closing can also >>> # coarsen boundaries with fine hollows. 0. 0. 1. 0. 0].ndimage. If no structuring element is provided an element is generated with a square connectivity equal to one. Reference . 1. 0]. 1.binary_closing(a. 1. 0]. 1. 0.astype(np. 0]. 0. 1. 1. 0.7). [0. 0]. [0. [0. 0. 0.int) array([[0. [1. 1. 0. 0. 0]]) scipy. 0. [0. structure=np. 1. [0. 1. [0. 0]. [0. 1. 0]. 1. 1. 1.astype(np. 0. 0. 0. 1. 1. 1.binary_erosion(ndimage. 1].int) array([[0. 0]. 1. 1. 1. brute_force=False) Multi-dimensional binary dilation with the given structuring element. 1. 0. 1. 0]. [0. 0. 0. 1. 1.int) array([[0. 1. 0. 1. 1. 0. 0. 1. output=None. 0]]) >>> a = np. 0. 0]. 0].astype(np.int) >>> a[1:6. 0.binary_dilation(input. ﬂoat}. [1. 1. 0. 0. 0. 0]. 0. 1. origin=0. 0]. 1. structure=None. 1. 0]. 0. 0]. 0. 0. 2:5] = 1. 0. 0. 0]]) >>> ndimage. 0. 1.10. 1]. 0.

. False.0rc1 The dilation is repeated iterations times (one. [ 0. [ 0. [False. a new array is created. 0.. 0..]]) >>> ndimage. 0.. border_value : int (cast to 0 or 1) Value at the border in the output array.binary_dilation(a) array([[False. False]]. 0. 0. [ 0.. 0. 0... 0.]. when its center lies within the non-zero points of the image. False. 0... 0.. False.12.. 0. References [R35].. 1.]. 0. See Also: grey_dilation. False.]. [False. 0. optional Array of the same shape as input. 0... optional If a mask is given... [ 0. 0. [False. 0.]. [R36] Examples >>> a = np. 0. [False. [ 0.. By default. Release 0. True.. 2] = 1 >>> a array([[ 0. 0. 0. True. 0. True.. 0. False.. origin : int or tuple of ints.]. 0. 0.. 0.zeros((5.SciPy Reference Guide... Returns out : ndarray of bools Dilation of the input by the structuring element. [ 0. optional Placement of the ﬁlter. 0. 1..]. 0. 4. False]. 0. 0. 1. [ 0. 0. 0. Multi-dimensional image processing (scipy. 0. False]..astype(a.. 1. binary_erosion. output : ndarray.ndimage) 355 . False.. the dilation is repeated until the result does not change anymore. False.10. False]. mask : array_like. [ 0. 0. The binary dilation of an image by a structuring element is the locus of the points covered by the structuring element... by default 0.. False.]. 0. If iterations is less than 1.. False].. by default). True. False. 1. only those elements with a True value at the corresponding mask element are modiﬁed at each iteration..].binary_dilation(a). 0.. False. 5)) >>> a[2. into which the output is placed.. 1. dtype=bool) >>> ndimage.. True.... generate_binary_structure Notes Dilation [R35] is a mathematical morphology operation [R36] that uses a structuring element for expanding the shapes in an image.. binary_opening.dtype) array([[ 0.]]) binary_closing.

    >>> # 3x3 structuring element with connectivity 1, used by default
    >>> struct1 = ndimage.generate_binary_structure(2, 1)
    >>> # 3x3 structuring element with connectivity 2
    >>> struct2 = ndimage.generate_binary_structure(2, 2)
    >>> ndimage.binary_dilation(a, structure=struct1).astype(a.dtype)
    array([[ 0.,  0.,  0.,  0.,  0.],
           [ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  0.,  0.,  0.,  0.]])
    >>> ndimage.binary_dilation(a, structure=struct2).astype(a.dtype)
    array([[ 0.,  0.,  0.,  0.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  0.,  0.,  0.,  0.]])
    >>> ndimage.binary_dilation(a, structure=struct1, iterations=2).astype(a.dtype)
    array([[ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 1.,  1.,  1.,  1.,  1.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  0.,  1.,  0.,  0.]])

scipy.ndimage.morphology.binary_erosion(input, structure=None, iterations=1, mask=None, output=None, border_value=0, origin=0, brute_force=False)
    Multi-dimensional binary erosion with a given structuring element.
    Binary erosion is a mathematical morphology operation used for image processing.

    Parameters
    input : array_like
        Binary image to be eroded. Non-zero (True) elements form the subset to be eroded.
    structure : array_like, optional
        Structuring element used for the erosion. Non-zero elements are considered True. If no structuring element is provided, an element is generated with a square connectivity equal to one.
    iterations : {int, float}, optional
        The erosion is repeated iterations times (one, by default). If iterations is less than 1, the erosion is repeated until the result does not change anymore.
    mask : array_like, optional
        If a mask is given, only those elements with a True value at the corresponding mask element are modified at each iteration.
    output : ndarray, optional

        Array of the same shape as input, into which the output is placed. By default, a new array is created.
    border_value : int (cast to 0 or 1)
        Value at the border in the output array.
    origin : int or tuple of ints, optional
        Placement of the filter, by default 0.

    Returns
    out : ndarray of bools
        Erosion of the input by the structuring element.

    See Also:
    grey_erosion, binary_dilation, binary_closing, binary_opening, generate_binary_structure

    Notes
    Erosion [R37] is a mathematical morphology operation [R38] that uses a structuring element for shrinking the shapes in an image. The binary erosion of an image by a structuring element is the locus of the points where a superimposition of the structuring element centered on the point is entirely contained in the set of non-zero elements of the image.

    References
    [R37], [R38]

    Examples
    >>> a = np.zeros((7, 7), dtype=int)
    >>> a[1:6, 2:5] = 1
    >>> ndimage.binary_erosion(a).astype(a.dtype)
    array([[0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 1, 0, 0, 0],
           [0, 0, 0, 1, 0, 0, 0],
           [0, 0, 0, 1, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0, 0, 0]])
    >>> # Erosion removes objects smaller than the structure
    >>> ndimage.binary_erosion(a, structure=np.ones((5, 5))).astype(a.dtype)
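The shrinking behavior described above can be verified directly (a minimal sketch, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((7, 7), dtype=int)
a[1:6, 2:5] = 1  # a 5x3 rectangle of ones

# Default cross structure: only pixels whose full 4-neighborhood is
# foreground survive -- the one-pixel-wide inner core of the rectangle.
core = ndimage.binary_erosion(a)

# A structuring element larger than the object erases it completely.
gone = ndimage.binary_erosion(a, structure=np.ones((5, 5)))
```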

scipy.ndimage.morphology.binary_fill_holes(input, structure=None, output=None, origin=0)
    Fill the holes in binary objects.

    Parameters
    input : array_like
        n-dimensional binary array with holes to be filled.
    structure : array_like, optional
        Structuring element used in the computation; large-size elements make computations faster but may miss holes separated from the background by thin regions. The default element (with a square connectivity equal to one) yields the intuitive result where all holes in the input have been filled.
    output : ndarray, optional
        Array of the same shape as input, into which the output is placed. By default, a new array is created.
    origin : int, tuple of ints, optional
        Position of the structuring element.

    Returns
    out : ndarray
        Transformation of the initial image input where holes have been filled.

    See Also:
    binary_dilation, binary_propagation, label

    Notes
    The algorithm used in this function consists in invading the complementary of the shapes in input from the outer boundary of the image, using binary dilations. Holes are not connected to the boundary and are therefore not invaded. The result is the complementary subset of the invaded region.

    References
    [R39]

    Examples
    >>> a = np.zeros((5, 5), dtype=int)
    >>> a[1:4, 1:4] = 1
    >>> a[2, 2] = 0
    >>> ndimage.binary_fill_holes(a).astype(int)
    array([[0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0]])
    >>> # Too big structuring element
    >>> ndimage.binary_fill_holes(a, structure=np.ones((5, 5))).astype(int)
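The hole-filling example above can be checked as a self-contained sketch (not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((5, 5), dtype=int)
a[1:4, 1:4] = 1
a[2, 2] = 0  # a one-pixel hole inside the square

# The hole is not connected to the image boundary, so the outside
# "invasion" never reaches it and it gets filled.
filled = ndimage.binary_fill_holes(a)
```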

scipy.ndimage.morphology.binary_hit_or_miss(input, structure1=None, structure2=None, output=None, origin1=0, origin2=None)
    Multi-dimensional binary hit-or-miss transform.
    The hit-or-miss transform finds the locations of a given pattern inside the input image.

    Parameters
    input : array_like (cast to booleans)
        Binary image where a pattern is to be detected.
    structure1 : array_like (cast to booleans), optional
        Part of the structuring element to be fitted to the foreground (non-zero elements) of input. If no value is provided, a structure of square connectivity 1 is chosen.
    structure2 : array_like (cast to booleans), optional
        Second part of the structuring element that has to miss completely the foreground. If no value is provided, the complementary of structure1 is taken.
    output : ndarray, optional
        Array of the same shape as input, into which the output is placed. By default, a new array is created.
    origin1 : int or tuple of ints, optional
        Placement of the first part of the structuring element structure1, by default 0 for a centered structure.
    origin2 : int or tuple of ints, optional
        Placement of the second part of the structuring element structure2, by default 0 for a centered structure. If a value is provided for origin1 and not for origin2, then origin2 is set to origin1.

    Returns
    output : ndarray
        Hit-or-miss transform of input with the given structuring element (structure1, structure2).

    See Also:
    ndimage.morphology, binary_erosion

    References
    [R40]

    Examples
    >>> a = np.zeros((7, 7), dtype=int)
    >>> a[1, 1] = 1; a[2:4, 2:4] = 1; a[4:6, 4:6] = 1

    >>> structure1 = np.array([[1, 0, 0], [0, 1, 1], [0, 1, 1]])
    >>> # Find the matches of structure1 in the array a
    >>> ndimage.binary_hit_or_miss(a, structure1=structure1).astype(int)
    >>> # Change the origin of the filter
    >>> # origin1=1 is equivalent to origin1=(1,1) here
    >>> ndimage.binary_hit_or_miss(a, structure1=structure1, origin1=1).astype(int)
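A classic use of the hit-or-miss transform is detecting isolated pixels; the following sketch (with structuring elements chosen purely for illustration, not taken from the original guide) finds foreground pixels whose eight neighbors are all background:

```python
import numpy as np
from scipy import ndimage

a = np.zeros((5, 5), dtype=int)
a[1, 1] = 1      # an isolated pixel
a[3, 2:4] = 1    # a two-pixel object

# structure1 must fit the foreground, structure2 the background:
# together they describe "a foreground pixel with all 8 neighbors background".
hit = np.array([[0, 0, 0], [0, 1, 0], [0, 0, 0]])
miss = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
isolated = ndimage.binary_hit_or_miss(a, structure1=hit, structure2=miss)
```

Only the pixel at (1, 1) matches; both pixels of the two-pixel object have a foreground neighbor and are rejected.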
scipy.ndimage.morphology.binary_opening(input, structure=None, iterations=1, output=None, origin=0)
    Multi-dimensional binary opening with the given structuring element.
    The opening of an input image by a structuring element is the dilation of the erosion of the image by the structuring element.

    Parameters
    input : array_like
        Binary array_like to be opened. Non-zero (True) elements form the subset to be opened.
    structure : array_like, optional
        Structuring element used for the opening. Non-zero elements are considered True. If no structuring element is provided an element is generated with a square connectivity equal to one (i.e., only nearest neighbors are connected to the center, diagonally-connected elements are not considered neighbors).
    iterations : {int, float}, optional
        The erosion step of the opening, then the dilation step are each repeated iterations times (one, by default). If iterations is less than 1, each operation is repeated until the result does not change anymore.
    output : ndarray, optional
        Array of the same shape as input, into which the output is placed. By default, a new array is created.
    origin : int or tuple of ints, optional
        Placement of the filter, by default 0.

    Returns
    out : ndarray of bools
        Opening of the input by the structuring element.

    See Also:
    grey_opening, binary_closing, binary_erosion, binary_dilation, generate_binary_structure

    Notes
    Opening [R41] is a mathematical morphology operation [R42] that consists in the succession of an erosion and a dilation of the input with the same structuring element. Opening therefore removes objects smaller than the structuring element. Together with closing (binary_closing), opening can be used for noise removal.

    References
    [R41], [R42]

    Examples
    >>> a = np.zeros((5, 5), dtype=int)
    >>> a[1:4, 1:4] = 1; a[4, 4] = 1
    >>> # Opening removes small objects
    >>> ndimage.binary_opening(a, structure=np.ones((3, 3))).astype(int)
    >>> # Opening can also smooth corners
    >>> ndimage.binary_opening(a).astype(int)
    >>> # Opening is the dilation of the erosion of the input
    >>> ndimage.binary_dilation(ndimage.binary_erosion(a)).astype(int)

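The noise-removal behavior of opening can be checked with a short sketch (a minimal illustration, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((5, 5), dtype=int)
a[1:4, 1:4] = 1  # a 3x3 object
a[4, 4] = 1      # single-pixel noise

opened = ndimage.binary_opening(a)

# Opening is, by definition, the dilation of the erosion.
explicit = ndimage.binary_dilation(ndimage.binary_erosion(a))
```

The noise pixel is smaller than the structuring element and disappears, while the square survives with its corners rounded off by the cross-shaped default structure.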
0. 8). 0. an element is generated with a squared connectivity equal to one. 0. by default 0. 0. The succession of an erosion and propagation inside the original image can be used instead of an opening for deleting small objects while keeping the contours of larger objects untouched.int) >>> mask[1:4. The output may depend on the structuring element. structure : array_like Structuring element used in the successive dilations. 0. 0. 0.binary_propagation(input. dtype=np. 2] = 1 >>> mask = np. 0]. 0.int) >>> input[2. [0. origin=0) Multi-dimensional binary propagation with the given structuring element. 0. [0. 0. 0. optional Array of the same shape as input. 6:8] = 1 >>> input array([[0. 0. mask=None. 0.zeros((8. 0.SciPy Reference Guide. References [R43]. 0. 0. origin : int or tuple of ints. 0. Parameters input : array_like Binary image to be propagated inside mask. 0. 0. especially if mask has several connex components. 0. 0. a new array is created.morphology. 0. 362 Chapter 4. 1. output=None. 0. Notes This function is functionally equivalent to calling binary_dilation with the number of iterations less then one: iterative dilation until the result does not change anymore. 0. 0. 0]]) scipy. 0. 0]. Reference .10. border_value=0. 4] = mask[6:8. mask : array_like Binary mask deﬁning the region into which input is allowed to propagate. [0. 0. [0. 0. structure=None. 0. Returns ouput : ndarray Binary propagation of input inside mask. 0. 1:4] = mask[4. 0. 0. 0]. By default. 0. Release 0. [0. 0. 0]. optional Placement of the ﬁlter. 0]. 0.ndimage. 0]. 0. If no structuring element is provided. 0. 0.0rc1 [0. 0. [R44] Examples >>> input = np. 0. 0. 0. [0. 0. 0. 8).zeros((8. [0. 0. dtype=np. 0]. into which the output is placed. 1. output : ndarray. 0]. 0.

    >>> ndimage.binary_propagation(input, mask=mask).astype(int)
    >>> ndimage.binary_propagation(input, mask=mask, structure=np.ones((3, 3))).astype(int)

    >>> # Comparison between opening and erosion+propagation
    >>> a = np.zeros((6, 6), dtype=int)
    >>> a[2:5, 2:5] = 1; a[0, 0] = 1; a[5, 5] = 1
    >>> ndimage.binary_opening(a).astype(int)
    >>> b = ndimage.binary_erosion(a)
    >>> b.astype(int)
    >>> ndimage.binary_propagation(b, mask=a).astype(int)
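The erosion-plus-propagation idiom from the Notes section can be sketched end to end (a minimal illustration, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((6, 6), dtype=int)
a[2:5, 2:5] = 1  # a 3x3 object
a[0, 0] = 1      # single-pixel noise

# Erosion keeps only the object's core and removes the noise pixel...
seed = ndimage.binary_erosion(a)
# ...and propagation regrows the surviving object inside the original
# mask, restoring its exact contour (an opening would round the corners).
restored = ndimage.binary_propagation(seed, mask=a)
```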

scipy.ndimage.morphology.black_tophat(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional black tophat filter.
    Either a size or a footprint, or the structure must be provided. An output array can optionally be provided. The origin parameter controls the placement of the filter. The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'.

    See Also:
    grey_opening, grey_closing

    References
    [R45], [R46]

scipy.ndimage.morphology.distance_transform_bf(input, metric='euclidean', sampling=None, return_distances=True, return_indices=False, distances=None, indices=None)
    Distance transform function by a brute force algorithm.
    This function calculates the distance transform of the input, by replacing each background element (zero values) with its shortest distance to the foreground (any element non-zero).
    Three types of distance metric are supported: 'euclidean', 'taxicab' and 'chessboard'.
    In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result.
    The return_distances and return_indices flags can be used to indicate if the distance transform, the feature transform, or both must be returned.
    Optionally the sampling along each axis can be given by the sampling parameter, which should be a sequence of length equal to the input rank, or a single number in which case the sampling is assumed to be equal along all axes. This parameter is only used in the case of the euclidean distance transform.
    The distances and indices arguments can be used to give optional output arrays that must be of the correct size and type (float64 and int32).
    This function employs a slow brute force algorithm; see also the function distance_transform_cdt for more efficient taxicab and chessboard algorithms.

scipy.ndimage.morphology.distance_transform_cdt(input, metric='chessboard', return_distances=True, return_indices=False, distances=None, indices=None)
    Distance transform for chamfer type of transforms.
    The metric determines the type of chamfering that is done. If the metric is equal to 'taxicab' a structure is generated using generate_binary_structure with a squared distance equal to 1. If the metric is equal to 'chessboard', a metric is generated using generate_binary_structure with a squared distance equal to the rank of the array. These choices correspond to the common interpretations of the taxicab and the chessboard distance metrics in two dimensions.
    In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result.
    The return_distances and return_indices flags can be used to indicate if the distance transform, the feature transform, or both must be returned.
    The distances and indices arguments can be used to give optional output arrays that must be of the correct size and type (both int32).
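The three metrics can be compared on a small foreground square (a minimal sketch, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.zeros((5, 5), dtype=int)
a[1:4, 1:4] = 1  # foreground square; zeros are the background

# Chamfer transform with the taxicab (city-block) metric: the ring of the
# square is at distance 1 from the background, the center at distance 2.
taxicab = ndimage.distance_transform_cdt(a, metric='taxicab')

# Brute-force euclidean transform of the same image.
euclid = ndimage.distance_transform_bf(a, metric='euclidean')
```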

scipy.ndimage.morphology.distance_transform_edt(input, sampling=None, return_distances=True, return_indices=False, distances=None, indices=None)
    Exact euclidean distance transform.
    In addition to the distance transform, the feature transform can be calculated. In this case the index of the closest background element is returned along the first axis of the result.

    Parameters
    input : array_like
        Input data to transform. Can be any type but will be converted into binary: 1 wherever input equates to True, 0 elsewhere.
    sampling : float or int, or sequence of same, optional
        Spacing of elements along each dimension. If a sequence, must be of length equal to the input rank; if a single number, this is used for all axes. If not specified, a grid spacing of unity is implied.
    return_distances : bool, optional
        Whether to return the distance matrix. At least one of return_distances/return_indices must be True. Default is True.
    return_indices : bool, optional
        Whether to return the indices matrix. Default is False.
    distances : ndarray, optional
        Used for output of the distance array, must be of type float64.
    indices : ndarray, optional
        Used for output of indices, must be of type int32.

    Returns
    result : ndarray or list of ndarray
        Either distance matrix, index matrix, or a list of the two, depending on the return_x flags and the distances and indices input parameters.

    Notes
    The euclidean distance transform gives values of the euclidean distance:

        y_i = sqrt(sum_i (x[i] - b[i])**2)

    where b[i] is the background point (value 0) with the smallest Euclidean distance to input points x[i], and n is the number of dimensions.

    Examples
    >>> a = np.array(([0, 1, 1, 1, 1],
    ...               [0, 0, 1, 1, 1],
    ...               [0, 1, 1, 1, 1],
    ...               [0, 1, 1, 1, 0],
    ...               [0, 1, 1, 0, 0]))
    >>> from scipy import ndimage
    >>> ndimage.distance_transform_edt(a)
    array([[ 0.    ,  1.    ,  1.4142,  2.2361,  3.    ],
           [ 0.    ,  0.    ,  1.    ,  2.    ,  2.    ],
           [ 0.    ,  1.    ,  1.4142,  1.4142,  1.    ],
           [ 0.    ,  1.    ,  1.4142,  1.    ,  0.    ],
           [ 0.    ,  1.    ,  1.    ,  0.    ,  0.    ]])

    With a sampling of 2 units along x, 1 along y:

    >>> ndimage.distance_transform_edt(a, sampling=[2, 1])
    array([[ 0.    ,  1.    ,  2.    ,  2.8284,  3.6056],
           [ 0.    ,  0.    ,  1.    ,  2.    ,  3.    ],
           [ 0.    ,  1.    ,  2.    ,  2.2361,  2.    ],
           [ 0.    ,  1.    ,  2.    ,  1.    ,  0.    ],
           [ 0.    ,  1.    ,  1.    ,  0.    ,  0.    ]])

    Asking for indices as well:

    >>> edt, inds = ndimage.distance_transform_edt(a, return_indices=True)

    With arrays provided for in-place outputs:

    >>> indices = np.zeros((a.ndim,) + a.shape, dtype=np.int32)
    >>> ndimage.distance_transform_edt(a, return_indices=True, indices=indices)
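The sampling and feature-transform options can be exercised together in one self-contained sketch (not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.array([[0, 1, 1, 1, 1],
              [0, 0, 1, 1, 1],
              [0, 1, 1, 1, 1],
              [0, 1, 1, 1, 0],
              [0, 1, 1, 0, 0]])

edt = ndimage.distance_transform_edt(a)

# Anisotropic spacing: rows are 2 units apart, columns 1 unit.
edt2 = ndimage.distance_transform_edt(a, sampling=[2, 1])

# The feature transform returns, stacked along the first axis, the
# per-axis index of the nearest background (zero) element.
dist, inds = ndimage.distance_transform_edt(a, return_indices=True)
```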

scipy.ndimage.morphology.generate_binary_structure(rank, connectivity)
    Generate a binary structure for binary morphological operations.

    Parameters
    rank : int
        Number of dimensions of the array to which the structuring element will be applied, as returned by np.ndim.
    connectivity : int
        connectivity determines which elements of the output array belong to the structure, i.e. are considered as neighbors of the central element. Elements up to a squared distance of connectivity from the center are considered neighbors. connectivity may range from 1 (no diagonal elements are neighbors) to rank (all elements are neighbors).

    Returns
    output : ndarray of bools
        Structuring element which may be used for binary morphological operations, with rank dimensions and all dimensions equal to 3.

    See Also:
    iterate_structure, binary_dilation, binary_erosion

    Notes
    generate_binary_structure can only create structuring elements with dimensions equal to 3, i.e. minimal dimensions. For larger structuring elements, that are useful e.g. for eroding large objects, one may either use iterate_structure, or create directly custom arrays with numpy functions such as numpy.ones.

    Examples
    >>> struct = ndimage.generate_binary_structure(2, 1)
    >>> struct
    array([[False,  True, False],
           [ True,  True,  True],
           [False,  True, False]], dtype=bool)
    >>> a = np.zeros((5, 5))
    >>> a[2, 2] = 1
    >>> b = ndimage.binary_dilation(a, structure=struct).astype(a.dtype)
    >>> b
    array([[ 0.,  0.,  0.,  0.,  0.],
           [ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  0.,  0.,  0.,  0.]])
    >>> ndimage.binary_dilation(b, structure=struct).astype(a.dtype)
    array([[ 0.,  0.,  1.,  0.,  0.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 1.,  1.,  1.,  1.,  1.],
           [ 0.,  1.,  1.,  1.,  0.],
           [ 0.,  0.,  1.,  0.,  0.]])
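The connectivity semantics described above can be checked directly (a minimal sketch, not part of the original guide):

```python
from scipy import ndimage

s1 = ndimage.generate_binary_structure(2, 1)  # 2-D cross: 5 True elements
s2 = ndimage.generate_binary_structure(2, 2)  # full 3x3 square: 9 elements
s3 = ndimage.generate_binary_structure(3, 1)  # 3-D, 6-connected: 7 elements
```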

    >>> struct = ndimage.generate_binary_structure(2, 2)
    >>> struct
    array([[ True,  True,  True],
           [ True,  True,  True],
           [ True,  True,  True]], dtype=bool)
    >>> struct = ndimage.generate_binary_structure(3, 1)
    >>> struct  # no diagonal elements
    array([[[False, False, False],
            [False,  True, False],
            [False, False, False]],
           [[False,  True, False],
            [ True,  True,  True],
            [False,  True, False]],
           [[False, False, False],
            [False,  True, False],
            [False, False, False]]], dtype=bool)

scipy.ndimage.morphology.grey_closing(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional greyscale closing.
    A greyscale closing consists in the succession of a greyscale dilation, and a greyscale erosion.

    Parameters
    input : array_like
        Array over which the grayscale closing is to be computed.
    size : tuple of ints
        Shape of a flat and full structuring element used for the grayscale closing. Optional if footprint is provided.
    footprint : array of ints, optional
        Positions of non-infinite elements of a flat structuring element used for the grayscale closing.
    structure : array of ints, optional
        Structuring element used for the grayscale closing. structure may be a non-flat structuring element.
    output : array, optional
        An array used for storing the output of the closing may be provided.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

    Returns
    output : ndarray
        Result of the grayscale closing of input with structure.

    See Also:
    binary_closing, grey_dilation, grey_erosion, grey_opening, generate_binary_structure

    Notes
    The action of a grayscale closing with a flat structuring element amounts to smoothing deep local minima, whereas binary closing fills small holes.

    References
    [R47]

    Examples
    >>> a = np.arange(36).reshape((6, 6))
    >>> a[3, 3] = 0
    >>> ndimage.grey_closing(a, size=(3, 3))
    >>> # Note that the local minimum a[3,3] has disappeared

scipy.ndimage.morphology.grey_dilation(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Calculate a greyscale dilation, using either a structuring element, or a footprint corresponding to a flat structuring element.
    Grayscale dilation is a mathematical morphology operation. For the simple case of a full and flat structuring element, it can be viewed as a maximum filter over a sliding window.

    Parameters
    input : array_like
        Array over which the grayscale dilation is to be computed.
    size : tuple of ints
        Shape of a flat and full structuring element used for the grayscale dilation. Optional if footprint is provided.
    footprint : array of ints, optional
        Positions of non-infinite elements of a flat structuring element used for the grayscale dilation. Non-zero values give the set of neighbors of the center over which the maximum is chosen.
    structure : array of ints, optional
        Structuring element used for the grayscale dilation. structure may be a non-flat structuring element.

    output : array, optional
        An array used for storing the output of the dilation may be provided.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

    Returns
    output : ndarray
        Grayscale dilation of input.

    See Also:
    binary_dilation, grey_erosion, grey_closing, grey_opening, generate_binary_structure, ndimage.maximum_filter

    Notes
    The grayscale dilation of an image input by a structuring element s defined over a domain E is given by:

        (input + s)(x) = max {input(y) + s(x - y), for y in E}

    In particular, for structuring elements defined as s(y) = 0 for y in E, the grayscale dilation computes the maximum of the input image inside a sliding window defined by E.
    Grayscale dilation [R48] is a mathematical morphology operation [R49].

    References
    [R48], [R49]

    Examples
    >>> a = np.zeros((7, 7), dtype=int)
    >>> a[2:5, 2:5] = 1
    >>> a[4, 4] = 2; a[2, 3] = 3
    >>> ndimage.grey_dilation(a, size=(3, 3))
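The "maximum filter" interpretation from the Notes section can be checked directly (a minimal sketch, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.arange(9).reshape(3, 3)

# With a flat, full structuring element, grey_dilation is exactly a
# maximum filter over the same window (both default to mode='reflect').
d = ndimage.grey_dilation(a, size=(3, 3))
```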

0. 2. 1. 3. 2. 1.generate_binary_structure(2. False]]. 0. 1. [1. 1. [False. 1. 0. True. 0. True. 3. 0. [0. 4. 1. 1. size=None. [0. 1.1) >>> s array([[False. Release 0. 3.3))) array([[1. 1. 0]. 2. 0]]) >>> s = ndimage. 3. 4. optional An array used for storing the ouput of the erosion may be provided. [1. 2.3). Parameters input : array_like Array over which the grayscale erosion is to be computed. 3. 4. 1. 0. 0]. Non-zero values give the set of neighbors of the center over which the minimum is chosen. 1. [0. 3. 2. 1. For the simple case of a full and ﬂat structuring element. 0. 3. 2. [ True. 1]. [1.morphology. optional Positions of non-inﬁnite elements of a ﬂat structuring element used for the grayscale erosion.grey_dilation(a. it can be viewed as a minimum ﬁlter over a sliding window. 2. 0. 0. True]. [0. 1.grey_erosion(input. 1. 3. output : array. 0. [0. [0. 0. structure may be a non-ﬂat structuring element. True. [1. 0. 3. 3. 0. 1. 3. 2. size : tuple of ints Shape of a ﬂat and full structuring element used for the grayscale erosion. 1. 0]. 1. 3. 2. 0]. 1].ones((3.0. 0. 1. 1]. 4. Multi-dimensional image processing (scipy. 0. [0. 2. 4. 0]. [0. mode=’reﬂect’. cval=0. 1. False]. footprint : array of ints. 3. 0.12. 1. using either a structuring element. or a footprint corresponding to a ﬂat structuring element. [0. 0. 2. 4. 2. 1. origin=0) Calculate a greyscale erosion. optional Structuring element used for the grayscale erosion. footprint=None. 1. 0. 3.grey_dilation(a. 3. 0.ndimage. 3. 0]. [1. 1. 0]. footprint=s) array([[0. 2. [0.10. 1. 1. Grayscale erosion is a mathematical morphology operation. 0]. 0]. 2. 0. 2. 3. 1. 0]. 0]. 1]. 0]]) >>> ndimage. 3. 0. 0]. 2. 4. 4. 3. 1. Optional if footprint is provided. 2.ndimage) 371 . 2. 1].SciPy Reference Guide. output=None. dtype=bool) >>> ndimage. 1.0rc1 array([[0. [0. 1]. size=(3. 0. structure=np. 1. 3. [1. 3. 1]]) scipy. [0. 2. structure : array of ints. 2. 1. structure=None. 4. 4. 2. 0. 0.

    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

    Returns
    output : ndarray
        Grayscale erosion of input.

    See Also:
    binary_erosion, grey_dilation, grey_opening, grey_closing, generate_binary_structure, ndimage.minimum_filter

    Notes
    The grayscale erosion of an image input by a structuring element s defined over a domain E is given by:

        (input - s)(x) = min {input(y) - s(x - y), for y in E}

    In particular, for structuring elements defined as s(y) = 0 for y in E, the grayscale erosion computes the minimum of the input image inside a sliding window defined by E.
    Grayscale erosion [R50] is a mathematical morphology operation [R51].

    References
    [R50], [R51]

    Examples
    >>> a = np.zeros((7, 7), dtype=int)
    >>> a[1:6, 1:6] = 3
    >>> a[4, 4] = 2; a[2, 3] = 1
    >>> ndimage.grey_erosion(a, size=(3, 3))
    >>> footprint = ndimage.generate_binary_structure(2, 1)
    >>> # Diagonally-connected elements are not considered neighbors
    >>> ndimage.grey_erosion(a, size=(3, 3), footprint=footprint)
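The dual "minimum filter" interpretation can be checked the same way (a minimal sketch, not part of the original guide):

```python
import numpy as np
from scipy import ndimage

a = np.arange(9).reshape(3, 3)

# With a flat, full structuring element, grey_erosion is exactly a
# minimum filter over the same window.
e = ndimage.grey_erosion(a, size=(3, 3))
```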

scipy.ndimage.morphology.grey_opening(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional greyscale opening.
    A greyscale opening consists in the succession of a greyscale erosion, and a greyscale dilation.

    Parameters
    input : array_like
        Array over which the grayscale opening is to be computed.
    size : tuple of ints
        Shape of a flat and full structuring element used for the grayscale opening. Optional if footprint is provided.
    footprint : array of ints, optional
        Positions of non-infinite elements of a flat structuring element used for the grayscale opening.
    structure : array of ints, optional
        Structuring element used for the grayscale opening. structure may be a non-flat structuring element.
    output : array, optional
        An array used for storing the output of the opening may be provided.
    mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
        The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
    cval : scalar, optional
        Value to fill past edges of input if mode is 'constant'. Default is 0.0.
    origin : scalar, optional
        The origin parameter controls the placement of the filter. Default is 0.

    Returns
    output : ndarray
        Result of the grayscale opening of input with structure.

    See Also:
    binary_opening, grey_dilation, grey_erosion, grey_closing, generate_binary_structure

Notes
The action of a grayscale opening with a flat structuring element amounts to smoothing high local maxima, whereas binary opening erases small objects.

References
[R52]

Examples
>>> a = np.arange(36).reshape((6,6))
>>> a[3, 3] = 50
>>> a
array([[ 0,  1,  2,  3,  4,  5],
       [ 6,  7,  8,  9, 10, 11],
       [12, 13, 14, 15, 16, 17],
       [18, 19, 20, 50, 22, 23],
       [24, 25, 26, 27, 28, 29],
       [30, 31, 32, 33, 34, 35]])
>>> ndimage.grey_opening(a, size=(3,3))
array([[ 0,  1,  2,  3,  4,  4],
       [ 6,  7,  8,  9, 10, 10],
       [12, 13, 14, 15, 16, 16],
       [18, 19, 20, 22, 22, 22],
       [24, 25, 26, 27, 28, 28],
       [24, 25, 26, 27, 28, 28]])
>>> # Note that the local maximum a[3,3] has disappeared

scipy.ndimage.morphology.iterate_structure(structure, iterations, origin=None)
    Iterate a structure by dilating it with itself.

Parameters
structure : array_like
    Structuring element (an array of bools, for example), to be dilated with itself.
iterations : int
    number of dilations performed on the structure with itself
origin : optional
    If origin is None, only the iterated structure is returned. If not, a tuple of the iterated structure and the modified origin is returned.

Returns
output : ndarray of bools
    A new structuring element obtained by dilating structure (iterations - 1) times with itself.

See Also:
generate_binary_structure
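As a quick sketch of what iteration does to a structuring element: one self-dilation of the 3x3 cross produced by generate_binary_structure grows it into a 5x5 diamond.

```python
from scipy import ndimage

struct = ndimage.generate_binary_structure(2, 1)   # 3x3 cross
iterated = ndimage.iterate_structure(struct, 2)    # one self-dilation

# the cross grows into a 5x5 diamond, still centered
```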

Examples
>>> struct = ndimage.generate_binary_structure(2, 1)
>>> struct.astype(int)
array([[0, 1, 0],
       [1, 1, 1],
       [0, 1, 0]])
>>> ndimage.iterate_structure(struct, 2).astype(int)
array([[0, 0, 1, 0, 0],
       [0, 1, 1, 1, 0],
       [1, 1, 1, 1, 1],
       [0, 1, 1, 1, 0],
       [0, 0, 1, 0, 0]])
>>> ndimage.iterate_structure(struct, 3).astype(int)
array([[0, 0, 0, 1, 0, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [1, 1, 1, 1, 1, 1, 1],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 0, 1, 0, 0, 0]])

scipy.ndimage.morphology.morphological_gradient(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional morphological gradient.

    The morphological gradient is calculated as the difference between a dilation and an erosion of the input with a given structuring element.

Parameters
input : array_like
    Array over which to compute the morphological gradient.
size : tuple of ints
    Shape of a flat and full structuring element used for the mathematical morphology operations. Optional if footprint is provided. A larger size yields a more blurred gradient.
footprint : array of ints, optional
    Positions of non-infinite elements of a flat structuring element used for the morphology operations. Larger footprints give a more blurred morphological gradient.
structure : array of ints, optional
    Structuring element used for the morphology operations. structure may be a non-flat structuring element.
output : array, optional
    An array used for storing the output of the morphological gradient may be provided.
mode : {'reflect', 'constant', 'nearest', 'mirror', 'wrap'}, optional
    The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. Default is 'reflect'.
cval : scalar, optional
    Value to fill past edges of input if mode is 'constant'. Default is 0.0.
origin : scalar, optional
    The origin parameter controls the placement of the filter. Default 0.

Returns
output : ndarray
    Morphological gradient of input.
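The defining identity (gradient = dilation minus erosion with the same structuring element) can be verified directly. A minimal sketch on an invented binary image:

```python
import numpy as np
from scipy import ndimage

a = np.zeros((7, 7), dtype=int)
a[2:5, 2:5] = 1

grad = ndimage.morphological_gradient(a, size=(3, 3))
# same result computed explicitly from its definition
diff = ndimage.grey_dilation(a, size=(3, 3)) - ndimage.grey_erosion(a, size=(3, 3))
```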

See Also:
grey_dilation, grey_erosion, ndimage.gaussian_gradient_magnitude

Notes
For a flat structuring element, the morphological gradient computed at a given point corresponds to the maximal difference between elements of the input among the elements covered by the structuring element centered on the point.

References
[R53]

Examples
>>> a = np.zeros((7,7), dtype=np.int)
>>> a[2:5, 2:5] = 1
>>> ndimage.morphological_gradient(a, size=(3,3))
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 0, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0]])
>>> # The morphological gradient is computed as the difference
>>> # between a dilation and an erosion
>>> ndimage.grey_dilation(a, size=(3,3)) -\
...     ndimage.grey_erosion(a, size=(3,3))
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 0, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 1, 1, 1, 1, 1, 0],
       [0, 0, 0, 0, 0, 0, 0]])
>>> a = np.zeros((7,7), dtype=np.int)
>>> a[2:5, 2:5] = 1
>>> a[4,4] = 2; a[2,3] = 3
>>> a
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 1, 3, 1, 0, 0],
       [0, 0, 1, 1, 1, 0, 0],
       [0, 0, 1, 1, 2, 0, 0],
       [0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0]])
>>> ndimage.morphological_gradient(a, size=(3,3))
array([[0, 0, 0, 0, 0, 0, 0],
       [0, 1, 3, 3, 3, 1, 0],
       [0, 1, 3, 3, 3, 1, 0],
       [0, 1, 3, 2, 3, 2, 0],
       [0, 1, 1, 2, 2, 2, 0],
       [0, 1, 1, 2, 2, 2, 0],
       [0, 0, 0, 0, 0, 0, 0]])

scipy.ndimage.morphology.morphological_laplace(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)

    Multi-dimensional morphological laplace. Either a size or a footprint, or the structure must be provided. An output array can optionally be provided. The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. The origin parameter controls the placement of the filter.

scipy.ndimage.morphology.white_tophat(input, size=None, footprint=None, structure=None, output=None, mode='reflect', cval=0.0, origin=0)
    Multi-dimensional white tophat filter. Either a size or a footprint, or the structure must be provided. An output array can optionally be provided. The mode parameter determines how the array borders are handled, where cval is the value when mode is equal to 'constant'. The origin parameter controls the placement of the filter.

4.12.6 Utility

scipy.ndimage.imread(fname, flatten=False)
    Load an image from file.

Parameters
fname : str
    Image file name, e.g. test.jpg.
flatten : bool, optional
    If true, convert the output to grey-scale. Default is False.

Returns
img_array : ndarray
    The different colour bands/channels are stored in the third dimension, such that a grey-image is MxN, an RGB-image MxNx3 and an RGBA-image MxNx4.

Raises
ImportError :
    If the Python Imaging Library (PIL) can not be imported.

4.13 Orthogonal distance regression (scipy.odr)

4.13.1 Package Content

odr(fcn, beta0, y, x[, we, wd, fjacb, fjacd, ...])
Data(x[, y, we, wd, fix, meta])                        The Data class stores the data to fit.
Model(fcn[, fjacb, fjacd, extra_args, ...])            The Model class stores information about the function you wish to fit.
ODR(data, model[, beta0, delta0, ifixb, ...])          The ODR class gathers all information and coordinates the running of the main fitting routine.
Output(output)                                         The Output class stores the output of an ODR run.
RealData(x[, y, sx, sy, covx, covy, fix, meta])        The RealData class stores the weightings as actual standard deviations and/or covariances.
odr_error
odr_stop

class scipy.odr.ODR(data, model, beta0=None, delta0=None, ifixb=None, ifixx=None, job=None, iprint=None, errfile=None, rptfile=None, ndigit=None, taufac=None, sstol=None, partol=None, maxit=None, stpb=None, stpd=None, sclb=None, scld=None, work=None, iwork=None)
    The ODR class gathers all information and coordinates the running of the main fitting routine. Members of instances of the ODR class have the same names as the arguments to the initialization routine.

Parameters
data : Data class instance
    instance of the Data class
model : Model class instance
    instance of the Model class
beta0 : array_like of rank-1
    a rank-1 sequence of initial parameter values. Optional if model provides an "estimate" function to estimate these values.
delta0 : array_like of floats of rank-1, optional
    a (double-precision) float array to hold the initial values of the errors in the input variables. Must be same shape as data.x.
ifixb : array_like of ints of rank-1, optional
    sequence of integers with the same length as beta0 that determines which parameters are held fixed. A value of 0 fixes the parameter, a value > 0 makes the parameter free.
ifixx : array_like of ints with same shape as data.x, optional
    an array of integers with the same shape as data.x that determines which input observations are treated as fixed. One can use a sequence of length m (the dimensionality of the input observations) to fix some dimensions for all observations. A value of 0 fixes the observation, a value > 0 makes it free.
job : int, optional
    an integer telling ODRPACK what tasks to perform. See p. 31 of the ODRPACK User's Guide if you absolutely must set the value here. Use the method set_job post-initialization for a more readable interface.
iprint : int, optional
    an integer telling ODRPACK what to print. See pp. 33-34 of the ODRPACK User's Guide if you absolutely must set the value here. Use the method set_iprint post-initialization for a more readable interface.
errfile : str, optional
    string with the filename to print ODRPACK errors to. Do Not Open This File Yourself!
rptfile : str, optional
    string with the filename to print ODRPACK summaries to. Do Not Open This File Yourself!
ndigit : int, optional
    integer specifying the number of reliable digits in the computation of the function.
taufac : float, optional
    float specifying the initial trust region. The default value is 1. The initial trust region is equal to taufac times the length of the first computed Gauss-Newton step. taufac must be less than 1.
sstol : float, optional
    float specifying the tolerance for convergence based on the relative change in the sum-of-squares. The default value is eps**(1/2) where eps is the smallest value such that 1 + eps > 1 for double precision computation on the machine. sstol must be less than 1.
partol : float, optional
    float specifying the tolerance for convergence based on the relative change in the estimated parameters. The default value is eps**(2/3) for explicit models and eps**(1/3) for implicit models. partol must be less than 1.
maxit : int, optional
    integer specifying the maximum number of iterations to perform. For first runs, maxit is the total number of iterations performed and defaults to 50. For restarts, maxit is the number of additional iterations to perform and defaults to 10.
stpb : array_like, optional
    sequence (len(stpb) == len(beta0)) of relative step sizes to compute finite difference derivatives wrt the parameters.
stpd : optional
    array (stpd.shape == data.x.shape or stpd.shape == (m,)) of relative step sizes to compute finite difference derivatives wrt the input variable errors. If stpd is a rank-1 array with length m (the dimensionality of the input variable), then the values are broadcast to all observations.
sclb : array_like, optional
    sequence (len(stpb) == len(beta0)) of scaling factors for the parameters. The purpose of these scaling factors is to scale all of the parameters to around unity. Normally appropriate scaling factors are computed if this argument is not specified. Specify them yourself if the automatic procedure goes awry.
scld : array_like, optional
    array (scld.shape == data.x.shape or scld.shape == (m,)) of scaling factors for the errors in the input variables. Again, these factors are automatically computed if you do not provide them. If scld.shape == (m,), then the scaling factors are broadcast to all observations.
work : ndarray, optional
    array to hold the double-valued working data for ODRPACK. When restarting, takes the value of self.output.work.
iwork : ndarray, optional
    array to hold the integer-valued working data for ODRPACK. When restarting, takes the value of self.output.iwork.
output : Output class instance
    an instance of the Output class containing all of the returned data from an invocation of ODR.run() or ODR.restart()

Methods
restart
run
set_iprint
set_job
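A minimal sketch of how the classes above fit together for an explicit fit. The linear model and the data values here are invented for illustration; with noiseless data the estimated parameters recover the true slope and intercept:

```python
import numpy as np
from scipy.odr import Data, Model, ODR

def fcn(beta, x):
    # explicit model: y = beta[0]*x + beta[1]
    return beta[0] * x + beta[1]

x = np.linspace(0.0, 5.0, 20)
y = 2.0 * x + 1.0                     # exact data, true beta = [2, 1]

odr_run = ODR(Data(x, y), Model(fcn), beta0=[1.0, 0.0]).run()
print(odr_run.beta)                   # close to [2.0, 1.0]
```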

class scipy.odr.Data(x, y=None, we=None, wd=None, fix=None, meta={})
    The Data class stores the data to fit.

Parameters
x : array_like
    Input data for regression.
y : array_like, optional
    Input data for regression.
we : array_like, optional
    If we is a scalar, then that value is used for all data points (and all dimensions of the response variable). If we is a rank-1 array of length q (the dimensionality of the response variable), then this vector is the diagonal of the covariant weighting matrix for all data points. If we is a rank-1 array of length n (the number of data points), then the i'th element is the weight for the i'th response variable observation (single-dimensional only). If we is a rank-2 array of shape (q, q), then this is the full covariant weighting matrix broadcast to each observation. If we is a rank-2 array of shape (q, n), then we[:,i] is the diagonal of the covariant weighting matrix for the i'th observation. If we is a rank-3 array of shape (q, q, n), then we[:,:,i] is the full specification of the covariant weighting matrix for each observation. If the fit is implicit, then only a positive scalar value is used.
wd : array_like, optional
    If wd is a scalar, then that value is used for all data points (and all dimensions of the input variable). If wd = 0, then the covariant weighting matrix for each observation is set to the identity matrix (so each dimension of each observation has the same weight). If wd is a rank-1 array of length m (the dimensionality of the input variable), then this vector is the diagonal of the covariant weighting matrix for all data points. If wd is a rank-1 array of length n (the number of data points), then the i'th element is the weight for the i'th input variable observation (single-dimensional only). If wd is a rank-2 array of shape (m, m), then this is the full covariant weighting matrix broadcast to each observation. If wd is a rank-2 array of shape (m, n), then wd[:,i] is the diagonal of the covariant weighting matrix for the i'th observation. If wd is a rank-3 array of shape (m, m, n), then wd[:,:,i] is the full specification of the covariant weighting matrix for each observation.
fix : array_like of ints, optional
    The fix argument is the same as ifixx in the class ODR. It is an array of integers with the same shape as data.x that determines which input observations are treated as fixed. One can use a sequence of length m (the dimensionality of the input observations) to fix some dimensions for all observations. A value of 0 fixes the observation, a value > 0 makes it free.
meta : dict, optional
    Freeform dictionary for metadata.
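A minimal sketch of the rank-1, length-n form of we described above: each element weights one response observation, so an outlier can be down-weighted. The data values are invented for illustration:

```python
import numpy as np
from scipy.odr import Data, Model, ODR

def linear(beta, x):
    return beta[0] * x + beta[1]

x = np.linspace(0.0, 5.0, 20)
y = 2.0 * x + 1.0
y[10] += 5.0                  # inject one outlier

# we is rank-1 of length n: we[i] weights the i'th response observation
we = np.ones_like(y)
we[10] = 1e-6                 # effectively ignore the outlier

out = ODR(Data(x, y, we=we), Model(linear), beta0=[1.0, 0.0]).run()
# the fit stays close to the true parameters [2, 1]
```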

Notes
Each argument is attached to the member of the instance of the same name. The structures of x and y are described in the Model class docstring. If y is an integer, then the Data instance can only be used to fit with implicit models where the dimensionality of the response is equal to the specified value of y.

The we argument weights the effect a deviation in the response variable has on the fit; the wd argument weights the effect a deviation in the input variable has on the fit. To handle multidimensional inputs and responses easily, the structure of these arguments has the n'th dimensional axis first. These arguments heavily use the structured arguments feature of ODRPACK to conveniently and flexibly support all options. See the ODRPACK User's Guide for a full explanation of how these weights are used in the algorithm. Basically, a higher value of the weight for a particular data point makes a deviation at that point more detrimental to the fit.

Methods
set_meta

class scipy.odr.Model(fcn, fjacb=None, fjacd=None, extra_args=None, estimate=None, implicit=0, meta=None)
    The Model class stores information about the function you wish to fit. It stores the function itself, at the least, and optionally stores functions which compute the Jacobians used during fitting. Also, one can provide a function that will provide reasonable starting values for the fit parameters possibly given the set of data.

Parameters
fcn : function
    fcn(beta, x) --> y
fjacb : function
    Jacobian of fcn wrt the fit parameters beta.
    fjacb(beta, x) --> @f_i(x,B)/@B_j
fjacd : function
    Jacobian of fcn wrt the (possibly multidimensional) input variable.
    fjacd(beta, x) --> @f_i(x,B)/@x_j
extra_args : tuple, optional
    If specified, extra_args should be a tuple of extra arguments to pass to fcn, fjacb, and fjacd. Each will be called by apply(fcn, (beta, x) + extra_args)
estimate : array_like of rank-1
    Provides estimates of the fit parameters from the data
    estimate(data) --> estbeta
implicit : boolean
    If TRUE, specifies that the model is implicit, i.e., fcn(beta, x) ~= 0 and there is no y data to fit against.
meta : dict, optional
    freeform dictionary of metadata for the model
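A minimal sketch of a Model carrying analytic Jacobians for a one-dimensional linear model (the data values are invented, and the set_job(deriv=3) call telling ODRPACK to use the user-supplied derivatives rather than finite differences is an assumption based on the set_job interface mentioned in the ODR class doc):

```python
import numpy as np
from scipy.odr import Data, Model, ODR

def fcn(beta, x):
    return beta[0] * x + beta[1]

def fjacb(beta, x):
    # derivatives wrt the parameters; shape (p, n) since q == 1
    return np.vstack([x, np.ones_like(x)])

def fjacd(beta, x):
    # derivative wrt the input variable; shape (n,) since m == q == 1
    return np.full_like(x, beta[0])

model = Model(fcn, fjacb=fjacb, fjacd=fjacd)

x = np.linspace(0.0, 5.0, 20)
y = 2.0 * x + 1.0
odr_inst = ODR(Data(x, y), model, beta0=[1.0, 0.0])
odr_inst.set_job(deriv=3)     # use the supplied Jacobians
out = odr_inst.run()
```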

Notes
Note that fcn, fjacb, and fjacd operate on NumPy arrays and return a NumPy array. The estimate object takes an instance of the Data class.

Here are the rules for the shapes of the argument and return arrays:

x -- if the input data is single-dimensional, then x is a rank-1 array, i.e., x = array([1, 2, 3, ...]); x.shape = (n,). If the input data is multi-dimensional, then x is a rank-2 array, i.e., x = array([[1, 2, ...], [2, 4, ...]]); x.shape = (m, n). In all cases, it has the same shape as the input data array passed to odr(). m is the dimensionality of the input data, n is the number of observations.

y -- if the response variable is single-dimensional, then y is a rank-1 array, i.e., y = array([2, 4, ...]); y.shape = (n,). If the response variable is multi-dimensional, then y is a rank-2 array, i.e., y = array([[2, 4, ...], [3, 6, ...]]); y.shape = (q, n) where q is the dimensionality of the response variable.

beta -- rank-1 array of length p where p is the number of parameters, i.e., beta = array([B_1, B_2, ..., B_p])

fjacb -- if the response variable is multi-dimensional, then the return array's shape is (q, p, n) such that fjacb(x,beta)[l,k,i] = @f_l(X,B)/@B_k evaluated at the i'th data point. If q == 1, then the return array is only rank-2 and with shape (p, n).

fjacd -- as with fjacb, only the return array's shape is (q, m, n) such that fjacd(x,beta)[l,j,i] = @f_l(X,B)/@X_j at the i'th data point. If q == 1, then the return array's shape is (m, n). If m == q == 1, then the shape is (n,).

Methods
set_meta

class scipy.odr.Output(output)
    The Output class stores the output of an ODR run. Takes one argument for initialization, the return value from the function odr.

Notes
The attributes listed as "optional" in the attributes table are only present if odr was run with full_output=1.

Attributes
beta : ndarray
    Estimated parameter values, of shape (p,).
sd_beta : ndarray
    Standard errors of the estimated parameters, of shape (p,).
cov_beta : ndarray
    Covariance matrix of the estimated parameters, of shape (p,p).
delta : ndarray, optional
    Array of estimated errors in input variables, of same shape as x.
eps : ndarray, optional
    Array of estimated errors in response variables, of same shape as y.
xplus : ndarray, optional
    Array of x + delta.
y : ndarray, optional
    Array y = fcn(x + delta).
res_var : float, optional
    Residual variance.
sum_square : float, optional
    Sum of squares error.
sum_square_delta : float, optional
    Sum of squares of delta error.
sum_square_eps : float, optional
    Sum of squares of eps error.
inv_condnum : float, optional
    Inverse condition number (cf. ODRPACK UG p. 77).
rel_error : float, optional
    Relative error in function values computed within fcn.
work : ndarray, optional
    Final work array.
work_ind : dict, optional
    Indices into work for drawing out values (cf. ODRPACK UG p. 83).
info : int, optional
    Reason for returning, as output by ODRPACK (cf. ODRPACK UG p. 38).
stopreason : list of str, optional
    info interpreted into English.

Methods
pprint    Support to pretty-print lists, tuples, & dictionaries recursively.

class scipy.odr.RealData(x, y=None, sx=None, sy=None, covx=None, covy=None, fix=None, meta={})
    The RealData class stores the weightings as actual standard deviations and/or covariances.

    The weights needed for ODRPACK are generated on-the-fly with __getattr__ trickery.

    sx and sy are standard deviations of x and y and are converted to weights by dividing 1.0 by their squares. E.g.,

        wd = 1./numpy.power(sx, 2)

    covx and covy are arrays of covariance matrices and are converted to weights by performing a matrix inversion on each observation's covariance matrix. E.g.,

        we[i] = numpy.linalg.inv(covy[i])  # i in range(len(covy))
                                           # if covy.shape == (n,q,q)

    These arguments follow the same structured argument conventions as wd and we, only restricted by their natures: sx and sy can't be rank-3, but covx and covy can be.

    Only set either sx or covx (not both); setting both will raise an exception. Same with sy and covy.

    The argument and member fix is the same as Data.fix and ODR.ifixx: it is an array of integers with the same shape as data.x that determines which input observations are treated as fixed. One can use a sequence of length m (the dimensionality of the input observations) to fix some dimensions for all observations. A value of 0 fixes the observation, a value > 0 makes it free.

Methods
set_meta

exception scipy.odr.odr_error

exception scipy.odr.odr_stop
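A short sketch of RealData usage with standard-deviation weights, following the sx/sy convention above (the data values are invented; with noiseless data the estimates recover the true parameters):

```python
import numpy as np
from scipy.odr import RealData, Model, ODR

def linear(beta, x):
    return beta[0] * x + beta[1]

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0

# sx and sy are standard deviations; RealData converts them to
# weights internally (wd = 1/sx**2, we = 1/sy**2)
data = RealData(x, y, sx=0.1, sy=0.2)
out = ODR(data, Model(linear), beta0=[1.0, 0.0]).run()
```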

4.13.2 Usage information

Introduction

Why Orthogonal Distance Regression (ODR)? Sometimes one has measurement errors in the explanatory (a.k.a., "independent") variable(s), not just the response (a.k.a., "dependent") variable(s). Ordinary Least Squares (OLS) fitting procedures treat the data for explanatory variables as fixed, i.e., not subject to error of any kind. Furthermore, OLS procedures require that the response variables be an explicit function of the explanatory variables; sometimes making the equation explicit is impractical and/or introduces errors. ODR can handle both of these cases with ease, and can even reduce to the OLS case if that is sufficient for the problem.

ODRPACK is a FORTRAN-77 library for performing ODR with possibly non-linear fitting functions. It uses a modified trust-region Levenberg-Marquardt-type algorithm [R152] to estimate the function parameters. The fitting functions are provided by Python functions operating on NumPy arrays. The required derivatives may be provided by Python functions as well, or may be estimated numerically. ODRPACK can do explicit or implicit ODR fits, or it can do OLS. Input and output variables may be multi-dimensional. Weights can be provided to account for different variances of the observations, and even covariances between dimensions of the variables. odr provides two interfaces: a single function, and a set of high-level classes that wrap that function; please refer to their docstrings for more information. While the docstring of the function odr does not have a full explanation of its arguments, the classes do, and arguments of the same name usually have the same requirements. Furthermore, the user is urged to at least skim the ODRPACK User's Guide - "Know Thy Algorithm."

Use

See the docstrings of odr.odrpack and the functions and classes for usage instructions. The ODRPACK User's Guide (linked above) is also quite helpful.

References

[R152]

4.14 Optimization and root finding (scipy.optimize)

4.14.1 Optimization

General-purpose

fmin(func, x0[, args, xtol, ftol, maxiter, ...])       Minimize a function using the downhill simplex algorithm.
fmin_powell(func, x0[, args, xtol, ftol, ...])         Minimize a function using modified Powell's method.
fmin_cg(f, x0[, fprime, args, gtol, norm, ...])        Minimize a function using a nonlinear conjugate gradient algorithm.
fmin_bfgs(f, x0[, fprime, args, gtol, norm, ...])      Minimize a function using the BFGS algorithm.
fmin_ncg(f, x0, fprime[, fhess_p, fhess, ...])         Unconstrained minimization of a function using the Newton-CG method.
leastsq(func, x0[, args, Dfun, full_output, ...])      Minimize the sum of squares of a set of equations.

scipy.optimize.fmin(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None)
    Minimize a function using the downhill simplex algorithm. This algorithm only uses function values, not derivatives or second derivatives.

Parameters
func : callable func(x, *args)
    The objective function to be minimized.
x0 : ndarray
    Initial guess.
args : tuple
    Extra arguments passed to func, i.e., f(x, *args).
callback : callable
    Called after each iteration, as callback(xk), where xk is the current parameter vector.

Returns
xopt : ndarray
    Parameter that minimizes function.
fopt : float
    Value of function at minimum: fopt = func(xopt).
iter : int
    Number of iterations performed.
funcalls : int
    Number of function calls made.
warnflag : int
    1 : Maximum number of function evaluations made.
    2 : Maximum number of iterations reached.
allvecs : list
    Solution at each iteration.

Other Parameters
xtol : float
    Relative error in xopt acceptable for convergence.
ftol : number
    Relative error in func(xopt) acceptable for convergence.
maxiter : int
    Maximum number of iterations to perform.
maxfun : number
    Maximum number of function evaluations to make.
full_output : bool
    Set to True if fopt and warnflag outputs are desired.
disp : bool
    Set to True to print convergence messages.
retall : bool
    Set to True to return list of solutions at each iteration.
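A minimal sketch of calling fmin on an invented smooth objective (a simple quadratic with its minimum at (1.0, -2.5)); the tolerances and starting point are arbitrary illustration values:

```python
import numpy as np
from scipy.optimize import fmin

def quadratic(x):
    # smooth objective with a unique minimum at (1.0, -2.5)
    return (x[0] - 1.0) ** 2 + (x[1] + 2.5) ** 2

xopt = fmin(quadratic, x0=np.zeros(2), xtol=1e-8, ftol=1e-8, disp=False)
# xopt is close to [1.0, -2.5]
```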

Notes
Uses a Nelder-Mead simplex algorithm to find the minimum of a function of one or more variables.

This algorithm has a long history of successful use in applications, but it will usually be slower than an algorithm that uses first or second derivative information. In practice it can have poor performance in high-dimensional problems and is not robust to minimizing complicated functions. Additionally, there currently is no complete theory describing when the algorithm will successfully converge to the minimum, or how fast it will if it does.

References
Nelder, J.A. and Mead, R. (1965), "A simplex method for function minimization", The Computer Journal, 7, pp. 308-313

Wright, M.H. (1996), "Direct Search Methods: Once Scorned, Now Respectable", in Numerical Analysis 1995, Proceedings of the 1995 Dundee Biennial Conference in Numerical Analysis, D.F. Griffiths and G.A. Watson (Eds.), Addison Wesley Longman, Harlow, UK, pp. 191-208

scipy.optimize.fmin_powell(func, x0, args=(), xtol=0.0001, ftol=0.0001, maxiter=None, maxfun=None, full_output=0, disp=1, retall=0, callback=None, direc=None)
    Minimize a function using modified Powell's method. This method only uses function values, not derivatives.

Parameters
func : callable f(x, *args)
    Objective function to be minimized.
x0 : ndarray
    Initial guess.
args : tuple
    Extra arguments passed to func.
callback : callable
    An optional user-supplied function, called after each iteration as callback(xk), where xk is the current parameter vector.
direc : ndarray
    Initial direction set.

Returns
xopt : ndarray
    Parameter which minimizes func.
fopt : number
    Value of function at minimum: fopt = func(xopt).
direc : ndarray
    Current direction set.
iter : int
    Number of iterations.
funcalls : int
    Number of function calls made.
warnflag : int
    Integer warning flag:
    1 : Maximum number of function evaluations.
    2 : Maximum number of iterations.

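For comparison with the simplex method, a minimal fmin_powell call on the same kind of toy quadratic (an illustrative sketch of our own, not an example from this guide):

```python
import numpy as np
from scipy.optimize import fmin_powell

# Powell's method is also derivative-free; minimum at (1.0, -0.5).
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] + 0.5) ** 2

xopt = fmin_powell(objective, x0=[0.0, 0.0], disp=False)
```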
    allvecs : list
        List of solutions at each iteration.

Other Parameters
    xtol : float
        Line-search error tolerance.
    ftol : float
        Relative error in func(xopt) acceptable for convergence.
    maxiter : int
        Maximum number of iterations to perform.
    maxfun : int
        Maximum number of function evaluations to make.
    full_output : bool
        If True, fopt, xi, direc, iter, funcalls, and warnflag are
        returned in addition to xopt.
    disp : bool
        If True, print convergence messages.
    retall : bool
        If True, return a list of the solution at each iteration.

Notes
Uses a modification of Powell's method to find the minimum of a function of N variables. Powell's method is a conjugate direction method.

The algorithm has two loops. The outer loop merely iterates over the inner loop. The inner loop minimizes over each current direction in the direction set. At the end of the inner loop, if certain conditions are met, the direction that gave the largest decrease is dropped and replaced with the difference between the current estimated x and the estimated x from the beginning of the inner loop.

The technical conditions for replacing the direction of greatest increase amount to checking that

1. No further gain can be made along the direction of greatest increase from that iteration.
2. The direction of greatest increase accounted for a sufficiently large fraction of the decrease in the function value from that iteration of the inner loop.

References
Powell, M.J.D. (1964), An efficient method for finding the minimum of a function of several variables without calculating derivatives, Computer Journal, 7(2):155-162.
Press, W.H., Teukolsky, S.A., Vetterling, W.T., and Flannery, B.P.: Numerical Recipes (any edition), Cambridge University Press.

scipy.optimize.fmin_cg(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)
    Minimize a function using a nonlinear conjugate gradient algorithm.

Parameters
    f : callable f(x, *args)
        Objective function to be minimized.
    x0 : ndarray
        Initial guess.
    fprime : callable f'(x, *args)
        Function which computes the gradient of f.
    args : tuple
        Extra arguments passed to f and fprime.
    callback : callable
        An optional user-supplied function, called after each iteration as
        callback(xk), where xk is the current parameter vector.

Returns
    xopt : ndarray
        Parameters which minimize f, i.e. f(xopt) == fopt.
    fopt : float
        Minimum value found, f(xopt).
    func_calls : int
        The number of function calls made.
    grad_calls : int
        The number of gradient calls made.
    warnflag : int
        1 : Maximum number of iterations exceeded.
        2 : Gradient and/or function calls not changing.
    allvecs : ndarray
        If retall is True (see other parameters below), then this vector
        containing the result at each iteration is returned.

Other Parameters
    maxiter : int
        Maximum number of iterations to perform.
    gtol : float
        Stop when the norm of the gradient is less than gtol.
    norm : float
        Order of vector norm to use. Inf is max, -Inf is min.
    epsilon : float or ndarray
        If fprime is approximated, use this value for the step size
        (can be scalar or vector).
    full_output : bool
        If True, then return fopt, func_calls, grad_calls, and warnflag
        in addition to xopt.

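A short, self-contained use of fmin_cg with an analytic gradient; the quadratic test problem is our own illustrative choice:

```python
import numpy as np
from scipy.optimize import fmin_cg

# Convex quadratic with its minimum at the origin.
def f(x):
    return x[0] ** 2 + 2.0 * x[1] ** 2

def gradf(x):
    # Analytic gradient; if fprime is omitted, fmin_cg uses finite differences.
    return np.array([2.0 * x[0], 4.0 * x[1]])

xopt = fmin_cg(f, x0=[3.0, -2.0], fprime=gradf, disp=False)
```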
    retall : bool
        Return a list of results at each iteration if True.
    disp : bool
        Print convergence message if True.

Notes
Optimize the function f, whose gradient is given by fprime, using the nonlinear conjugate gradient algorithm of Polak and Ribiere. See Wright & Nocedal, 'Numerical Optimization', 1999, pp. 120-122.

scipy.optimize.fmin_bfgs(f, x0, fprime=None, args=(), gtol=1e-05, norm=inf, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)
    Minimize a function using the BFGS algorithm.

Parameters
    f : callable f(x, *args)
        Objective function to be minimized.
    x0 : ndarray
        Initial guess.
    fprime : callable f'(x, *args)
        Gradient of f.
    args : tuple
        Extra arguments passed to f and fprime.
    callback : callable
        An optional user-supplied function to call after each iteration,
        called as callback(xk), where xk is the current parameter vector.
    gtol : float
        Gradient norm must be less than gtol before successful termination.
    norm : float
        Order of norm (Inf is max, -Inf is min).
    epsilon : int or ndarray
        If fprime is approximated, use this value for the step size.

Returns
    xopt : ndarray
        Parameters which minimize f, i.e. f(xopt) == fopt.
    fopt : float
        Minimum value.
    gopt : ndarray
        Value of gradient at minimum, f'(xopt), which should be near 0.
    Bopt : ndarray
        Value of 1/f''(xopt), i.e. the inverse Hessian matrix.

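The full_output return tuple of fmin_bfgs can be unpacked as sketched below; the objective is an illustrative quadratic of our own, not from this guide:

```python
import numpy as np
from scipy.optimize import fmin_bfgs

# Convex quadratic with minimum at (1.0, 2.0).
def f(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def gradf(x):
    return np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])

# full_output=True returns xopt, fopt, gopt, Bopt, func_calls,
# grad_calls, warnflag, in that order.
xopt, fopt, gopt, Bopt, func_calls, grad_calls, warnflag = fmin_bfgs(
    f, x0=[0.0, 0.0], fprime=gradf, full_output=True, disp=False)
```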
    func_calls : int
        Number of function calls made.
    grad_calls : int
        Number of gradient calls made.
    warnflag : integer
        1 : Maximum number of iterations exceeded.
        2 : Gradient and/or function calls not changing.
    allvecs : list
        Results at each iteration. Only returned if retall is True.

Other Parameters
    maxiter : int
        Maximum number of iterations to perform.
    full_output : bool
        If True, return fopt, func_calls, grad_calls, and warnflag in
        addition to xopt.
    disp : bool
        Print convergence message if True.
    retall : bool
        Return a list of results at each iteration if True.

Notes
Optimize the function f, whose gradient is given by fprime, using the quasi-Newton method of Broyden, Fletcher, Goldfarb, and Shanno (BFGS).

References
Wright and Nocedal, 'Numerical Optimization', 1999, pg. 198.

scipy.optimize.fmin_ncg(f, x0, fprime, fhess_p=None, fhess=None, args=(), avextol=1e-05, epsilon=1.4901161193847656e-08, maxiter=None, full_output=0, disp=1, retall=0, callback=None)
    Unconstrained minimization of a function using the Newton-CG method.

Parameters
    f : callable f(x, *args)
        Objective function to be minimized.
    x0 : ndarray
        Initial guess.
    fprime : callable f'(x, *args)
        Gradient of f.
    fhess_p : callable fhess_p(x, p, *args)
        Function which computes the Hessian of f times an arbitrary
        vector, p.
    fhess : callable fhess(x, *args)
        Function to compute the Hessian matrix of f.

    args : tuple
        Extra arguments passed to f, fprime, fhess_p, and fhess (the same
        set of extra arguments is supplied to all of these functions).
    callback : callable
        An optional user-supplied function which is called after each
        iteration, as callback(xk), where xk is the current parameter
        vector.

Returns
    xopt : ndarray
        Parameters which minimize f, i.e. f(xopt) == fopt.
    fopt : float
        Value of the function at xopt, i.e. fopt = f(xopt).
    fcalls : int
        Number of function calls made.
    gcalls : int
        Number of gradient calls made.
    hcalls : int
        Number of Hessian calls made.
    warnflag : int
        Warnings generated by the algorithm.
        1 : Maximum number of iterations exceeded.
    allvecs : list
        The result at each iteration, if retall is True (see below).

Other Parameters
    avextol : float
        Convergence is assumed when the average relative error in the
        minimizer falls below this amount.
    epsilon : float or ndarray
        If fhess is approximated, use this value for the step size.
    maxiter : int
        Maximum number of iterations to perform.
    full_output : bool
        If True, return the optional outputs.
    disp : bool
        If True, print convergence message.
    retall : bool
        If True, return a list of results at each iteration.

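A minimal fmin_ncg call that supplies fhess directly (so fhess_p is not needed); the constant-Hessian quadratic is our own illustrative choice:

```python
import numpy as np
from scipy.optimize import fmin_ncg

# Quadratic with minimum at the origin; its Hessian is constant.
def f(x):
    return x[0] ** 2 + 4.0 * x[1] ** 2

def gradf(x):
    return np.array([2.0 * x[0], 8.0 * x[1]])

def hessf(x):
    # Supplying fhess means fhess_p is ignored.
    return np.array([[2.0, 0.0], [0.0, 8.0]])

xopt = fmin_ncg(f, x0=[1.0, 1.0], fprime=gradf, fhess=hessf, disp=False)
```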
Notes
Only one of fhess_p or fhess need be given. If fhess is provided, then fhess_p will be ignored. If neither fhess nor fhess_p is provided, then the Hessian product will be approximated using finite differences on fprime. fhess_p must compute the Hessian times an arbitrary vector. Newton-CG methods are also called truncated Newton methods.

This function differs from scipy.optimize.fmin_tnc because

1. scipy.optimize.fmin_ncg is written purely in Python using NumPy and SciPy, while scipy.optimize.fmin_tnc calls a C function.
2. scipy.optimize.fmin_ncg is only for unconstrained minimization, while scipy.optimize.fmin_tnc is for unconstrained minimization or box constrained minimization. (Box constraints give lower and upper bounds for each variable separately.)

References
Wright & Nocedal, 'Numerical Optimization', 1999, pg. 140.

scipy.optimize.leastsq(func, x0, args=(), Dfun=None, full_output=0, col_deriv=0, ftol=1.49012e-08, xtol=1.49012e-08, gtol=0.0, maxfev=0, epsfcn=0.0, factor=100, diag=None)
    Minimize the sum of squares of a set of equations:

        x = arg min(sum(func(y)**2, axis=0))
                 y

Parameters
    func : callable
        Should take at least one (possibly length-N vector) argument and
        return M floating point numbers.
    x0 : ndarray
        The starting estimate for the minimization.
    args : tuple
        Any extra arguments to func are placed in this tuple.
    Dfun : callable
        A function or method to compute the Jacobian of func with
        derivatives across the rows. If this is None, the Jacobian will
        be estimated.
    full_output : bool
        Non-zero to return all optional outputs.
    col_deriv : bool
        Non-zero to specify that the Jacobian function computes
        derivatives down the columns (faster, because there is no
        transpose operation).
    ftol : float
        Relative error desired in the sum of squares.
    xtol : float
        Relative error desired in the approximate solution.

    gtol : float
        Orthogonality desired between the function vector and the columns
        of the Jacobian.
    maxfev : int
        The maximum number of calls to the function. If zero, then
        100*(N+1) is the maximum, where N is the number of elements in x0.
    epsfcn : float
        A suitable step length for the forward-difference approximation of
        the Jacobian (for Dfun=None). If epsfcn is less than the machine
        precision, it is assumed that the relative errors in the functions
        are of the order of the machine precision.
    factor : float
        A parameter determining the initial step bound
        (factor * || diag * x ||). Should be in the interval (0.1, 100).
    diag : sequence
        N positive entries that serve as scale factors for the variables.

Returns
    x : ndarray
        The solution (or the result of the last iteration for an
        unsuccessful call).
    cov_x : ndarray
        Uses the fjac and ipvt optional outputs to construct an estimate
        of the Jacobian around the solution. None if a singular matrix was
        encountered (indicates very flat curvature in some direction).
        This matrix must be multiplied by the residual standard deviation
        to get the covariance of the parameter estimates; see curve_fit.
    infodict : dict
        A dictionary of optional outputs with the keys:
        'nfev' : the number of function calls
        'fvec' : the function evaluated at the output
        'fjac' : a permutation of the R matrix of a QR factorization of
                 the final approximate Jacobian matrix, stored column
                 wise. Together with ipvt, the covariance of the estimate
                 can be approximated.
        'ipvt' : an integer array of length N which defines a permutation
                 matrix, p, such that fjac*p = q*r, where r is upper
                 triangular with diagonal elements of nonincreasing
                 magnitude. Column j of p is column ipvt(j) of the
                 identity matrix.
        'qtf'  : the vector (transpose(q) * fvec)
    mesg : str
        A string message giving information about the cause of failure.
    ier : int
        An integer flag. If it is equal to 1, 2, 3 or 4, the solution was
        found. Otherwise, the solution was not found. In either case, the
        optional output variable 'mesg' gives more information.

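The leastsq interface can be sketched with a simple linear fit; the synthetic, noise-free data below are our own, not from this guide:

```python
import numpy as np
from scipy.optimize import leastsq

# Synthetic, noise-free data from y = 3*x + 1.
xdata = np.linspace(0.0, 1.0, 10)
ydata = 3.0 * xdata + 1.0

# Residual function: leastsq minimizes sum(residuals(p, ...)**2).
def residuals(p, x, y):
    return y - (p[0] * x + p[1])

popt, ier = leastsq(residuals, x0=[1.0, 0.0], args=(xdata, ydata))
```

With full_output=1 the call would also return cov_x, infodict, and mesg as described above.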
Notes
"leastsq" is a wrapper around MINPACK's lmdif and lmder algorithms.

cov_x is a Jacobian approximation to the Hessian of the least squares objective function. This approximation assumes that the objective function is based on the difference between some observed target data (ydata) and a (non-linear) function of the parameters f(xdata, params):

    func(params) = ydata - f(xdata, params)

so that the objective function is

    min   sum((ydata - f(xdata, params))**2, axis=0)
    params

Constrained (multivariate) optimization:

fmin_l_bfgs_b(func, x0[, fprime, args, ...])   Minimize a function func using the L-BFGS-B algorithm.
fmin_tnc(func, x0[, fprime, args, ...])        Minimize a function with variables subject to bounds, using gradient information in a truncated Newton algorithm.
fmin_cobyla(func, x0, cons[, args, ...])       Minimize a function using the Constrained Optimization BY Linear Approximation (COBYLA) method.
fmin_slsqp(func, x0[, eqcons, f_eqcons, ...])  Minimize a function using Sequential Least SQuares Programming.
nnls(A, b)                                     Solve argmin_x || Ax - b ||_2 for x >= 0.

scipy.optimize.fmin_l_bfgs_b(func, x0, fprime=None, args=(), approx_grad=0, bounds=None, m=10, factr=10000000.0, pgtol=1e-05, epsilon=1e-08, iprint=-1, maxfun=15000, disp=None)
    Minimize a function func using the L-BFGS-B algorithm.

Parameters
    func : callable f(x, *args)
        Function to minimise.
    x0 : ndarray
        Initial guess.
    fprime : callable fprime(x, *args)
        The gradient of func. If None, then func returns the function
        value and the gradient (f, g = func(x, *args)), unless
        approx_grad is True, in which case func returns only f.
    args : sequence
        Arguments to pass to func and fprime.
    approx_grad : bool
        Whether to approximate the gradient numerically (in which case
        func returns only the function value).
    bounds : list
        (min, max) pairs for each element in x, defining the bounds on
        that parameter. Use None for one of min or max when there is no
        bound in that direction.
    m : int

        The maximum number of variable metric corrections used to define
        the limited memory matrix. (The limited memory BFGS method does
        not store the full Hessian but uses this many terms in an
        approximation to it.)
    factr : float
        The iteration stops when
        (f^k - f^{k+1}) / max{|f^k|, |f^{k+1}|, 1} <= factr * eps,
        where eps is the machine precision. Typical values for factr are:
        1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for
        extremely high accuracy.
    pgtol : float
        The iteration will stop when
        max{|proj g_i| : i = 1, ..., n} <= pgtol,
        where pg_i is the i-th component of the projected gradient.
    epsilon : float
        Step size used when approx_grad is True, for numerically
        calculating the gradient.
    iprint : int
        Controls the frequency of output. iprint < 0 means no output.
    maxfun : int
        Maximum number of function evaluations.
    disp : int, optional
        If zero, then no output. If a positive number, then this
        over-rides iprint.

Returns
    x : array_like
        Estimated position of the minimum.
    f : float
        Value of func at the minimum.
    d : dict
        Information dictionary, which is automatically generated by the
        code:
        d['warnflag'] is
            0 if converged,
            1 if too many function evaluations,
            2 if stopped for another reason, given in d['task'];
        d['grad'] is the gradient at the minimum (should be 0 ish);
        d['funcalls'] is the number of function calls made.

Notes
License of L-BFGS-B (Fortran code): The version included here (in Fortran code) is 2.1 (released in 1997). It was written by Ciyou Zhu, Richard Byrd, and Jorge Nocedal <nocedal@ece.nwu.edu>. It carries the following condition for use: This software is freely available, but we expect that all publications describing work using this software, or all commercial products using it, quote at least one of the references given below.

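As a quick sketch of the (f, g) calling convention described above, where func returns both the value and the gradient (the bound-constrained quadratic is our own illustrative choice):

```python
import numpy as np
from scipy.optimize import fmin_l_bfgs_b

# func returns both the value and the gradient, so fprime is not needed.
def f_and_grad(x):
    f = (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
    g = np.array([2.0 * (x[0] - 1.0), 2.0 * (x[1] - 2.0)])
    return f, g

x, fval, info = fmin_l_bfgs_b(f_and_grad, x0=[0.0, 0.0],
                              bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Here the bounds are not binding, so the result matches the unconstrained minimum and info['warnflag'] is 0.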
References
R. H. Byrd, P. Lu and J. Nocedal (1995), A Limited Memory Algorithm for Bound Constrained Optimization, SIAM Journal on Scientific and Statistical Computing, 16(5), pp. 1190-1208.
C. Zhu, R. H. Byrd and J. Nocedal (1997), L-BFGS-B: Algorithm 778: L-BFGS-B, FORTRAN routines for large scale bound constrained optimization, ACM Transactions on Mathematical Software, Vol 23, Num. 4, pp. 550-560.

scipy.optimize.fmin_tnc(func, x0, fprime=None, args=(), approx_grad=0, bounds=None, epsilon=1e-08, scale=None, offset=None, messages=15, maxCGit=-1, maxfun=None, eta=-1, stepmx=0, accuracy=0, fmin=0, ftol=-1, xtol=-1, pgtol=-1, rescale=-1, disp=None)
    Minimize a function with variables subject to bounds, using gradient
    information in a truncated Newton algorithm. This method wraps a C
    implementation of the algorithm.

Parameters
    func : callable func(x, *args)
        Function to minimize. Must do one of:
        1. Return f and g, where f is the value of the function and g its
           gradient (a list of floats).
        2. Return the function value but supply the gradient function
           separately as fprime.
        3. Return the function value and set approx_grad=True.
        If the function returns None, the minimization is aborted.
    fprime : callable fprime(x, *args)
        Gradient of func. If None, then either func must return the
        function value and the gradient (f, g = func(x, *args)) or
        approx_grad must be True.
    x0 : list of floats
        Initial estimate of minimum.
    args : tuple
        Arguments to pass to function.
    approx_grad : bool
        If true, approximate the gradient numerically.
    bounds : list
        (min, max) pairs for each element in x0, defining the bounds on
        that parameter. Use None or +/-inf for one of min or max when
        there is no bound in that direction.
    epsilon : float
        Used if approx_grad is True. The stepsize in a finite difference
        approximation for fprime.
    scale : list of floats
        Scaling factors to apply to each variable. If None, the factors
        are up-low for interval bounded variables and 1+|x| for the
        others. Defaults to None.
    offset : float
        Value to subtract from each variable. If None, the offsets are
        (up+low)/2 for interval bounded variables and x for the others.
    messages : int
        Bit mask used to select messages displayed during minimization,
        values defined in the MSGS dict. Defaults to MGS_ALL.

    disp : int
        Integer interface to messages. 0 = no message, 5 = all messages.
    maxCGit : int
        Maximum number of hessian*vector evaluations per main iteration.
        If maxCGit == 0, the direction chosen is -gradient. If
        maxCGit < 0, maxCGit is set to max(1, min(50, n/2)).
        Defaults to -1.
    maxfun : int
        Maximum number of function evaluations. If None, maxfun is set to
        max(100, 10*len(x0)). Defaults to None.
    eta : float
        Severity of the line search. If < 0 or > 1, set to 0.25.
        Defaults to -1.
    stepmx : float
        Maximum step for the line search. May be increased during call.
        If too small, it will be set to 10.0. Defaults to 0.
    accuracy : float
        Relative precision for finite difference calculations. If
        <= machine_precision, set to sqrt(machine_precision).
        Defaults to 0.
    fmin : float
        Minimum function value estimate. Defaults to 0.
    ftol : float
        Precision goal for the value of f in the stopping criterion.
        If ftol < 0.0, ftol is set to 0.0. Defaults to -1.
    xtol : float
        Precision goal for the value of x in the stopping criterion
        (after applying x scaling factors). If xtol < 0.0, xtol is set
        to sqrt(machine_precision). Defaults to -1.
    pgtol : float
        Precision goal for the value of the projected gradient in the
        stopping criterion (after applying x scaling factors). If
        pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). Setting it
        to 0.0 is not recommended. Defaults to -1.
    rescale : float
        Scaling factor (in log10) used to trigger f value rescaling.
        If 0, rescale at each iteration. If a large value, never rescale.
        If < 0, rescale is set to 1.3. Defaults to -1.

Returns
    x : list of floats
        The solution.
    nfeval : int
        The number of function evaluations.
    rc : int
        Return code as defined in the RCSTRINGS dict.

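A minimal bound-constrained fmin_tnc call, using approx_grad so no gradient function is needed (the objective is an illustrative choice of our own):

```python
import numpy as np
from scipy.optimize import fmin_tnc

# Unconstrained minimum is at x = 3, but the upper bound of 2 is binding,
# so the constrained minimizer sits on the bound.
def f(x):
    return (x[0] - 3.0) ** 2

x, nfeval, rc = fmin_tnc(f, x0=[0.0], approx_grad=True,
                         bounds=[(-10.0, 2.0)], disp=0)
```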
Notes
The underlying algorithm is truncated Newton, also called Newton Conjugate-Gradient. This method differs from scipy.optimize.fmin_ncg in that

1. It wraps a C implementation of the algorithm.
2. It allows each variable to be given an upper and lower bound.

The algorithm incorporates the bound constraints by determining the descent direction as in an unconstrained truncated Newton, but never taking a step-size large enough to leave the space of feasible x's. The algorithm keeps track of a set of currently active constraints, and ignores them when computing the minimum allowable step size. (The x's associated with the active constraints are kept fixed.) If the maximum allowable step size is zero then a new constraint is added. At the end of each iteration one of the constraints may be deemed no longer active and removed. A constraint is considered no longer active if it is currently active but the gradient for that variable points inward from the constraint. The specific constraint removed is the one associated with the variable of largest index whose constraint is no longer active.

References
Wright S., Nocedal J. (2006), 'Numerical Optimization'.
Nash S.G. (1984), "Newton-Type Minimization Via the Lanczos Method", SIAM Journal of Numerical Analysis 21, pp. 770-778.

scipy.optimize.fmin_cobyla(func, x0, cons, args=(), consargs=None, rhobeg=1.0, rhoend=0.0001, iprint=1, maxfun=1000, disp=None)
    Minimize a function using the Constrained Optimization BY Linear
    Approximation (COBYLA) method. This method wraps a FORTRAN
    implementation of the algorithm.

Parameters
    func : callable
        Function to minimize. In the form func(x, *args).
    x0 : ndarray
        Initial guess.
    cons : sequence
        Constraint functions; must all be >= 0 (a single function if only
        1 constraint). Each function takes the parameters x as its first
        argument.
    args : tuple
        Extra arguments to pass to function.
    consargs : tuple
        Extra arguments to pass to constraint functions (default of None
        means use the same extra arguments as those passed to func).
        Use () for no extra arguments.
    rhobeg : float
        Reasonable initial changes to the variables.
    rhoend : float
        Final accuracy in the optimization (not precisely guaranteed).
        This is a lower bound on the size of the trust region.
    iprint : {0, 1, 2, 3}
        Controls the frequency of output; 0 implies no output.
        Deprecated.

    disp : {0, 1, 2, 3}
        Over-rides the iprint interface. Preferred.
    maxfun : int
        Maximum number of function evaluations.

Returns
    x : ndarray
        The argument that minimises f.

Notes
This algorithm is based on linear approximations to the objective function and each constraint. We briefly describe the algorithm.

Suppose the function is being minimized over k variables. At the jth iteration the algorithm has k+1 points v_1, ..., v_(k+1), an approximate solution x_j, and a radius RHO_j. It maintains linear (i.e. linear plus a constant) approximations to the objective function and constraint functions such that their function values agree with the linear approximation on the k+1 points v_1, ..., v_(k+1). This gives a linear program to solve (where the linear approximations of the constraint functions are constrained to be non-negative). However the linear approximations are likely only good approximations near the current simplex, so the linear program is given the further requirement that the solution, which will become x_(j+1), must be within RHO_j from x_j. RHO_j only decreases, never increases. The initial RHO_j is rhobeg and the final RHO_j is rhoend. In this way COBYLA's iterations behave like a trust region algorithm.

Additionally, the linear program may be inconsistent, or the approximation may give poor improvement. For details about how these issues are resolved, as well as how the points v_i are updated, refer to the source code or the references below.

References
Powell M.J.D. (1994), "A direct search optimization method that models the objective and constraint functions by linear interpolation", in Advances in Optimization and Numerical Analysis, eds. S. Gomez and J-P Hennart, Kluwer Academic (Dordrecht), pp. 51-67.
Powell M.J.D. (1998), "Direct search algorithms for optimization calculations", Acta Numerica 7, pp. 287-336.
Powell M.J.D. (2007), "A view of algorithms for optimization without derivatives", Cambridge University Technical Report DAMTP 2007/NA03.

Examples
Minimize the objective function f(x,y) = x*y subject to the constraints x**2 + y**2 < 1 and y > 0:

>>> def objective(x):
...     return x[0]*x[1]
...
>>> def constr1(x):
...     return 1 - (x[0]**2 + x[1]**2)
...
>>> def constr2(x):
...     return x[1]
...
>>> fmin_cobyla(objective, [0.0, 0.1], [constr1, constr2], rhoend=1e-7)

   Normal return from subroutine COBYLA

   NFVALS =   64   F =-5.000000E-01    MAXCV = 1.998401E-14
   X =-7.071067E-01   7.071069E-01
array([-0.70710685,  0.70710671])

The exact solution is (-sqrt(2)/2, sqrt(2)/2).

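The COBYLA example above can also be written as a plain script (disp=0 suppresses the FORTRAN summary output):

```python
import numpy as np
from scipy.optimize import fmin_cobyla

# Minimize x*y subject to x**2 + y**2 <= 1 and y >= 0; the exact
# solution is (-sqrt(2)/2, sqrt(2)/2).
def objective(x):
    return x[0] * x[1]

def constr1(x):
    return 1.0 - (x[0] ** 2 + x[1] ** 2)

def constr2(x):
    return x[1]

x = fmin_cobyla(objective, [0.0, 0.1], [constr1, constr2],
                rhoend=1e-7, disp=0)
```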
scipy.optimize.fmin_slsqp(func, x0, eqcons=[], f_eqcons=None, ieqcons=[], f_ieqcons=None, bounds=[], fprime=None, fprime_eqcons=None, fprime_ieqcons=None, args=(), iter=100, acc=1e-06, iprint=1, disp=None, full_output=0, epsilon=1.4901161193847656e-08)
    Minimize a function using Sequential Least SQuares Programming.

    Python interface function for the SLSQP Optimization subroutine
    originally implemented by Dieter Kraft.

Parameters
    func : callable f(x, *args)
        Objective function.
    x0 : 1-D ndarray of float
        Initial guess for the independent variable(s).
    eqcons : list
        A list of functions of length n such that
        eqcons[j](x, *args) == 0.0 in a successfully optimized problem.
    f_eqcons : callable f(x, *args)
        Returns a 1-D array in which each element must equal 0.0 in a
        successfully optimized problem. If f_eqcons is specified, eqcons
        is ignored.
    ieqcons : list
        A list of functions of length n such that
        ieqcons[j](x, *args) >= 0.0 in a successfully optimized problem.
    f_ieqcons : callable f(x, *args)
        Returns a 1-D ndarray in which each element must be greater than
        or equal to 0.0 in a successfully optimized problem. If f_ieqcons
        is specified, ieqcons is ignored.
    bounds : list
        A list of tuples specifying the lower and upper bound for each
        independent variable [(xl0, xu0), (xl1, xu1), ...]
    fprime : callable f(x, *args)
        A function that evaluates the partial derivatives of func.
    fprime_eqcons : callable f(x, *args)
        A function of the form f(x, *args) that returns the m by n array
        of equality constraint normals. If not provided, the normals will
        be approximated. The array returned by fprime_eqcons should be
        sized as (len(eqcons), len(x0)).
    fprime_ieqcons : callable f(x, *args)
        A function of the form f(x, *args) that returns the m by n array
        of inequality constraint normals. If not provided, the normals
        will be approximated. The array returned by fprime_ieqcons should
        be sized as (len(ieqcons), len(x0)).
    args : sequence

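A minimal fmin_slsqp call using the ieqcons list form; for comparison we reuse the same toy problem as the COBYLA example in this guide (minimize x*y on the upper half of the unit disc):

```python
import numpy as np
from scipy.optimize import fmin_slsqp

def objective(x):
    return x[0] * x[1]

ieqcons = [lambda x: 1.0 - (x[0] ** 2 + x[1] ** 2),  # inside the unit disc
           lambda x: x[1]]                           # upper half plane

# iprint=0 silences the iteration summary.
x = fmin_slsqp(objective, x0=[-0.5, 0.5], ieqcons=ieqcons, iprint=0)
```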
        Additional arguments passed to func and fprime.
    iter : int
        The maximum number of iterations.
    acc : float
        Requested accuracy.
    iprint : int
        The verbosity of fmin_slsqp:
        iprint <= 0 : Silent operation
        iprint == 1 : Print summary upon completion (default)
        iprint >= 2 : Print status of each iterate and summary
    disp : int
        Over-rides the iprint interface (preferred).
    full_output : bool
        If False, return only the minimizer of func (default). Otherwise,
        output final objective function and summary information.
    epsilon : float
        The step size for finite-difference derivative estimates.

Returns
    out : ndarray of float
        The final minimizer of func.
    fx : ndarray of float, if full_output is true
        The final value of the objective function.
    its : int, if full_output is true
        The number of iterations.
    imode : int, if full_output is true
        The exit mode from the optimizer (see below).
    smode : string, if full_output is true
        Message describing the exit mode from the optimizer.

Notes
Exit modes are defined as follows:

    -1 : Gradient evaluation required (g & a)
     0 : Optimization terminated successfully
     1 : Function evaluation required (f & c)
     2 : More equality constraints than independent variables
     3 : More than 3*n iterations in LSQ subproblem
     4 : Inequality constraints incompatible
     5 : Singular matrix E in LSQ subproblem
     6 : Singular matrix C in LSQ subproblem
     7 : Rank-deficient equality constraint subproblem HFTI

     8 : Positive directional derivative for linesearch
     9 : Iteration limit exceeded

Examples
Examples are given in the tutorial.

scipy.optimize.nnls(A, b)
    Solve argmin_x || Ax - b ||_2 for x >= 0. This is a wrapper for a
    FORTRAN non-negative least squares solver.

Parameters
    A : ndarray
        Matrix A as shown above.
    b : ndarray
        Right-hand side vector.

Returns
    x : ndarray
        Solution vector.
    rnorm : float
        The residual, || Ax-b ||_2.

Notes
The FORTRAN code was published in the book below. The algorithm is an active set method. It solves the KKT (Karush-Kuhn-Tucker) conditions for the non-negative least squares problem.

References
Lawson C., Hanson R.J. (1987), Solving Least Squares Problems, SIAM.

Global optimization:

anneal(func, x0[, args, schedule, ...])          Minimize a function using simulated annealing.
brute(func, ranges[, args, Ns, full_output, ...]) Minimize a function over a given range by brute force.

scipy.optimize.anneal(func, x0, args=(), schedule='fast', full_output=0, T0=None, Tf=1e-12, maxeval=None, maxaccept=None, maxiter=400, boltzmann=1.0, learn_rate=0.5, feps=1e-06, quench=1.0, m=1.0, n=1.0, lower=-100, upper=100, dwell=50)
    Minimize a function using simulated annealing.

    Schedule is a schedule class implementing the annealing schedule.
    Available ones are 'fast', 'cauchy', 'boltzmann'.

Parameters
    func : callable f(x, *args)
        Function to be optimized.
    x0 : ndarray
        Initial guess.
    args : tuple

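A tiny nnls example on a system of our own construction, chosen so that the non-negativity constraint is active:

```python
import numpy as np
from scipy.optimize import nnls

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, -1.0, 0.0])

# The unconstrained least-squares solution is [1, -1]; nnls clamps the
# negative component to zero and re-solves, giving x = [0.5, 0].
x, rnorm = nnls(A, b)
```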
        Extra parameters to func.
    schedule : base_schedule
        Annealing schedule to use (a class).
    full_output : bool
        Whether to return optional outputs.
    T0 : float
        Initial Temperature (estimated as 1.2 times the largest
        cost-function deviation over random points in the range).
    Tf : float
        Final goal temperature.
    maxeval : int
        Maximum function evaluations.
    maxaccept : int
        Maximum changes to accept.
    maxiter : int
        Maximum cooling iterations.
    boltzmann : float
        Boltzmann constant in acceptance test (increase for less
        stringent test at each temperature).
    learn_rate : float
        Scale constant for adjusting guesses.
    feps : float
        Stopping relative error tolerance for the function value in the
        last four coolings.
    quench, m, n : float
        Parameters to alter the fast_sa schedule.
    lower, upper : float or ndarray
        Lower and upper bounds on x.
    dwell : int
        The number of times to search the space at each temperature.

Returns
    xmin : ndarray
        Point giving smallest value found.
    Jmin : float
        Minimum value of function found.
    T : float
        Final temperature.
    feval : int
        Number of function evaluations.

    iters : int
        Number of cooling iterations.
    accept : int
        Number of tests accepted.
    retval : int
        Flag indicating stopping condition:
        0 : Points no longer changing
        1 : Cooled to final temperature
        2 : Maximum function evaluations
        3 : Maximum cooling iterations reached
        4 : Maximum accepted query locations reached
        5 : Final point not the minimum amongst encountered points

Notes
Simulated annealing is a random algorithm which uses no derivative information from the function being optimized. In practice it has been more useful in discrete optimization than continuous optimization, as there are usually better algorithms for continuous optimization problems.

Some experimentation by trying the different temperature schedules and altering their parameters is likely required to obtain good performance.

The randomness in the algorithm comes from random sampling in numpy. To obtain the same results you can call numpy.random.seed with the same seed immediately before calling scipy.optimize.anneal.

We give a brief description of how the three temperature schedules generate new points and vary their temperature. Temperatures are only updated with iterations in the outer loop. The inner loop is over xrange(dwell), and new points are generated for every iteration in the inner loop. (Though whether the proposed new points are accepted is probabilistic.)

For readability, let d denote the dimension of the inputs to func. Also, let x_old denote the previous state, and k denote the iteration number of the outer loop. All other variables not defined below are input variables to scipy.optimize.anneal itself.

In the 'fast' schedule the updates are

    u ~ Uniform(0, 1, size=d)
    y = sgn(u - 0.5) * T * ((1 + 1/T)**abs(2*u - 1) - 1.0)
    xc = y * (upper - lower)
    x_new = x_old + xc

    c = n * exp(-n * quench)
    T_new = T0 * exp(-c * k**quench)

In the 'cauchy' schedule the updates are

    u ~ Uniform(-pi/2, pi/2, size=d)
    xc = learn_rate * T * tan(u)
    x_new = x_old + xc

    T_new = T0 / (1 + k)

In the 'boltzmann' schedule the updates are

    std = minimum( sqrt(T) * ones(d), (upper - lower) / (3 * learn_rate) )
    y ~ Normal(0, std, size=d)
    x_new = x_old + learn_rate * y

    T_new = T0 / log(1 + k)

scipy.optimize.brute(func, ranges, args=(), Ns=20, full_output=0, finish=<function fmin>)
Minimize a function over a given range by brute force.

Parameters
func : callable f(x, *args)
    Objective function to be minimized.
ranges : tuple
    Each element is a tuple of parameters or a slice object to be handed to numpy.mgrid.
args : tuple
    Extra arguments passed to function.
Ns : int
    Default number of samples, if those are not provided.
full_output : bool
    If True, return the evaluation grid.
finish : callable, optional
    An optimization function that is called with the result of brute force minimization as initial guess. finish should take the initial guess as positional argument, and take args, full_output and disp as keyword arguments. See Notes for more details.

Returns
x0 : ndarray
    Value of arguments to func, giving minimum over the grid.
fval : int
    Function value at minimum.
grid : tuple
    Representation of the evaluation grid. It has the same length as x0.
Jout : ndarray
    Function values over grid: Jout = func(*grid).

Notes
The range is respected by the brute force minimization, but if the finish keyword specifies another optimization function (including the default fmin), the returned value may still be (just) outside the range. In order to ensure the range is specified, use finish=None.
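As a short sketch of the calling pattern (the objective and ranges below are made up for illustration, not taken from the manual), brute can minimize a two-variable function over a rectangular range and then polish the best grid point with the default fmin finisher:

```python
import numpy as np
from scipy.optimize import brute

# Hypothetical objective with its minimum at (1, -2).
def f(z):
    x, y = z
    return (x - 1.0) ** 2 + (y + 2.0) ** 2

# Each entry of `ranges` is a (min, max) pair; brute evaluates Ns points
# per axis and then polishes the best grid point with fmin (the default
# `finish` function), so the result is more accurate than the grid spacing.
xmin = brute(f, ranges=((-4, 4), (-4, 4)), Ns=25)
print(xmin)  # near [1.0, -2.0]
```

Passing finish=None instead would return the best raw grid point, which is guaranteed to stay inside the given ranges.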

Scalar function minimizers

fminbound(func, x1, x2[, args, xtol, maxfun, ...])    Bounded minimization for scalar functions.
brent(func[, args, brack, tol, full_output, ...])     Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
golden(func[, args, brack, tol, full_output])         Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.
bracket(func[, xa, xb, args, grow_limit, maxiter])    Given a function and distinct initial points, search in the downhill direction (as defined by the initial points) and return new points xa, xb, xc that bracket the minimum of the function f(xa) > f(xb) < f(xc).

scipy.optimize.fminbound(func, x1, x2, args=(), xtol=1.0000000000000001e-05, maxfun=500, full_output=0, disp=1)
Bounded minimization for scalar functions.

Parameters
func : callable f(x, *args)
    Objective function to be minimized (must accept and return scalars).
x1, x2 : float or array scalar
    The optimization bounds.
args : tuple
    Extra arguments passed to function.
xtol : float
    The convergence tolerance.
maxfun : int
    Maximum number of function evaluations allowed.
full_output : bool
    If True, return optional outputs.
disp : int
    If non-zero, print messages.
        0 : no message printing.
        1 : non-convergence notification messages only.
        2 : print a message on convergence too.
        3 : print iteration results.

Returns
xopt : ndarray
    Parameters (over given interval) which minimize the objective function.
fval : number
    The function value at the minimum point.
ierr : int
    An error flag (0 if converged, 1 if maximum number of function calls reached).
numfunc : int
    The number of function calls made.
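A minimal sketch of fminbound on a made-up objective (the parabola and its bounds are illustrative assumptions, not from the manual):

```python
from scipy.optimize import fminbound

# The parabola below has its minimum at x = 2.5, which lies strictly
# inside the bounds [0, 4], so fminbound can isolate it.
xopt = fminbound(lambda x: (x - 2.5) ** 2, 0.0, 4.0)
print(xopt)  # approximately 2.5
```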

Notes
Finds a local minimizer of the scalar function func in the interval x1 < xopt < x2 using Brent's method. (See brent for auto-bracketing.)

scipy.optimize.brent(func, args=(), brack=None, tol=1.48e-08, full_output=0, maxiter=500)
Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.

Parameters
func : callable f(x, *args)
    Objective function.
args : tuple
    Additional arguments (if present).
brack : tuple
    Triple (a, b, c) where (a < b < c) and func(b) < func(a), func(c). If bracket consists of two numbers (a, c) then they are assumed to be a starting interval for a downhill bracket search (see bracket); it doesn't always mean that the obtained solution will satisfy a <= x <= c.
full_output : bool
    If True, return all output args (xmin, fval, iter, funcalls).

Returns
xmin : ndarray
    Optimum point.
fval : float
    Optimum value.
iter : int
    Number of iterations.
funcalls : int
    Number of objective function evaluations made.

Notes
Uses inverse parabolic interpolation when possible to speed up convergence of golden section method.

scipy.optimize.golden(func, args=(), brack=None, tol=1.4901161193847656e-08, full_output=0)
Given a function of one-variable and a possible bracketing interval, return the minimum of the function isolated to a fractional precision of tol.

Parameters
func : callable func(x, *args)
    Objective function to minimize.
args : tuple
    Additional arguments (if present), passed to func.
brack : tuple

    Triple (a, b, c), where (a < b < c) and func(b) < func(a), func(c). If bracket consists of two numbers (a, c), then they are assumed to be a starting interval for a downhill bracket search (see bracket); it doesn't always mean that the obtained solution will satisfy a <= x <= c.
tol : float
    x tolerance stop criterion
full_output : bool
    If True, return optional outputs.

Notes
Uses analog of bisection method to decrease the bracketed interval.

scipy.optimize.bracket(func, xa=0.0, xb=1.0, args=(), grow_limit=110.0, maxiter=1000)
Given a function and distinct initial points, search in the downhill direction (as defined by the initial points) and return new points xa, xb, xc that bracket the minimum of the function f(xa) > f(xb) < f(xc). It doesn't always mean that the obtained solution will satisfy xa <= x <= xb.

Parameters
func : callable f(x, *args)
    Objective function to minimize.
xa, xb : float
    Bracketing interval.
args : tuple
    Additional arguments (if present), passed to func.
grow_limit : float
    Maximum grow limit.
maxiter : int
    Maximum number of iterations to perform.

Returns
xa, xb, xc : float
    Bracket.
fa, fb, fc : float
    Objective function values in bracket.
funcalls : int
    Number of function evaluations made.

4.14.2 Fitting

curve_fit(f, xdata, ydata[, p0, sigma], **kw)    Use non-linear least squares to fit a function, f, to data.

scipy.optimize.curve_fit(f, xdata, ydata, p0=None, sigma=None, **kw)
Use non-linear least squares to fit a function, f, to data.

Assumes ydata = f(xdata, *params) + eps

Parameters
f : callable
    The model function, f(x, ...). It must take the independent variable as the first argument and the parameters to fit as separate remaining arguments.
xdata : An N-length sequence or an (k,N)-shaped array for functions with k predictors.
    The independent variable where the data is measured.
ydata : N-length sequence
    The dependent data, nominally f(xdata, ...).
p0 : None, scalar, or M-length sequence
    Initial guess for the parameters. If None, then the initial values will all be 1 (if the number of parameters for the function can be determined using introspection; otherwise a ValueError is raised).
sigma : None or N-length sequence
    If not None, it represents the standard-deviation of ydata. This vector, if given, will be used as weights in the least-squares problem.

Returns
popt : array
    Optimal values for the parameters so that the sum of the squared error of f(xdata, *popt) - ydata is minimized.
pcov : 2d array
    The estimated covariance of popt. The diagonals provide the variance of the parameter estimate.

See Also:
leastsq

Notes
The algorithm uses the Levenberg-Marquardt algorithm through leastsq. Additional keyword arguments are passed directly to that algorithm.

Examples
>>> import numpy as np
>>> from scipy.optimize import curve_fit
>>> def func(x, a, b, c):
...     return a*np.exp(-b*x) + c

>>> x = np.linspace(0, 4, 50)
>>> y = func(x, 2.5, 1.3, 0.5)
>>> yn = y + 0.2*np.random.normal(size=len(x))

>>> popt, pcov = curve_fit(func, x, yn)
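Building on the example above, the diagonal of pcov can be turned into one-sigma uncertainty estimates for the fitted parameters. The sketch below is an assumption-laden variation (same synthetic model, but noise-free data and an explicit rough initial guess so the fit is exactly reproducible):

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * np.exp(-b * x) + c

x = np.linspace(0, 4, 50)
y = func(x, 2.5, 1.3, 0.5)  # noise-free synthetic data

# Fit from a deliberately rough initial guess.
popt, pcov = curve_fit(func, x, y, p0=(1.0, 1.0, 1.0))

# The diagonals of pcov are the parameter variances; their square roots
# are the usual one-sigma uncertainty estimates.
perr = np.sqrt(np.diag(pcov))
print(popt)  # close to [2.5, 1.3, 0.5]
```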

4.14.3 Root finding

Scalar functions

brentq(f, a, b[, args, xtol, rtol, maxiter, ...])    Find a root of a function in given interval.
brenth(f, a, b[, args, xtol, rtol, maxiter, ...])    Find root of f in [a,b].
ridder(f, a, b[, args, xtol, rtol, maxiter, ...])    Find a root of a function in an interval.
bisect(f, a, b[, args, xtol, rtol, maxiter, ...])    Find root of f in [a,b].
newton(func, x0[, fprime, args, tol, maxiter])       Find a zero using the Newton-Raphson or secant method.

scipy.optimize.brentq(f, a, b, args=(), xtol=9.9999999999999998e-13, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)
Find a root of a function in given interval.

Return float, a zero of f between a and b. f must be a continuous function, and [a,b] must be a sign changing interval.

Description: Uses the classic Brent (1973) method to find a zero of the function f on the sign changing interval [a, b]. Generally considered the best of the rootfinding routines here. It is a safe version of the secant method that uses inverse quadratic extrapolation. Brent's method combines root bracketing, interval bisection, and inverse quadratic interpolation. It is sometimes known as the van Wijngaarden-Deker-Brent method. Brent (1973) claims convergence is guaranteed for functions computable within [a,b].

[Brent1973] provides the classic description of the algorithm. Another description can be found in a recent edition of Numerical Recipes, including [PressEtal1992]. Another description is at http://mathworld.wolfram.com/BrentsMethod.html. It should be easy to understand the algorithm just by reading our code. Our code diverges a bit from standard presentations: we choose a different formula for the extrapolation step.

Parameters
f : function
    Python function returning a number. f must be continuous, and f(a) and f(b) must have opposite signs.
a : number
    One end of the bracketing interval [a,b].
b : number
    The other end of the bracketing interval [a,b].
xtol : number, optional
    The routine converges when a root is known to lie within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
maxiter : number, optional
    If convergence is not achieved in maxiter iterations, an error is raised. Must be >= 0.
args : tuple, optional
    Containing extra arguments for the function f. f is called by apply(f, (x)+args).
full_output : bool, optional
    If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object.
disp : bool, optional
    If True, raise RuntimeError if the algorithm didn't converge.

Returns
x0 : float
    Zero of f between a and b.
r : RootResults (present if full_output = True)
    Object containing information about the convergence. In particular, r.converged is True if the routine converged.

See Also:
multivariate : fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg
nonlinear : leastsq
constrained : fmin_l_bfgs_b, fmin_tnc, fmin_cobyla
global : anneal, brute
local : fminbound, brent, golden, bracket
n-dimensional : fsolve
one-dimensional : brentq, brenth, ridder, bisect, newton
scalar : fixed_point

Notes
f must be continuous. f(a) and f(b) can not have the same signs.

References
[Brent1973], [PressEtal1992]

scipy.optimize.brenth(f, a, b, args=(), xtol=9.9999999999999998e-13, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)
Find root of f in [a,b].

A variation on the classic Brent routine to find a zero of the function f between the arguments a and b that uses hyperbolic extrapolation instead of inverse quadratic extrapolation. There was a paper back in the 1980's ... The version here is by Chuck Harris. It is a safe version of the secant method that uses hyperbolic extrapolation. Generally on a par with the brent routine, but not as heavily tested.

Parameters
f : function
    Python function returning a number. f must be continuous, and f(a) and f(b) must have opposite signs.
a : number

    One end of the bracketing interval [a,b].
b : number
    The other end of the bracketing interval [a,b].
xtol : number, optional
    The routine converges when a root is known to lie within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
maxiter : number, optional
    If convergence is not achieved in maxiter iterations, an error is raised. Must be >= 0.
args : tuple, optional
    Containing extra arguments for the function f. f is called by apply(f, (x)+args).
full_output : bool, optional
    If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object.
disp : bool, optional
    If True, raise RuntimeError if the algorithm didn't converge.

Returns
x0 : float
    Zero of f between a and b.
r : RootResults (present if full_output = True)
    Object containing information about the convergence. In particular, r.converged is True if the routine converged.

See Also:
fmin, fmin_powell, fmin_cg, fmin_bfgs, fmin_ncg
leastsq : nonlinear least squares minimizer
fmin_l_bfgs_b, fmin_tnc, fmin_cobyla, anneal, brute, fminbound, brent, golden, bracket
fsolve : n-dimensional root-finding
brentq, brenth, ridder, bisect, newton
fixed_point : scalar fixed-point finder

scipy.

optimize.ridder(f, a, b, args=(), xtol=9.9999999999999998e-13, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)
Find a root of a function in an interval.

Parameters
f : function
    Python function returning a number. f must be continuous, and f(a) and f(b) must have opposite signs.
a : number
    One end of the bracketing interval [a,b].
b : number
    The other end of the bracketing interval [a,b].
xtol : number, optional
    The routine converges when a root is known to lie within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
maxiter : number, optional
    If convergence is not achieved in maxiter iterations, an error is raised. Must be >= 0.
args : tuple, optional
    Containing extra arguments for the function f. f is called by apply(f, (x)+args).
full_output : bool, optional
    If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object.
disp : bool, optional
    If True, raise RuntimeError if the algorithm didn't converge.

Returns
x0 : float
    Zero of f between a and b.
r : RootResults (present if full_output = True)
    Object containing information about the convergence. In particular, r.converged is True if the routine converged.

See Also:
brentq, brenth, bisect, newton
fixed_point : scalar fixed-point finder

Notes
Uses [Ridders1979] method to find a zero of the function f between the arguments a and b. Ridders' method is faster than bisection, but not generally as fast as the Brent routines. [Ridders1979] provides the classic description and source of the algorithm. A description can also be found in any recent edition of Numerical Recipes. The routine used here diverges slightly from standard presentations in order to be a bit more careful of tolerance.
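All of the bracketing root finders above share the same calling pattern. A minimal sketch with brentq, the routine recommended earlier (the function and interval below are made-up illustrations):

```python
from scipy.optimize import brentq

# x**2 - 2 is negative at 0 and positive at 2, so [0, 2] is a sign
# changing interval that brackets sqrt(2).
root = brentq(lambda x: x ** 2 - 2.0, 0.0, 2.0)
print(root)  # approximately 1.4142135
```

Swapping brentq for brenth or ridder in the call above exercises the same interface with a different extrapolation strategy.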

References
[Ridders1979]

scipy.optimize.bisect(f, a, b, args=(), xtol=9.9999999999999998e-13, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)
Find root of f in [a,b].

Basic bisection routine to find a zero of the function f between the arguments a and b. f(a) and f(b) can not have the same signs. Slow but sure.

Parameters
f : function
    Python function returning a number. f must be continuous, and f(a) and f(b) must have opposite signs.
a : number
    One end of the bracketing interval [a,b].
b : number
    The other end of the bracketing interval [a,b].
xtol : number, optional
    The routine converges when a root is known to lie within xtol of the returned value. Should be >= 0. The routine modifies this to take into account the relative precision of doubles.
maxiter : number, optional
    If convergence is not achieved in maxiter iterations, an error is raised. Must be >= 0.
args : tuple, optional
    Containing extra arguments for the function f. f is called by apply(f, (x)+args).
full_output : bool, optional
    If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object.
disp : bool, optional
    If True, raise RuntimeError if the algorithm didn't converge.

Returns
x0 : float
    Zero of f between a and b.
r : RootResults (present if full_output = True)
    Object containing information about the convergence. In particular, r.converged is True if the routine converged.

See Also:
brentq, brenth, ridder, newton
fixed_point : scalar fixed-point finder
fsolve : n-dimensional root-finding
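A hedged sketch of bisect on a made-up equation (cos(x) = x, whose root is the so-called Dottie number):

```python
import math
from scipy.optimize import bisect

# cos(x) - x is positive at 0 and negative at 1, so [0, 1] brackets
# the single root; bisection halves the interval until xtol is met.
root = bisect(lambda x: math.cos(x) - x, 0.0, 1.0)
print(root)  # approximately 0.739085
```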

scipy.optimize.newton(func, x0, fprime=None, args=(), tol=1.48e-08, maxiter=50)
Find a zero using the Newton-Raphson or secant method.

Find a zero of the function func given a nearby starting point x0. The Newton-Raphson method is used if the derivative fprime of func is provided, otherwise the secant method is used.

Parameters
func : function
    The function whose zero is wanted. It must be a function of a single variable of the form f(x,a,b,c...), where a,b,c... are extra arguments that can be passed in the args parameter.
x0 : float
    An initial estimate of the zero that should be somewhere near the actual zero.
fprime : function, optional
    The derivative of the function when available and convenient. If it is None (default), then the secant method is used.
args : tuple, optional
    Extra arguments to be used in the function call.
tol : float, optional
    The allowable error of the zero value.
maxiter : int, optional
    Maximum number of iterations.

Returns
zero : float
    Estimated location where function is zero.

See Also:
brentq, brenth, ridder, bisect
fsolve : find zeroes in n dimensions.

Notes
The convergence rate of the Newton-Raphson method is quadratic while that of the secant method is somewhat less. This means that if the function is well behaved the actual error in the estimated zero is approximately the square of the requested tolerance up to roundoff error. However, the stopping criterion used here is the step size and there is no guarantee that a zero has been found. Consequently the result should be verified. Safer algorithms are brentq, brenth, ridder, and bisect, but they all require that the root first be bracketed in an interval where the function changes sign. The brentq algorithm is recommended for general use in one dimensional problems when such an interval has been found.

Fixed point finding:

fixed_point(func, x0[, args, xtol, maxiter])    Find the point where func(x) == x

scipy.optimize.fixed_point(func, x0, args=(), xtol=1e-08, maxiter=500)
Find the point where func(x) == x
3]. c1.optimize import fixed_point >>> def func(x. 80 Examples >>> from numpy import sqrt.. **kw[. Uses Steffensen’s Method using Aitken’s Del^2 convergence acceleration. . By default. col_deriv : bool Specify whether the Jacobian function computes derivatives down the columns (faster.]) >>> fixed_point(func. epsfcn=0. args. xtol=1. c2): return sqrt(c1/(x+c2)) >>> c1 = array([10. xin. alpha. using Broyden’s second Jacobian approximation. . Reference .fsolve(func.]) >>> c2 = array([3. because there is no transpose operation). using Broyden’s ﬁrst Jacobian approximation. “Numerical Analysis”. xin. pg. iter. Find a root of a function. **kw[. alpha. Find a root of a function. ﬁnd a ﬁxed-point of the function: i..4920333 . scipy. x0 : ndarray The starting estimate for the roots of func(x) = 0. [1. Release 0. fprime : callable(x) A function to compute the Jacobian of func with derivatives across the rows.2..SciPy Reference Guide.]) broyden1(F. args=().]) broyden2(F. full_output=0. iter. fprime=None. the Jacobian will be estimated. .. See Burden.49012e08.12.optimize. full_output : bool If True. factor=100. return optional outputs. args : tuple Any extra arguments to func. Returns x : ndarray The solution (or the result of the last iteration for an unsuccessful call).e.0rc1 Given a function of one or more variables and a starting point.0. diag=None) Find the roots of a function. array >>> from scipy. args=(c1. *args) A function that takes at least one (possibly vector) argument.. fprime. 1. where func(x)=x.c2)) array([ 1.10. Parameters func : callable f(x. band=None.37228132]) Multidimensional General nonlinear solvers: fsolve(func. x0[. 5th edition. maxfev=0.]) Find the roots of a function.. Faires. Return the roots of the (non-linear) equations deﬁned by func(x) = 0 given a starting estimate. 5. col_deriv=0. infodict : dict 418 Chapter 4. 1. x0.

SciPy Reference Guide. the Jacobi matrix is considered banded (only for fprime=None). max_rank=None. diag : sequence N positive entries that serve as a scale factors for the variables. tol_norm=None. factor : ﬂoat A parameter determining the initial step bound (factor * || diag * x||). stored column wise * ’r’: upper triangular matrix produced by QR factorization of same matrix * ’qtf’: the vector (transpose(q) * fvec) * * * * ’nfev’: ’njev’: ’fvec’: ’fjac’: ier : int An integer ﬂag. Notes fsolve is a wrapper around MINPACK’s hybrd and hybrj algorithms. Set to 1 if a solution was found.optimize. mesg : str If no solution is found. iter=None. then 100*(N+1) is the maximum where N is the number of elements in x0. f_rtol=None. If epsfcn is less than the machine precision. x_rtol=None. mesg details the cause of failure. using Broyden’s ﬁrst Jacobian approximation. Other Parameters xtol : ﬂoat The calculation will terminate if the relative error between two consecutive iterates is at most xtol. maxfev : int The maximum number of calls to the function. **kw) Find a root of a function. it is assumed that the relative errors in the functions are of the order of the machine precision.and super-diagonals within the band of the Jacobi matrix.1. Release 0. produced by the QR factorization of the final approximate Jacobian matrix. xin. 4.0rc1 A dictionary of optional outputs with the keys: number of function calls number of Jacobian calls function evaluated at the output the orthogonal matrix. maxiter=None. 100). otherwise refer to mesg for more information. scipy.broyden1(F. line_search=’armijo’. Should be in the interval (0. band : tuple If set to a two-sequence containing the number of sub. reduction_method=’restart’. f_tol=None. alpha=None.14. q. callback=None. x_tol=None.10. Optimization and root ﬁnding (scipy. verbose=False. This method is also known as “Broyden’s good method”. If zero.optimize) 419 . 
epsfcn : ﬂoat A suitable step length for the forward-difference approximation of the Jacobian (for fprime=None).

Release 0. reduction_method : str or tuple. optimization is terminated as successful. • simple: drop oldest matrix column. optional Maximum number of iterations to make.. param1. Reference . Default is ‘‘max_rank . no rank reduction). not used. optional Initial guess for the Jacobian is (-1/alpha). If omitted. Has no extra parameters. .10. default is 6e-6. If omitted. make as many as required to meet tolerances. or a tuple of the form (method. Default is inﬁnity (ie.0rc1 Parameters F : function(x) -> f Function whose root to ﬁnd. x_tol : ﬂoat. optional Absolute minimum step size.) that gives the name of the method and values for additional parameters. x0 : array-like Initial guess for the solution alpha : ﬂoat. optional Relative tolerance for the residual. f_rtol : ﬂoat. optional Maximum rank for the Broyden matrix. • svd: keep only the most signiﬁcant SVD components. not used. optional Relative minimum step size. NoConvergence is raised. maxiter : int.. If the step size is smaller than this.. param2. x_rtol : ﬂoat. Methods available: • restart: drop all matrix columns. optional Method used in ensuring that the rank of the Broyden matrix stays low. 420 Chapter 4. If more are needed to meet convergence. Can either be a string giving the name of the method. should take and return an array-like object. Extra parameters: – to_retain‘: number of SVD components to retain when rank reduction is done. If omitted. not used. max_rank : int. f_tol : ﬂoat. verbose : bool. If omitted. Has no extra parameters. If omitted (default). optional Number of iterations to make. optional Absolute tolerance (in max-norm) for the residual.SciPy Reference Guide. optional Print status to stdout on every iteration.2. as determined from the Jacobian approximation. iter : int.

Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution. optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation.10. callback : function. Default is the maximum norm. alpha=None. x_rtol=None. iter=None. Parameters F : function(x) -> f Function whose root to ﬁnd. x_tol=None. **kw) Find a root of a function.broyden2(F. This method is also known as “Broyden’s bad method”. tol_norm=None. line_search=’armijo’. x0 : array-like Initial guess for the solution alpha : ﬂoat.optimize) 421 . max_rank=None. optional Optional callback function. ‘armijo’ (default). Optimization and root ﬁnding (scipy. maxiter=None. Raises NoConvergence : When a solution was not found. xin. f_rtol=None. reduction_method : str or tuple.optimize. optional 4.SciPy Reference Guide. f) where x is the current solution and f the corresponding residual. It is called on every iteration as callback(x. optional Initial guess for the Jacobian is (-1/alpha).0rc1 tol_norm : function(vector) -> scalar. using Broyden’s second Jacobian approximation. callback=None. Notes This algorithm implements the inverse Jacobian Quasi-Newton update H+ = H + (dx − Hdf )dx† H/(dx† Hdf ) which corresponds to Broyden’s ﬁrst Jacobian update J+ = J + (df − Jdx)dx† /dx† dx References [vR] scipy. ‘wolfe’}. line_search : {None. should take and return an array-like object. f_tol=None. Defaults to ‘armijo’.14. Release 0. optional Norm to use in convergence check. reduction_method=’restart’. verbose=False.

If more are needed to meet convergence, NoConvergence is raised.
tol_norm : function(vector) -> scalar, optional
    Norm to use in convergence check. Default is the maximum norm.
line_search : {None, 'armijo' (default), 'wolfe'}, optional
    Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. Defaults to 'armijo'.
callback : function, optional
    Optional callback function. It is called on every iteration as callback(x, f) where x is the current solution and f the corresponding residual.

Returns
sol : array-like
    An array (of similar array type as x0) containing the final solution.

Raises
NoConvergence : When a solution was not found.

Notes
This algorithm implements the inverse Jacobian Quasi-Newton update

    H+ = H + (dx − H df) dx† H / (dx† H df)

which corresponds to Broyden's first Jacobian update

    J+ = J + (df − J dx) dx† / (dx† dx)

References
[vR]

scipy.optimize.broyden2(F, xin, iter=None, alpha=None, reduction_method='restart', max_rank=None, verbose=False, maxiter=None, f_tol=None, f_rtol=None, x_tol=None, x_rtol=None, tol_norm=None, line_search='armijo', callback=None, **kw)
Find a root of a function, using Broyden's second Jacobian approximation.

This method is also known as "Broyden's bad method".

Parameters
F : function(x) -> f
    Function whose root to find; should take and return an array-like object.
x0 : array-like
    Initial guess for the solution
alpha : float, optional
    Initial guess for the Jacobian is (-1/alpha).
reduction_method : str or tuple, optional

‘bicgstab’. using Krylov approximation for inverse Jacobian.sparse. outer_k=10.. ‘minres’} or function Krylov method to use to approximate the Jacobian. iter=None. **kw) Find a root of a function. should take and return an array-like object. Parameters F : function(x) -> f Function whose root to ﬁnd. . For example. iter. The default is scipy. Can be a string. Notes This algorithm implements the inverse Jacobian Quasi-Newton update H+ = H + (dx − Hdf )df † /(df † df ) corresponding to Broyden’s second method. maxiter=None. inner_M : LinearOperator or InverseJacobian Preconditioner for the inner Krylov iteration.newton_krylov(F. rdiff.linalg. Optimization and root ﬁnding (scipy. References [vR] Large-scale nonlinear solvers: newton_krylov(F. Release 0. optional Relative step size to use in numerical differentiation.. callback=None. **kw[. xin.lgmres.. Note that you can use also inverse Jacobians as (adaptive) preconditioners.0rc1 Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution.10. or a function implementing the same interface as the iterative solvers in scipy. using (extended) Anderson mixing. xin. M. This method is suitable for solving large-scale problems.SciPy Reference Guide. ‘cgs’. line_search=’armijo’. inner_maxiter=20. x_tol=None. .]) anderson(F. using Krylov approximation for inverse Jacobian.sparse. 4.optimize. x0 : array-like Initial guess for the solution rdiff : ﬂoat. inner_M=None. **kw[. scipy. verbose=False. Raises NoConvergence : When a solution was not found. iter. rdiff=None. f_tol=None. Find a root of a function. method : {‘lgmres’.linalg.optimize) 423 . tol_norm=None. xin. w0. alpha. method=’lgmres’. x_rtol=None.14.. f_rtol=None.]) Find a root of a function. ‘gmres’.

make as many as required to meet tolerances. inner_tol. Returns sol : array-like See Krylov solver. optional Size of the subspace kept across LGMRES nonlinear iterations.linalg. maxiter : int. x_rtol : ﬂoat. optional Number of iterations to make. NoConvergence is raised. If omitted. optimization is terminated as successful. not used.lgmres for details. optional Relative minimum step size.sparse. optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. See 424 Chapter 4. ‘armijo’ (default). not used. optional Relative tolerance for the residual. callback : function. . optional Absolute minimum step size.gmres for details. f) where x is the current solution and f the corresponding residual. optional Norm to use in convergence check. verbose : bool. If omitted. If the preconditioner has a method named ‘update’. It is called on every iteration as callback(x. optional Print status to stdout on every iteration. : Parameters to pass on to the “inner” scipy. iter : int.SciPy Reference Guide.0rc1 >>> jac = BroydenFirst() >>> kjac = KrylovJacobian(inner_M=jac. with x giving the current point.. Reference . outer_k : int. If the step size is smaller than this.inverse). Release 0. inner_maxiter. If omitted. f) after each nonlinear step. x_tol : ﬂoat. f_rtol : ﬂoat. Default is the maximum norm.linalg. If omitted. not used. tol_norm : function(vector) -> scalar. optional Optional callback function. scipy. optional Maximum number of iterations to make. If omitted (default).10. as determined from the Jacobian approximation. optional Absolute tolerance (in max-norm) for the residual. If more are needed to meet convergence.. f_tol : ﬂoat. Defaults to ‘armijo’. line_search : {None. default is 6e-6. and f the current function value. ‘wolfe’}. it will be called as update(x.sparse.

linalg.linalg.14. The default here is lgmres.SciPy Reference Guide. alpha=None.01. **kw) Find a root of a function.linalg module offers a selection of Krylov solvers to choose from. See Also: scipy. [Ey] Parameters F : function(x) -> f Function whose root to ﬁnd. Release 0. optional Initial guess for the Jacobian is (-1/alpha). which are conveniently approximated by numerical differentiation: Jv ≈ (f (x + ω ∗ v/|v|) − f (x))/ω Due to the use of iterative matrix inverses. optional Regularization parameter for numerical stability. The basic idea is to compute the inverse of the Jacobian with an iterative Krylov method. [BJM] scipy. line_search=’armijo’.anderson(F.sparse. w0=0. M : ﬂoat. x_rtol=None. should take and return an array-like object.0rc1 An array (of similar array type as x0) containing the ﬁnal solution. For a review on Newton-Krylov methods. These methods require only evaluating the Jacobian-vector products. References [KK]. maxiter=None.01. Defaults to 5. scipy. using (extended) Anderson mixing. and for the LGMRES sparse inverse method. x_tol=None.optimize) 425 . see [BJM].sparse. f_tol=None. As a result. only a MxM matrix inversions and MxN multiplications are required. callback=None. iter=None. tol_norm=None. M=5. Compared to unity.gmres. good values of the order of 0. which is a variant of restarted GMRES iteration that reuses some of the information obtained in the previous Newton steps to invert Jacobians in subsequent steps. Scipy’s scipy. iter : int. The Jacobian is formed by for a ‘best’ solution in the space spanned by last M vectors.sparse.optimize. these methods can deal with large nonlinear problems. optional Number of previous vectors to retain.10. see for example [KK]. optional 4. verbose=False. xin. Optimization and root ﬁnding (scipy. Raises NoConvergence : When a solution was not found. x0 : array-like Initial guess for the solution alpha : ﬂoat. f_rtol=None.lgmres Notes This function implements a Newton-Krylov solver. w0 : ﬂoat.
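Each Krylov iteration only needs Jacobian-vector products, which the finite-difference formula above supplies. A self-contained sketch (the helper name and step size omega are illustrative choices; the directional difference is rescaled by |v| so that it approximates J·v rather than J·(v/|v|)):

```python
import numpy as np

def jac_vec(F, x, v, omega=1e-6):
    """Approximate the Jacobian-vector product J(x) @ v with a directional
    finite difference of F along the unit vector v/|v|."""
    x = np.asarray(x, dtype=float)
    v = np.asarray(v, dtype=float)
    norm_v = np.linalg.norm(v)
    if norm_v == 0.0:
        return np.zeros_like(x)
    return norm_v * (F(x + omega * v / norm_v) - F(x)) / omega

# Check against the exact Jacobian of F(x) = [x0**2, x0*x1] at x = (1, 2):
# J = [[2, 0], [2, 1]], so J @ (1, 1) = (2, 3).
F = lambda x: np.array([x[0]**2, x[0]*x[1]])
approx = jac_vec(F, [1.0, 2.0], [1.0, 1.0])
```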

x_tol : ﬂoat. If omitted. using a scalar Jacobian approximation. not used. Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution. optional Optional callback function. iter. 426 Chapter 4. Raises NoConvergence : When a solution was not found. Find a root of a function. optional Absolute tolerance (in max-norm) for the residual. tol_norm : function(vector) -> scalar. xin. using diagonal Broyden Jacobian approximation. It is called on every iteration as callback(x. default is 6e-6. using a tuned diagonal Jacobian approximation. **kw[. If more are needed to meet convergence. x_rtol : ﬂoat. f_rtol : ﬂoat..]) linearmixing(F. optional Relative minimum step size. alpha. . If omitted. optional Absolute minimum step size.. xin. Defaults to ‘armijo’. optional Relative tolerance for the residual. Find a root of a function. .SciPy Reference Guide. line_search : {None. ‘armijo’ (default).. xin. alpha. callback : function. not used. as determined from the Jacobian approximation. If omitted.. optional Maximum number of iterations to make. verbose : bool. If omitted. iter. optimization is terminated as successful. f) where x is the current solution and f the corresponding residual. Reference .. ‘wolfe’}. not used. If the step size is smaller than this. make as many as required to meet tolerances. alpha.10. **kw[. optional Print status to stdout on every iteration. iter. Default is the maximum norm. optional Norm to use in convergence check. Release 0.0rc1 Number of iterations to make.. . maxiter : int.]) Find a root of a function. NoConvergence is raised.]) diagbroyden(F. References [Ey] Simple iterations: excitingmixing(F. **kw[. optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. f_tol : ﬂoat. If omitted (default).

optional Print status to stdout on every iteration. x_tol : ﬂoat.optimize. alphamax : ﬂoat. x0 : array-like Initial guess for the solution alpha : ﬂoat. optional Norm to use in convergence check.excitingmixing(F. f_tol=None. maxiter=None. optional Initial Jacobian approximation is (-1/alpha). ‘wolfe’}. but whether it will work may depend strongly on the problem. If more are needed to meet convergence. optional Maximum number of iterations to make. should take and return an array-like object. optional Absolute minimum step size. using a tuned diagonal Jacobian approximation. optional Relative tolerance for the residual.SciPy Reference Guide.optimize) 427 . x_tol=None. Optimization and root ﬁnding (scipy. optional Relative minimum step size. xin. Parameters F : function(x) -> f Function whose root to ﬁnd. If omitted. optional Absolute tolerance (in max-norm) for the residual. f_rtol : ﬂoat. If omitted. make as many as required to meet tolerances. default is 6e-6. Warning: This algorithm may be useful for speciﬁc problems. x_rtol : ﬂoat. iter=None. line_search=’armijo’. verbose : bool. optional 4. alphamax]. not used. If omitted. not used. **kw) Find a root of a function.14. tol_norm=None. callback=None. iter : int. verbose=False. Release 0.0rc1 scipy. optional The entries of the diagonal Jacobian are kept in the range [alpha. NoConvergence is raised.0. The Jacobian matrix is diagonal and is tuned on each iteration. If omitted. optional Number of iterations to make.10. f_rtol=None. If the step size is smaller than this. alpha=None. ‘armijo’ (default). tol_norm : function(vector) -> scalar. x_rtol=None. line_search : {None. optimization is terminated as successful. If omitted (default). as determined from the Jacobian approximation. f_tol : ﬂoat. alphamax=1. maxiter : int. Default is the maximum norm. not used.

Release 0. If the step size is smaller than this. callback : function. f_rtol : ﬂoat. f) where x is the current solution and f the corresponding residual. optional Number of iterations to make. alpha=None. x_tol=None. optional Maximum number of iterations to make. optimization is terminated as successful. Defaults to ‘armijo’. Reference . line_search=’armijo’. make as many as required to meet tolerances.0rc1 Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. x_tol : ﬂoat. optional Print status to stdout on every iteration. x0 : array-like Initial guess for the solution alpha : ﬂoat. f_tol=None. If omitted. using a scalar Jacobian approximation. callback=None. Raises NoConvergence : When a solution was not found. f_rtol=None. but whether it will work may depend strongly on the problem. not used. NoConvergence is raised. **kw) Find a root of a function. tol_norm=None. Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution. x_rtol=None. optional The Jacobian approximation is (-1/alpha).SciPy Reference Guide.linearmixing(F. If omitted (default). not used. xin. Warning: This algorithm may be useful for speciﬁc problems. default is 6e-6. iter=None. Parameters F : function(x) -> f Function whose root to ﬁnd. verbose=False. optional Relative tolerance for the residual. If omitted. It is called on every iteration as callback(x.10. as determined from the Jacobian approximation. If omitted. If more are needed to meet convergence. should take and return an array-like object. optional Optional callback function. optional Absolute minimum step size. scipy. iter : int. f_tol : ﬂoat. verbose : bool. maxiter=None. 428 Chapter 4. optional Absolute tolerance (in max-norm) for the residual.optimize. maxiter : int.
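With the scalar Jacobian approximation J ≈ -(1/alpha)·I, the quasi-Newton step of linearmixing reduces to the damped fixed-point iteration x ← x + alpha·F(x). A minimal sketch (function name and tolerances are our illustrative choices):

```python
import numpy as np

def linear_mixing_sketch(F, x0, alpha=0.5, maxiter=200, f_tol=1e-10):
    """Fixed-point iteration with scalar mixing: the quasi-Newton step
    dx = -J^{-1} F(x) with J = -(1/alpha) I is simply dx = alpha * F(x)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(maxiter):
        f = F(x)
        if np.linalg.norm(f, np.inf) < f_tol:
            break
        x = x + alpha * f               # mix a fraction of the residual in
    return x

# Solve cos(x) - x = 0 by mixing half of the residual in at each step.
root = linear_mixing_sketch(lambda x: np.cos(x) - x, np.array([0.0]))
```

Whether this converges depends entirely on the problem (the iteration map must be a contraction), which is why the docstring warns that the method is only useful for specific problems.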

optional Norm to use in convergence check. using diagonal Broyden Jacobian approximation. iter : int. ‘armijo’ (default). not used. xin. optional Optional callback function. The Jacobian approximation is derived from previous iterations. Defaults to ‘armijo’. f_tol=None. callback=None. f_rtol=None. x0 : array-like Initial guess for the solution alpha : ﬂoat. If omitted. alpha=None. by retaining only the diagonal of Broyden matrices. scipy. callback : function. ‘wolfe’}. Default is the maximum norm. maxiter : int.10. optional Number of iterations to make. If more are needed to meet convergence. tol_norm=None. Raises NoConvergence : When a solution was not found. line_search : {None. should take and return an array-like object. optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. verbose : bool. optional Print status to stdout on every iteration. x_rtol=None. Optimization and root ﬁnding (scipy.optimize. maxiter=None. x_tol=None. f_tol : ﬂoat. Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution. optional Initial guess for the Jacobian is (-1/alpha). NoConvergence is raised. verbose=False.14. optional 4.SciPy Reference Guide. but whether it will work may depend strongly on the problem.0rc1 x_rtol : ﬂoat. Warning: This algorithm may be useful for speciﬁc problems.diagbroyden(F. iter=None.optimize) 429 . **kw) Find a root of a function. optional Maximum number of iterations to make. tol_norm : function(vector) -> scalar. It is called on every iteration as callback(x. Parameters F : function(x) -> f Function whose root to ﬁnd. Release 0. line_search=’armijo’. f) where x is the current solution and f the corresponding residual. optional Relative minimum step size. make as many as required to meet tolerances. If omitted (default).
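diagbroyden retains only the diagonal of the Broyden update. The same idea can be illustrated with a simplified component-wise secant iteration; note this is a simplification for illustration, not SciPy's exact update rule:

```python
import numpy as np

def diag_secant_sketch(F, x0, maxiter=50, f_tol=1e-12):
    """Maintain only a diagonal Jacobian estimate d and refresh each entry
    with a secant formula d_i = df_i / dx_i (a simplification of diagbroyden)."""
    x = np.asarray(x0, dtype=float)
    f = F(x)
    d = np.ones_like(x)                 # diagonal Jacobian estimate
    for _ in range(maxiter):
        if np.linalg.norm(f, np.inf) < f_tol:
            break
        dx = -f / d                     # diagonal quasi-Newton step
        x_new = x + dx
        f_new = F(x_new)
        df = f_new - f
        mask = dx != 0
        d[mask] = df[mask] / dx[mask]   # per-component secant update
        x, f = x_new, f_new
    return x

# For a separable problem the diagonal approximation captures the full Jacobian:
root = diag_secant_sketch(lambda x: x**2 - np.array([4.0, 9.0]),
                          np.array([1.5, 2.5]))
```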

f) where x is the current solution and f the corresponding residual. pk. optional Which type of a line search to use to determine the step size in the direction given by the Jacobian approximation. *args) Find alpha that satisﬁes strong Wolfe conditions. not used. If omitted. optional Relative tolerance for the residual. Raises NoConvergence : When a solution was not found. not used. If omitted. Defaults to ‘armijo’.]) check_grad(func. optional Norm to use in convergence check. optional Optional callback function. xk. gfk=None. . not used.90000000000000002. as determined from the Jacobian approximation. ‘armijo’ (default). x_rtol : ﬂoat. old_fval=None. grad. line_search : {None. Returns sol : array-like An array (of similar array type as x0) containing the ﬁnal solution.0rc1 Absolute tolerance (in max-norm) for the residual.optimize. x_tol : ﬂoat.. callback : function. gfk.*args) Objective function gradient (can be None). If the step size is smaller than this. Release 0. scipy. xk : ndarray Starting point. f_rtol : ﬂoat. default is 6e-6. Default is the maximum norm. pk[. It is called on every iteration as callback(x. myfprime. Additional information on the nonlinear solvers 4. optimization is terminated as successful.14. c2=0. myfprime. Check the correctness of a gradient function by comparing it against a ﬁnite-difference approximation of the gradient. optional Absolute minimum step size. If omitted. optional Relative minimum step size.SciPy Reference Guide.4 Utility Functions line_search(f. amax=50) Find alpha that satisﬁes strong Wolfe conditions. x0. myfprime : callable f’(x.10. 430 Chapter 4. old_old_fval=None. ‘wolfe’}.line_search(f.0001. Parameters f : callable f(x. Reference . args=(). c1=0. If omitted.*args) Objective function.. tol_norm : function(vector) -> scalar. xk.

c1 : ﬂoat. optional Gradient value for x=xk (xk being the current parameter estimate).SciPy Reference Guide.optimize) 431 . old_fval : ﬂoat. Parameters func: callable func(x0. gfk : ndarray. Returns alpha0 : ﬂoat Alpha for which x_new = x0 + alpha * pk. *args) Check the correctness of a gradient function by comparing it against a ﬁnite-difference approximation of the gradient.check_grad(func.optimize. Release 0. Will be recomputed if omitted. *args) : Gradient of func x0: ndarray : Points to check grad against ﬁnite difference approximation of grad using func.]. fc : int Number of function evaluations made. For the zoom phase it uses an algorithm by [.10. gc : int Number of gradient evaluations made. 1999. scipy.0rc1 pk : ndarray Search direction. args: optional : Extra arguments passed to func and grad 4. x0.. optional Parameter for curvature condition rule. See Wright and Nocedal. optional Function value for x=xk. ‘Numerical Optimization’. Notes Uses the line search algorithm to enforce strong Wolfe conditions. optional Parameter for Armijo condition rule. pg. c2 : ﬂoat.14. grad. Optimization and root ﬁnding (scipy.*args) : Function whose derivative is to be checked grad: callable grad(x0. Will be recomputed if omitted. old_old_fval : ﬂoat.. optional Additional arguments passed to objective function. optional Function value for the point preceding x=xk args : tuple. 59-60.
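check_grad's error measure is easy to reproduce by hand. The sketch below compares an analytic gradient with a finite-difference one and returns the 2-norm of the difference; it uses central differences for accuracy, whereas SciPy's check_grad uses a forward-difference scheme, and the helper name is ours:

```python
import numpy as np

def grad_error(func, grad, x0, eps=1e-6):
    """Return the 2-norm of the difference between grad(x0) and a
    central finite-difference approximation of the gradient of func."""
    x0 = np.asarray(x0, dtype=float)
    fd = np.zeros_like(x0)
    for i in range(x0.size):
        step = np.zeros_like(x0)
        step[i] = eps
        fd[i] = (func(x0 + step) - func(x0 - step)) / (2.0 * eps)
    return np.sqrt(np.sum((grad(x0) - fd) ** 2))

# A correct gradient gives a tiny error; a wrong one does not.
err_good = grad_error(lambda x: np.sum(x**2), lambda x: 2*x, [1.0, 2.0, 3.0])
err_bad  = grad_error(lambda x: np.sum(x**2), lambda x: 3*x, [1.0, 2.0, 3.0])
```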

Returns err : float
    The square root of the sum of squares (i.e., the 2-norm) of the difference between grad(x0, *args) and the finite-difference approximation of grad using func at the points x0.

4.15 Nonlinear solvers

This is a collection of general-purpose nonlinear multidimensional solvers. These solvers find x for which F(x) = 0. Both x and F can be multidimensional.

4.15.1 Routines

Large-scale nonlinear solvers:

newton_krylov(F, xin[, iter, rdiff, method, ...])   Find a root of a function, using Krylov approximation for inverse Jacobian.
anderson(F, xin[, iter, alpha, w0, M, ...])         Find a root of a function, using (extended) Anderson mixing.

General nonlinear solvers:

broyden1(F, xin[, iter, alpha, ...])        Find a root of a function, using Broyden's first Jacobian approximation.
broyden2(F, xin[, iter, alpha, ...])        Find a root of a function, using Broyden's second Jacobian approximation.

Simple iterations:

excitingmixing(F, xin[, iter, alpha, ...])  Find a root of a function, using a tuned diagonal Jacobian approximation.
linearmixing(F, xin[, iter, alpha, ...])    Find a root of a function, using a scalar Jacobian approximation.
diagbroyden(F, xin[, iter, alpha, ...])     Find a root of a function, using diagonal Broyden Jacobian approximation.

4.15.2 Examples

Small problem

>>> def F(x):
...    return np.cos(x) + x[::-1] - [1, 2, 3, 4]
>>> import scipy.optimize
>>> x = scipy.optimize.broyden1(F, [1,1,1,1], f_tol=1e-14)
>>> x
array([ 4.04674914,  3.91158389,  2.71791677,  1.61756251])
>>> np.cos(x) + x[::-1]
array([ 1.,  2.,  3.,  4.])

Large problem

Suppose that we needed to solve the following integrodifferential equation on the square [0, 1] × [0, 1]:

    ∂²P/∂x² + ∂²P/∂y² = 10 · (∫₀¹ ∫₀¹ cosh(P) dx dy)²

with P(x, 1) = 1 and P = 0 elsewhere on the boundary of the square.

The solution can be found using the newton_krylov solver:

import numpy as np
from scipy.optimize import newton_krylov
from numpy import cosh, zeros_like, mgrid, zeros

# parameters
nx, ny = 75, 75
hx, hy = 1./(nx-1), 1./(ny-1)

P_left, P_right = 0, 0
P_top, P_bottom = 1, 0

def residual(P):
    d2x = zeros_like(P)
    d2y = zeros_like(P)

    d2x[1:-1] = (P[2:]   - 2*P[1:-1] + P[:-2]) / hx/hx
    d2x[0]    = (P[1]    - 2*P[0]    + P_left) / hx/hx
    d2x[-1]   = (P_right - 2*P[-1]   + P[-2])  / hx/hx

    d2y[:,1:-1] = (P[:,2:] - 2*P[:,1:-1] + P[:,:-2]) / hy/hy
    d2y[:,0]    = (P[:,1]  - 2*P[:,0]    + P_bottom) / hy/hy
    d2y[:,-1]   = (P_top   - 2*P[:,-1]   + P[:,-2])  / hy/hy

    return d2x + d2y - 10*cosh(P).mean()**2

# solve
guess = zeros((nx, ny), float)
sol = newton_krylov(residual, guess, method='lgmres', verbose=1)
print 'Residual', abs(residual(sol)).max()

# visualize
import matplotlib.pyplot as plt
x, y = mgrid[0:1:(nx*1j), 0:1:(ny*1j)]
plt.pcolor(x, y, sol)
plt.colorbar()
plt.show()

[Figure: pcolor plot of the solution P(x, y) of the preceding example on the unit square, with colorbar.]

4.16 Signal processing (scipy.signal)

4.16.1 Convolution

convolve(in1, in2[, mode])                          Convolve two N-dimensional arrays.
correlate(in1, in2[, mode])                         Cross-correlate two N-dimensional arrays.
fftconvolve(in1, in2[, mode])                       Convolve two N-dimensional arrays using FFT.
convolve2d(in1, in2[, mode, boundary, fillvalue])   Convolve two 2-dimensional arrays.
correlate2d(in1, in2[, mode, boundary, fillvalue])  Cross-correlate two 2-dimensional arrays.
sepfir2d(input, hrow, hcol) -> output

scipy.signal.convolve(in1, in2, mode='full')
    Convolve two N-dimensional arrays.
    Convolve in1 and in2 with output size determined by mode.

    Parameters
        in1 : array
            first input.
        in2 : array
            second input. Should have the same number of dimensions as in1.
        mode : str {'valid', 'same', 'full'}
            a string indicating the size of the output:
            valid : the output consists only of those elements that do not rely on the zero-padding.
            same : the output is the same size as in1, centered with respect to the 'full' output.
            full : the output is the full discrete linear cross-correlation of the inputs. (Default)

.signal.10.. (Default) Returns out: array : an N-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2. See convolve..convolve2d(in1.fftconvolve(in1.k.. Parameters in1.. i_l + k. Parameters in1: array : ﬁrst input. ﬁllvalue=0) Convolve two 2-dimensional arrays. .] = sum[.. in2: array : second input. scipy.] * conj(y[. mode=’full’) Convolve two N-dimensional arrays using FFT.‘same’: the output is the same size as in1 centered with respect to the ‘full’ output. mode=’full’. i_l..0rc1 Returns out: array : an N-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2..signal. Notes The correlation z of two arrays x and y of rank d is deﬁned as z[. in2.. • ‘full’: the output is the full discrete linear cross-correlation of the inputs. scipy.. mode: str.]) scipy.signal) 435 ...SciPy Reference Guide. boundary=’ﬁll’. mode: str {‘valid’. . in2. Convolve in1 and in2 with output size determined by mode and boundary conditions determined by boundary and ﬁllvalue.] x[. i_l....correlate(in1. ‘same’. mode=’full’) Cross-correlate two N-dimensional arrays. Release 0. Signal processing (scipy... Cross-correlate in1 and in2 with the output size determined by the mode argument.signal....16. optional : A string indicating the size of the output: 4.. in2 : ndarray Two-dimensional input arrays to be convolved. in2. ‘full’} : a string indicating the size of the output: • ‘valid’: the output consists only of those elements that do not rely on the zero-padding. Should have the same number of dimensions as in1...

(default) • ‘wrap’ : circular boundary conditions.correlate2d(in1. 436 Chapter 4. optional A ﬂag indicating how to handle boundaries: • ‘ﬁll’ : pad input arrays with ﬁllvalue.0rc1 valid [the output consists only of those elements that do not] rely on the zero-padding. ﬁllvalue : scalar. ﬁllvalue : scalar. mode=’full’. (Default) boundary : str.10. • ‘symm’ : symmetrical boundary conditions. optional A ﬂag indicating how to handle boundaries: • ‘ﬁll’ : pad input arrays with ﬁllvalue. Reference . optional : A string indicating the size of the output: valid [the output consists only of those elements that do not] rely on the zero-padding. same [the output is the same size as in1 centered] with respect to the ‘full’ output. mode: str. in2.SciPy Reference Guide. boundary=’ﬁll’. Cross correlate in1 and in2 with output size determined by mode and boundary conditions determined by boundary and ﬁllvalue. (Default) boundary : str. scipy. • ‘symm’ : symmetrical boundary conditions. same [the output is the same size as in1 centered] with respect to the ‘full’ output. (default) • ‘wrap’ : circular boundary conditions. Returns out : ndarray A 2-dimensional array containing a subset of the discrete linear convolution of in1 with in2. Parameters in1. Default is 0. in2 : ndarray Two-dimensional input arrays to be convolved. full [the output is the full discrete linear cross-correlation] of the inputs. full [the output is the full discrete linear cross-correlation] of the inputs. optional Value to ﬁll pad input arrays with. Release 0. optional Value to ﬁll pad input arrays with.signal. Default is 0. ﬁllvalue=0) Cross-correlate two 2-dimensional arrays.
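In one dimension, 'full'-mode convolution is the shift-and-add sum out[n] = Σ_k in1[k]·in2[n-k], of length len(in1)+len(in2)-1. A direct sketch (helper name is ours), checked against numpy.convolve, which computes the same quantity for 1-D inputs:

```python
import numpy as np

def conv_full(in1, in2):
    """Direct 'full'-mode linear convolution of two 1-D sequences."""
    in1 = np.asarray(in1, dtype=float)
    in2 = np.asarray(in2, dtype=float)
    out = np.zeros(len(in1) + len(in2) - 1)
    for k, v in enumerate(in1):
        out[k:k + len(in2)] += v * in2   # shift in2 by k and accumulate
    return out

result = conv_full([1, 2, 3], [0, 1, 0.5])
```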

Release 0.0 .gauss_spline(x. precision}) -> qk Smoothing spline (cubic) ﬁltering of a rank-2 array. hcol) -> output Description: Convolve the rank-2 input array with the separable ﬁlter deﬁned by the rank-1 arrays hrow. n) Gaussian approximation to B-spline basis function of order n. Signal processing (scipy. Find the quadratic spline coefﬁcients for a 1-D signal assuming mirror-symmetric boundary conditions. lamb=0. and hcol.0) Compute cubic spline coefﬁcients for rank-1 array. To obtain the signal back from the spline representation mirror-symmetric-convolve these coefﬁcients with a length 3 FIR window [1.0.signal. n) gauss_spline(x. Gaussian approximation to B-spline basis function of order n. 1.cspline1d(signal.signal. Parameters signal : ndarray A rank-1 array representing samples of a signal.qspline1d(signal.signal) 437 . Notes Uses numpy.signal.sepfir2d() sepﬁr2d(input. lmbda]) B-spline basis function of order n.0.0. optional Smoothing coefﬁcient. scipy. 6. lamb : ﬂoat. default is 0.signal.0]/ 8. n) B-spline basis function of order n.bspline(x.10. cspline2d(input {. Returns c : ndarray Cubic spline coefﬁcients. To obtain the signal back from the spline representation mirror-symmetric-convolve these coefﬁcients with a length 3 FIR window [1.0. scipy. lambda.2 B-splines bspline(x. hrow. This function can be used to ﬁnd an image given its B-spline representation. 4. Compute quadratic spline coefﬁcients for rank-1 array. precision}) -> ck qspline2d(input {.16. lambda. 4. scipy. 4. lamb]) cspline2d qspline2d spline_filter(Iin[. scipy. Compute cubic spline coefﬁcients for rank-1 array. 1.16.0) Compute quadratic spline coefﬁcients for rank-1 array. lamb]) qspline1d(signal[.0rc1 Returns out : ndarray A 2-dimensional array containing a subset of the discrete linear cross-correlation of in1 with in2. 
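gauss_spline rests on the fact that an order-n B-spline is the (n+1)-fold convolution of a unit box and is therefore close to a Gaussian with variance (n+1)/12. A sketch of that approximation; the variance constant is the standard central-limit value assumed by this illustration, and the function name is ours:

```python
import numpy as np

def gauss_spline_sketch(x, n):
    """Gaussian approximation to the order-n B-spline basis function.
    An order-n B-spline is the (n+1)-fold convolution of a unit box,
    so its variance is (n+1) * (1/12)."""
    sigma2 = (n + 1) / 12.0
    x = np.asarray(x, dtype=float)
    return np.exp(-x**2 / (2.0 * sigma2)) / np.sqrt(2.0 * np.pi * sigma2)

# For the cubic case the Gaussian peak ~0.691 is close to the exact
# B-spline peak B_3(0) = 2/3.
peak = gauss_spline_sketch(0.0, 3)
```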
Find the cubic spline coefﬁcients for a 1-D signal assuming mirror-symmetric boundary conditions.piecewise and automatic function-generator.0 .0]/ 6.SciPy Reference Guide.signal.0. scipy. lamb=0. Mirror symmetric boundary conditions are assumed. n) cspline1d(signal[.

precision}) -> qk Description: Return the second-order B-spline coefﬁcients over a regularly spaced input grid for the twodimensional input image. The precision argument allows specifying the precision used when computing the inﬁnite sum needed to apply mirror.cspline2d() cspline2d(input {. lambda. The lambda argument speciﬁes the amount of smoothing. Filter an input data set.signal. scipy. Returns c : ndarray Cubic spline coefﬁcients. The lambda argument speciﬁes the amount of smoothing. Reference . 438 Chapter 4.10.SciPy Reference Guide. optional Smoothing coefﬁcient (must be zero for now). lambda. The precision argument allows specifying the precision used when computing the inﬁnite sum needed to apply mirror.spline_filter(Iin. Iin.0) Smoothing spline (cubic) ﬁltering of a rank-2 array.symmetric boundary conditions.symmetric boundary conditions. scipy.qspline2d() qspline2d(input {.signal. lamb : ﬂoat. Release 0. lmbda=5. scipy. using a (cubic) smoothing spline of fall-off lmbda.signal.0rc1 Parameters signal : ndarray A rank-1 array representing samples of a signal. precision}) -> ck Description: Return the third-order B-spline coefﬁcients over a regularly spacedi input grid for the twodimensional input image.

The list is sorted. kernel_size]) wiener(im[. etc. n. domain. Downsample the signal x by an integer factor q. a. q[. precision}) -> output Filter data along one-dimension with an IIR or FIR ﬁlter. rank) medfilt(volume[. Examples >>> import scipy. 1 is the next smallest element. ftype. x[. Each dimension should have an odd number of elements. Release 0. t. precision}) -> output symiirorder2(input. rank : int A non-negative integer which selects the element from the sorted list (0 corresponds to the smallest element. Median ﬁlter a 2-dimensional array.16. domain. domain : array_like A mask array with the same number of dimensions as in. x]) lfilter_zi(b.order_filter(a.signal) 439 . c0.signal. mysize. Perform an order ﬁlter on the array in. 5) >>> domain = np. zi]) lfiltic(b. kernel_size]) medfilt2d(input[. type. padtype. Perform a median ﬁlter on an N-dimensional array.3 Filtering order_filter(a. Deconvolves divisor out of signal.). r. axis. y[. divisor) hilbert(x[.signal >>> x = np. The domain argument acts as a mask centered over each pixel. Signal processing (scipy. and the output for that pixel is the element corresponding to rank in the sorted list. a.reshape(5.0rc1 4. noise]) symiirorder1 symiirorder2 lfilter(b. scipy.16. fftbins]) decimate(x. using an order n ﬁlter. window]) Perform an order ﬁlter on an N-dimensional array.10. A forward-backward ﬁlter. Nx[. axis]) get_window(window. Return a window of length Nx and type window. x[. omega {. z1 {. Returns out : ndarray The results of the order ﬁlter in an array with the same shape as in. The non-zero elements of domain are used to select elements surrounding each input pixel which are placed in a list. padlen]) deconvolve(signal. axis]) detrend(data[. Resample x to num samples using Fourier method along the given axis. symiirorder1(input. Compute the analytic signal. axis. bp]) resample(x. Remove linear trend along axis from data. rank) Perform an order ﬁlter on an N-dimensional array.SciPy Reference Guide. 
axis. axis. Parameters a : ndarray The N-dimensional input array. num[.arange(25). N. Perform a Wiener ﬁlter on an N-dimensional array. Construct initial conditions for lﬁlter Compute an initial state zi for the lﬁlter function that corresponds to the steady state of the step response. a. a) filtfilt(b.identity(3) 4.

. 14. [10. 11. Parameters input : array_like A 2-dimensional input array. [ 5. 0.medfilt(volume. kernel_size=None) Perform a median ﬁlter on an N-dimensional array. 440 Chapter 4. 0. 19. 16.. 0..order_filter(x. 22..signal. 0. Elements of kernel_size should be odd.]. [ 0.wiener(im. 23. [ 0. Default is a kernel of size (3.SciPy Reference Guide. mysize=None. [ 20. 9.]. scipy..signal. 7. 2. If kernel_size is a scalar. 14].]..... 3).]]) scipy... 0. Reference . Returns out : ndarray An array the same size as input containing the median ﬁltered result. 14. 6. 23. then this scalar is used as the size in each dimension..0rc1 >>> x array([[ 0. 2... 0) array([[ 0... 0. 21.. 0..].. [ 16. 24]]) >>> sp. 23. 4. Apply a median ﬁlter to the input array using a local window-size given by kernel_size. 7. 0. domain. 2) array([[ 6. 13. 6. [ 11..]..medfilt2d(input. 4]... 13. Apply a median ﬁlter to the input array using a local window-size given by kernel_size (must be odd). 0. Returns out : ndarray An array the same size as input containing the median ﬁltered result. 17. 1.signal... 12.]..]. Elements of kernel_size should be odd. 12. giving the size of the median ﬁlter window in each dimension.]]) >>> sp. kernel_size : array_like. [15.. optional A scalar or a list of length 2. kernel_size : array_like.signal. 8. kernel_size=3) Median ﬁlter a 2-dimensional array. 11. If kernel_size is a scalar. [ 0.. Release 0.signal. 12.10. 18... 24. 1. 18. 9. scipy.order_filter(x. then this scalar is used as the size in each dimension. domain. 17. optional A scalar or an N-length list giving the size of the median ﬁlter window in each dimension. 22. 19. Default size is 3 for each dimension. noise=None) Perform a Wiener ﬁlter on an N-dimensional array.. 5... 8. [ 21. [ 0. 9]. 10..]... 0.. 7. 3... [20.. 0. 19]. 21. 22. Parameters volume : array_like An N-dimensional input array. 24. 0.

signal.SciPy Reference Guide. then noise is estimated as the average of the local variance of the input. This implements the following transfer function: cs^2 H(z) = ————————————— (1 . z1 {.10. optional The noise-power to use. Returns out : ndarray Wiener ﬁltered result with the same shape as im. Inputs: input – the input signal. then this scalar is used as the size in each dimension.z1 z) The resulting signal will have mirror symmetric boundary conditions as well. The second section uses a reversed sequence. omega {. If None. optional A scalar or an N-length list giving the size of the Wiener ﬁlter window in each dimension. c0 H(z) = ——————— (1-z1/z) (1 .0rc1 Apply a Wiener ﬁlter to the N-dimensional array im.signal) 441 . If mysize is a scalar. Elements of mysize should be odd. Release 0. z1 – parameters in the transfer function.a3 z^2 ) 4. Output: output – ﬁltered signal.a2/z . Parameters im : ndarray An N-dimensional array. r.a2 z . c0. Signal processing (scipy. mysize : int or arraylike. scipy. precision – speciﬁes the precision for calculating initial conditions of the recursive ﬁlter based on mirror-symmetric input. This implements a system with the following transfer function and mirror-symmetric boundary conditions.a3/z^2) (1 .16. The second section uses a reversed sequence.symiirorder2() symiirorder2(input.symiirorder1() symiirorder1(input. precision}) -> output Description: Implement a smoothing IIR ﬁlter with mirror-symmetric boundary conditions using a cascade of ﬁrst-order sections. scipy. precision}) -> output Description: Implement a smoothing IIR ﬁlter with mirror-symmetric boundary conditions using a cascade of second-order sections.signal. noise : ﬂoat. c0.

then both a and b are normalized by a[0]. using a digital ﬁlter.r^2 cs = 1 . + b[nb]*x[n-nb] . otherwise.signal. If zi=None or is not given then initial rest is assumed.. Output: output – ﬁltered signal. Release 0. scipy. Reference . . The ﬁlter is applied to each subarray along this axis (Default = -1) zi : array_like (optional) Initial conditions for the ﬁlter delays.2 r cos omega + r^2 Inputs: input – the input signal. x. Parameters b : array_like The numerator coefﬁcient vector in a 1-D sequence. x : array_like An N-dimensional input array. this is not returned. zf holds the ﬁnal ﬁlter delay values.. The ﬁlter is a direct form II transposed implementation of the standard difference equation (see Notes). SEE signal. Notes The ﬁlter function is implemented as a direct II transposed structure. zi=None) Filter data along one-dimension with an IIR or FIR ﬁlter.0rc1 where a2 = (2 r cos omega) a3 = . x.a[na]*y[n-na] using the following difference equations: 442 Chapter 4. zf : array (optional) If zi is None.a[1]*y[n-1] . It is a vector (or array of vectors for an Ndimensional input) of length max(len(a).. This means that the ﬁlter implements a[0]*y[n] = b[0]*x[n] + b[1]*x[n-1] + . axis=-1.. Returns y : array The output of the digital ﬁlter.lfilter(b.SciPy Reference Guide. r.lﬁltic for more information. Filter a data sequence. a : array_like The denominator coefﬁcient vector in a 1-D sequence. If a[0] is not 1. a. This works for many fundamental data types (including Object type).10. axis : int The axis of the input data array along which to apply the linear ﬁlter.len(b))-1. omega – parameters in the transfer function. precision – speciﬁes the precision for calculating initial conditions of the recursive ﬁlter based on mirror-symmetric input..

.16. + a[na] z scipy. a. A has shape (m.a) and initial conditions on the output y and the input x. The output vector zi contains: zi = {z_0[-1]. Parameters b. Then... y.SciPy Reference Guide..10. B. 1) (assuming x(n) is a scalar).signal. + b[nb] z Y(z) = ---------------------------------.m] = b[1]*x[m] + z[1.N). the initial conditions are given in the vectors x and y as: x = {x[-1]. z_K-1[-1]} where K=max(M. Returns zi : 1-D ndarray The initial state for the ﬁlter. then zeros are added to achieve the proper length.m-1] ... m). . lﬁlter_zi solves: 4.m] = b[n-1]*x[m] ... a : array_like (1-D) The IIR ﬁlter coefﬁcients. The rational transfer function describing this ﬁlter in the z-transform domain is: -1 -nb b[0] + b[1]z + . Notes A linear ﬁlter with order m has a state space representation (A. return the inital conditions on the state vector zi which is used by lﬁlter to generate the output given the input.a[1]*y[m] .len(b)) is the model order. If either vector is too short. C.a[n-1]*y[m] where m is the output sample number and n=max(len(a)..lfilter for more information... 1).lfilter_zi(b.m-1] z[0.. a) Compute an initial state zi for the lﬁlter function that corresponds to the steady state of the step response... Signal processing (scipy. its inital conditions are assumed zero. m) and D has shape (1. Release 0.x[-2].y[-2].m] = b[n-2]*x[m] + z[n-2. A typical use of this function is to set the initial state so that the output of the ﬁlter starts at the same value as the ﬁrst element of the signal to be ﬁltered.X(z) -1 -na a[0] + a[1]z + .. for which the output y of the ﬁlter can be expressed as: z(n+1) = A*z(n) + B*x(n) y(n) = C*z(n) + D*x(n) where z(n) is a vector of length m.a[n-2]*y[m] z[n-2. z[n-3. x=None) Construct initial conditions for lﬁlter Given a linear ﬁlter (b. See scipy. D). z_1[-1]. C has shape (1.0rc1 y[m] = b[0]*x[m] + z[0.lfiltic(b.x[-M]} y = {y[-1]..m-1] .signal.. scipy. If M=len(b)-1 and N=len(a)-1.signal) 443 .signal. 
B has shape (m.y[-N]} If x is not given.


zi = A*zi + B

In other words, it ﬁnds the initial condition for which the response to an input of all ones is a constant. Given the ﬁlter coefﬁcients a and b, the state space matrices for the transposed direct form II implementation of the linear ﬁlter, which is the implementation used by scipy.signal.lﬁlter, are:

A = scipy.linalg.companion(a).T
B = b[1:] - a[1:]*b[0]

assuming a[0] is 1.0; if a[0] is not 1, a and b are ﬁrst divided by a[0]. Examples The following code creates a lowpass Butterworth ﬁlter. Then it applies that ﬁlter to an array whose values are all 1.0; the output is also all 1.0, as expected for a lowpass ﬁlter. If the zi argument of lﬁlter had not been given, the output would have shown the transient signal.

>>> from numpy import array, ones
>>> from scipy.signal import lfilter, lfilter_zi, butter
>>> b, a = butter(5, 0.25)
>>> zi = lfilter_zi(b, a)
>>> y, zo = lfilter(b, a, ones(10), zi=zi)
>>> y
array([ 1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.,  1.])
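The state-space relations above can also be checked by hand: since zi = A*zi + B is linear in zi, it amounts to one linear solve. A minimal sketch (using a Butterworth filter purely as a test case):

```python
import numpy as np
from scipy.linalg import companion, solve
from scipy.signal import butter, lfilter_zi

b, a = butter(3, 0.25)        # butter already returns a with a[0] == 1.0

# State-space matrices of the transposed direct form II, as given above.
A = companion(a).T
B = b[1:] - a[1:] * b[0]

# zi = A*zi + B  is equivalent to  (I - A) zi = B
zi_manual = solve(np.eye(len(a) - 1) - A, B)

# Matches the value computed by lfilter_zi.
print(np.allclose(zi_manual, lfilter_zi(b, a)))
```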

Another example:

>>> x = array([0.5, 0.5, 0.5, 0.0, 0.0, 0.0, 0.0])
>>> y, zf = lfilter(b, a, x, zi=zi*x[0])
>>> y
array([ 0.5       ,  0.5       ,  0.5       ,  0.49836039,  0.48610528,
        0.44399389,  0.35505241])

Note that the zi argument to lfilter was computed using lfilter_zi and scaled by x[0]. Then the output y has no transient until the input drops from 0.5 to 0.0.

scipy.signal.get_window(window, Nx, fftbins=True)
Return a window of length Nx and type window.

Parameters
window : string, float, or tuple
    The type of window to create. See below for more details.
Nx : int
    The number of samples in the window.
fftbins : bool, optional
    If True, create a "periodic" window ready to use with ifftshift and be multiplied by the result of an fft (see also fftfreq).

Notes
Window types: boxcar, triang, blackman, hamming, hanning, bartlett, parzen, bohman, blackmanharris, nuttall, barthann, kaiser (needs beta), gaussian (needs std), general_gaussian (needs power, width), slepian (needs width), chebwin (needs attenuation)


If the window requires no parameters, then window can be a string. If the window requires parameters, then window must be a tuple with the ﬁrst argument the string name of the window, and the next arguments the needed parameters. If window is a ﬂoating point number, it is interpreted as the beta parameter of the kaiser window. Each of the window types listed above is also the name of a function that can be called directly to create a window of that type. Examples

>>> get_window('triang', 7)
array([ 0.25,  0.5 ,  0.75,  1.  ,  0.75,  0.5 ,  0.25])
>>> get_window(('kaiser', 4.0), 9)
array([ 0.08848053,  0.32578323,  0.63343178,  0.89640418,  1.        ,
        0.89640418,  0.63343178,  0.32578323,  0.08848053])
>>> get_window(4.0, 9)
array([ 0.08848053,  0.32578323,  0.63343178,  0.89640418,  1.        ,
        0.89640418,  0.63343178,  0.32578323,  0.08848053])

4.16.4 Filter design

bilinear(b, a[, fs])
    Return a digital filter from an analog one using a bilinear transform.
firwin(numtaps, cutoff[, width, window, ...])
    FIR filter design using the window method.
firwin2(numtaps, freq, gain[, nfreqs, ...])
    FIR filter design using the window method.
freqs(b, a[, worN, plot])
    Compute frequency response of analog filter.
freqz(b[, a, worN, whole, plot])
    Compute the frequency response of a digital filter.
iirdesign(wp, ws, gpass, gstop[, analog, ...])
    Complete IIR digital and analog filter design.
iirfilter(N, Wn[, rp, rs, btype, analog, ...])
    IIR digital and analog filter design given order and critical points.
kaiser_atten(numtaps, width)
    Compute the attenuation of a Kaiser FIR filter.
kaiser_beta(a)
    Compute the Kaiser parameter beta, given the attenuation a.
kaiserord(ripple, width)
    Design a Kaiser window to limit ripple and width of transition region.
remez(numtaps, bands, desired[, weight, Hz, ...])
    Calculate the minimax optimal filter using the Remez exchange algorithm.
unique_roots(p[, tol, rtype])
    Determine unique roots and their multiplicities from a list of roots.
residue(b, a[, tol, rtype])
    Compute partial-fraction expansion of b(s) / a(s).
residuez(b, a[, tol, rtype])
    Compute partial-fraction expansion of b(z) / a(z).
invres(r, p, k[, tol, rtype])
    Compute b(s) and a(s) from partial fraction expansion: r, p, k.

scipy.signal.bilinear(b, a, fs=1.0)
Return a digital filter from an analog one using a bilinear transform. The bilinear transform substitutes 2*fs*(z-1) / (z+1) for s.
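As a short sketch (the system and sample rate here are illustrative), the substitution can be traced by hand for the analog RC lowpass H(s) = 1/(s + 1):

```python
import numpy as np
from scipy.signal import bilinear

# Map H(s) = 1 / (s + 1) to a digital filter at fs = 2.  The substitution
# s -> 2*fs*(z - 1)/(z + 1) = 4*(z - 1)/(z + 1) gives
# H(z) = (z + 1) / (5*z - 3), i.e. b = [0.2, 0.2], a = [1, -0.6] after
# normalizing by the leading denominator coefficient.
bz, az = bilinear([1.0], [1.0, 1.0], fs=2.0)

print(bz, az)

# The digital filter keeps unit gain at DC (z = 1).
print(np.sum(bz) / np.sum(az))   # ~ 1.0
```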


4.16.5 Matlab-style IIR filter design

butter(N, Wn[, btype, analog, output])
    Butterworth digital and analog filter design.
buttord(wp, ws, gpass, gstop[, analog])
    Butterworth filter order selection.
cheby1(N, rp, Wn[, btype, analog, output])
    Chebyshev type I digital and analog filter design.
cheb1ord(wp, ws, gpass, gstop[, analog])
    Chebyshev type I filter order selection.
cheby2(N, rs, Wn[, btype, analog, output])
    Chebyshev type II digital and analog filter design.
cheb2ord(wp, ws, gpass, gstop[, analog])
    Chebyshev type II filter order selection.
ellip(N, rp, rs, Wn[, btype, analog, output])
    Elliptic (Cauer) digital and analog filter design.
ellipord(wp, ws, gpass, gstop[, analog])
    Elliptic (Cauer) filter order selection.
bessel(N, Wn[, btype, analog, output])
    Bessel digital and analog filter design.

scipy.signal.butter(N, Wn, btype=’low’, analog=0, output=’ba’) Butterworth digital and analog ﬁlter design. Design an Nth order lowpass digital or analog Butterworth ﬁlter and return the ﬁlter coefﬁcients in (B,A) or (Z,P,K) form. See Also: buttord. scipy.signal.buttord(wp, ws, gpass, gstop, analog=0) Butterworth ﬁlter order selection. Return the order of the lowest order digital Butterworth ﬁlter that loses no more than gpass dB in the passband and has at least gstop dB attenuation in the stopband. Parameters wp, ws : ﬂoat Passband and stopband edge frequencies, normalized from 0 to 1 (1 corresponds to pi radians / sample). For example: • Lowpass: wp = 0.2, ws = 0.3 • Highpass: wp = 0.3, ws = 0.2 • Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6] • Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5] gpass : ﬂoat The maximum loss in the passband (dB). gstop : ﬂoat The minimum attenuation in the stopband (dB). analog : int, optional Non-zero to design an analog ﬁlter (in this case wp and ws are in radians / second). Returns ord : int The lowest order for a Butterworth ﬁlter which meets specs. wn : ndarray or ﬂoat The Butterworth natural frequency (i.e. the “3dB frequency”). Should be used with butter to give ﬁlter results.
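A typical buttord-then-butter workflow can be sketched as follows (the spec values are illustrative only; the design is checked at the band edges with freqz):

```python
import numpy as np
from scipy.signal import buttord, butter, freqz

# Lowpass spec: passband edge 0.2 and stopband edge 0.3 in units of the
# Nyquist frequency, at most 3 dB passband loss and at least 40 dB
# stopband attenuation.
N, Wn = buttord(wp=0.2, ws=0.3, gpass=3, gstop=40)
b, a = butter(N, Wn)

# Evaluate the response at the two band edges (frequencies in rad/sample).
w, h = freqz(b, a, worN=np.array([0.2 * np.pi, 0.3 * np.pi]))
gain_db = 20 * np.log10(np.abs(h))
print(N, gain_db)
```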


scipy.signal.cheby1(N, rp, Wn, btype='low', analog=0, output='ba')
Chebyshev type I digital and analog filter design. Design an Nth order lowpass digital or analog Chebyshev type I filter and return the filter coefficients in (B,A) or (Z,P,K) form.

See Also: cheb1ord.

scipy.signal.cheb1ord(wp, ws, gpass, gstop, analog=0)
Chebyshev type I filter order selection. Return the order of the lowest order digital Chebyshev Type I filter that loses no more than gpass dB in the passband and has at least gstop dB attenuation in the stopband.

Parameters
wp, ws : float
    Passband and stopband edge frequencies, normalized from 0 to 1 (1 corresponds to pi radians / sample). For example:
    • Lowpass: wp = 0.2, ws = 0.3
    • Highpass: wp = 0.3, ws = 0.2
    • Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]
    • Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]
gpass : float
    The maximum loss in the passband (dB).
gstop : float
    The minimum attenuation in the stopband (dB).
analog : int, optional
    Non-zero to design an analog filter (in this case wp and ws are in radians / second).

Returns
ord : int
    The lowest order for a Chebyshev type I filter that meets specs.
wn : ndarray or float
    The Chebyshev natural frequency (the "3dB frequency") for use with cheby1 to give filter results.

scipy.signal.cheby2(N, rs, Wn, btype='low', analog=0, output='ba')
Chebyshev type II digital and analog filter design. Design an Nth order lowpass digital or analog Chebyshev type II filter and return the filter coefficients in (B,A) or (Z,P,K) form.

See Also: cheb2ord.

scipy.signal.cheb2ord(wp, ws, gpass, gstop, analog=0)
Chebyshev type II filter order selection.


Return the order of the lowest order digital Chebyshev Type II ﬁlter that loses no more than gpass dB in the passband and has at least gstop dB attenuation in the stopband. Parameters wp, ws : ﬂoat Passband and stopband edge frequencies, normalized from 0 to 1 (1 corresponds to pi radians / sample). For example: • Lowpass: wp = 0.2, ws = 0.3 • Highpass: wp = 0.3, ws = 0.2 • Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6] • Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5] gpass : ﬂoat The maximum loss in the passband (dB). gstop : ﬂoat The minimum attenuation in the stopband (dB). analog : int, optional Non-zero to design an analog ﬁlter (in this case wp and ws are in radians / second). Returns ord : int The lowest order for a Chebyshev type II ﬁlter that meets specs. wn : ndarray or ﬂoat The Chebyshev natural frequency (the “3dB frequency”) for use with cheby2 to give ﬁlter results. scipy.signal.ellip(N, rp, rs, Wn, btype=’low’, analog=0, output=’ba’) Elliptic (Cauer) digital and analog ﬁlter design. Design an Nth order lowpass digital or analog elliptic ﬁlter and return the ﬁlter coefﬁcients in (B,A) or (Z,P,K) form. See Also: ellipord. scipy.signal.ellipord(wp, ws, gpass, gstop, analog=0) Elliptic (Cauer) ﬁlter order selection. Return the order of the lowest order digital elliptic ﬁlter that loses no more than gpass dB in the passband and has at least gstop dB attenuation in the stopband. Parameters wp, ws : ﬂoat Passband and stopband edge frequencies, normalized from 0 to 1 (1 corresponds to pi radians / sample). For example: • Lowpass: wp = 0.2, ws = 0.3 • Highpass: wp = 0.3, ws = 0.2 • Bandpass: wp = [0.2, 0.5], ws = [0.1, 0.6]


• Bandstop: wp = [0.1, 0.6], ws = [0.2, 0.5]
gpass : float
    The maximum loss in the passband (dB).
gstop : float
    The minimum attenuation in the stopband (dB).
analog : int, optional
    Non-zero to design an analog filter (in this case wp and ws are in radians / second).

Returns
ord : int
    The lowest order for an Elliptic (Cauer) filter that meets specs.
wn : ndarray or float
    The Chebyshev natural frequency (the "3dB frequency") for use with ellip to give filter results.

scipy.signal.bessel(N, Wn, btype='low', analog=0, output='ba')
Bessel digital and analog filter design. Design an Nth order lowpass digital or analog Bessel filter and return the filter coefficients in (B,A) or (Z,P,K) form.
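A brief elliptic-design sketch (illustrative order and spec values, using the ripple/attenuation conventions described above):

```python
import numpy as np
from scipy.signal import ellip, freqz

# 4th-order elliptic lowpass: rp = 1 dB of passband ripple, rs = 60 dB of
# stopband attenuation, cutoff Wn = 0.3 (in units of the Nyquist frequency).
b, a = ellip(4, 1, 60, 0.3)

w, h = freqz(b, a)
# Passband gain never exceeds unity, and deep in the stopband the gain
# stays at or below -60 dB (|h| <= 1e-3).
print(np.max(np.abs(h)))
```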

4.16.6 Continuous-Time Linear Systems

lti(*args, **kwords)
    Linear Time Invariant class which simplifies representation.
lsim(system, U, T[, X0, interp])
    Simulate output of a continuous-time linear system.
lsim2(system, **kwargs[, U, T, X0])
    Simulate output of a continuous-time linear system, by using the ODE solver scipy.integrate.odeint.
impulse(system[, X0, T, N])
    Impulse response of continuous-time system.
impulse2(system, **kwargs[, X0, T, N])
    Impulse response of a single-input, continuous-time linear system.
step(system[, X0, T, N])
    Step response of continuous-time system.
step2(system, **kwargs[, X0, T, N])
    Step response of continuous-time system.

class scipy.signal.lti(*args, **kwords)
    Linear Time Invariant class which simplifies representation.

    Methods
    impulse(system[, X0, T, N])
        Impulse response of continuous-time system.
    output
    step(system[, X0, T, N])
        Step response of continuous-time system.
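A small usage sketch for the lti class (the system below is illustrative):

```python
import numpy as np
from scipy import signal

# A critically damped second-order system H(s) = 1 / (s**2 + 2s + 1),
# built from a (num, den) pair.
sys = signal.lti([1.0], [1.0, 2.0, 1.0])

# The zero-pole-gain view of the same system is available as attributes.
print(sys.poles)   # double pole at s = -1
print(sys.zeros)   # no finite zeros
```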

scipy.signal.impulse(system, X0=None, T=None, N=None) Impulse response of continuous-time system. Parameters system : LTI class or tuple If speciﬁed as a tuple, the system is described as (num, den), (zero, pole, gain), or (A, B, C, D). X0 : array_like, optional


Initial state-vector. Defaults to zero. T : array_like, optional Time points. Computed if not given. N : int, optional The number of time points to compute (if T is not given). Returns T : ndarray A 1-D array of time points. yout : ndarray A 1-D array containing the impulse response of the system (except for singularities at zero). scipy.signal.step(system, X0=None, T=None, N=None) Step response of continuous-time system. Parameters system : an instance of the LTI class or a tuple describing the system. The following gives the number of elements in the tuple and the interpretation. 2 (num, den) 3 (zeros, poles, gain) 4 (A, B, C, D) X0 : array_like, optional Initial state-vector (default is zero). T : array_like, optional Time points (computed if not given). N : int Number of time points to compute if T is not given. Returns T : 1D ndarray Output time points. yout : 1D ndarray Step response of system. See Also: scipy.signal.step2 scipy.signal.lsim(system, U, T, X0=None, interp=1) Simulate output of a continuous-time linear system. Parameters system : an instance of the LTI class or a tuple describing the system. The following gives the number of elements in the tuple and the interpretation: • 2: (num, den) • 3: (zeros, poles, gain) • 4: (A, B, C, D)


U : array_like An input array describing the input at each time T (interpolation is assumed between given times). If there are multiple inputs, then each column of the rank-2 array represents an input. T : array_like The time steps at which the input is deﬁned and at which the output is desired. X0 : : The initial conditions on the state vector (zero by default). interp : {1, 0} Whether to use linear (1) or zero-order hold (0) interpolation. Returns T : 1D ndarray Time values for the output. yout : 1D ndarray System response. xout : ndarray Time-evolution of the state-vector. scipy.signal.lsim2(system, U=None, T=None, X0=None, **kwargs) Simulate output of a continuous-time linear system, by using the ODE solver scipy.integrate.odeint. Parameters system : an instance of the LTI class or a tuple describing the system. The following gives the number of elements in the tuple and the interpretation: • 2: (num, den) • 3: (zeros, poles, gain) • 4: (A, B, C, D) U : array_like (1D or 2D), optional An input array describing the input at each time T. Linear interpolation is used between given times. If there are multiple inputs, then each column of the rank-2 array represents an input. If U is not given, the input is assumed to be zero. T : array_like (1D or 2D), optional The time steps at which the input is deﬁned and at which the output is desired. The default is 101 evenly spaced points on the interval [0,10.0]. X0 : array_like (1D), optional The initial condition of the state vector. If X0 is not given, the initial conditions are assumed to be 0. kwargs : dict Additional keyword arguments are passed on to the function odeint. See the notes below for more details. Returns T : 1D ndarray


The time values for the output. yout : ndarray The response of the system. xout : ndarray The time-evolution of the state-vector. Notes This function uses scipy.integrate.odeint to solve the system’s differential equations. Additional keyword arguments given to lsim2 are passed on to odeint. See the documentation for scipy.integrate.odeint for the full list of arguments. scipy.signal.impulse(system, X0=None, T=None, N=None) Impulse response of continuous-time system. Parameters system : LTI class or tuple If speciﬁed as a tuple, the system is described as (num, den), (zero, pole, gain), or (A, B, C, D). X0 : array_like, optional Initial state-vector. Defaults to zero. T : array_like, optional Time points. Computed if not given. N : int, optional The number of time points to compute (if T is not given). Returns T : ndarray A 1-D array of time points. yout : ndarray A 1-D array containing the impulse response of the system (except for singularities at zero). scipy.signal.impulse2(system, X0=None, T=None, N=None, **kwargs) Impulse response of a single-input, continuous-time linear system. Parameters system : an instance of the LTI class or a tuple describing the system. The following gives the number of elements in the tuple and the interpretation: 2 (num, den) 3 (zeros, poles, gain) 4 (A, B, C, D) T : 1-D array_like, optional The time steps at which the input is deﬁned and at which the output is desired. If T is not given, the function will generate a set of time samples automatically. X0 : 1-D array_like, optional The initial condition of the state vector. Default: 0 (the zero vector). N : int, optional


Number of time points to compute. Default: 100.
kwargs : various types
    Additional keyword arguments are passed on to the function scipy.signal.lsim2, which in turn passes them on to scipy.integrate.odeint; see the latter's documentation for information about these arguments.

Returns
T : ndarray
    The time values for the output.
yout : ndarray
    The output response of the system.

See Also: impulse, lsim2, integrate.odeint

Notes
The solution is generated by calling scipy.signal.lsim2, which uses the differential equation solver scipy.integrate.odeint. New in version 0.8.0.

Examples
Second order system with a repeated root: x''(t) + 2*x'(t) + x(t) = u(t)

>>> from scipy import signal
>>> system = ([1.0], [1.0, 2.0, 1.0])
>>> t, y = signal.impulse2(system)
>>> import matplotlib.pyplot as plt
>>> plt.plot(t, y)

[Figure: impulse response of the system above, t*exp(-t), peaking near 0.37 at t = 1 and decaying toward zero over 0 <= t <= 7.]
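The same simulation machinery can be driven with an explicit input through lsim; a short sketch with an illustrative first-order system and a unit-step input:

```python
import numpy as np
from scipy.signal import lsim

# First-order lag H(s) = 1 / (s + 1) driven by a unit step: the output
# should rise toward 1 with unit time constant.
T = np.linspace(0, 10, 101)
U = np.ones_like(T)
tout, yout, xout = lsim(([1.0], [1.0, 1.0]), U, T)

print(yout[-1])   # close to 1 - exp(-10), i.e. nearly 1.0
```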

scipy.signal.step(system, X0=None, T=None, N=None) Step response of continuous-time system. Parameters system : an instance of the LTI class or a tuple describing the system.


The following gives the number of elements in the tuple and the interpretation: 2 (num, den); 3 (zeros, poles, gain); 4 (A, B, C, D).
X0 : array_like, optional
    Initial state-vector (default is zero).
T : array_like, optional
    Time points (computed if not given).
N : int
    Number of time points to compute if T is not given.

Returns
T : 1D ndarray
    Output time points.
yout : 1D ndarray
    Step response of system.

See Also: scipy.signal.step2

scipy.signal.step2(system, X0=None, T=None, N=None, **kwargs)
Step response of continuous-time system. This function is functionally the same as scipy.signal.step, but it uses the function scipy.signal.lsim2 to compute the step response.

Parameters
system : an instance of the LTI class or a tuple describing the system.
    The following gives the number of elements in the tuple and the interpretation: 2 (num, den); 3 (zeros, poles, gain); 4 (A, B, C, D).
X0 : array_like, optional
    Initial state-vector (default is zero).
T : array_like, optional
    Time points (computed if not given).
N : int
    Number of time points to compute if T is not given.
**kwargs :
    Additional keyword arguments are passed on to the function scipy.signal.lsim2, which in turn passes them on to scipy.integrate.odeint. See the documentation for scipy.integrate.odeint for information about these arguments.

Returns
T : 1D ndarray
    Output time points.
yout : 1D ndarray
    Step response of system.


See Also: scipy.signal.step Notes New in version 0.8.0.

4.16.7 Discrete-Time Linear Systems

dlsim -- simulation of output to a discrete-time linear system.
dimpulse -- impulse response of a discrete-time LTI system.
dstep -- step response of a discrete-time LTI system.

4.16.8 LTI Representations

tf2zpk(b, a)
    Return zero, pole, gain (z,p,k) representation from a numerator, denominator representation of a linear filter.
zpk2tf(z, p, k)
    Return polynomial transfer function representation from zeros and poles.
tf2ss(num, den)
    Transfer function to state-space representation.
ss2tf(A, B, C, D[, input])
    State-space to transfer function.
zpk2ss(z, p, k)
    Zero-pole-gain representation to state-space representation.
ss2zpk(A, B, C, D[, input])
    State-space representation to zero-pole-gain representation.
cont2discrete(sys, dt[, method, alpha])
    Transform a continuous to a discrete state-space system.
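The conversions listed above can be sanity-checked with a quick round trip between the transfer-function and zero-pole-gain forms; a small sketch (illustrative polynomials):

```python
import numpy as np
from scipy.signal import tf2zpk, zpk2tf

# H(z) = (z**2 + 3z + 2) / (z**2 + 5z + 6) = (z+1)(z+2) / ((z+2)(z+3))
z, p, k = tf2zpk([1.0, 3.0, 2.0], [1.0, 5.0, 6.0])
print(np.sort(z), np.sort(p), k)   # zeros -2, -1; poles -3, -2; gain 1.0

# zpk2tf inverts the conversion, up to floating-point error.
b, a = zpk2tf(z, p, k)
print(b, a)
```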

scipy.signal.tf2zpk(b, a)
Return zero, pole, gain (z,p,k) representation from a numerator, denominator representation of a linear filter.

Parameters
b : ndarray
    Numerator polynomial.
a : ndarray
    Denominator polynomial.

Returns
z : ndarray
    Zeros of the transfer function.
p : ndarray
    Poles of the transfer function.
k : float
    System gain.

If some values of b are too close to 0, they are removed; in that case, a BadCoefficients warning is emitted.

scipy.signal.zpk2tf(z, p, k)
Return polynomial transfer function representation from zeros and poles.

Parameters
z : ndarray
    Zeros of the transfer function.


p : ndarray Poles of the transfer function. k : ﬂoat System gain. Returns b : ndarray Numerator polynomial. a : ndarray Denominator polynomial. scipy.signal.tf2ss(num, den) Transfer function to state-space representation. Parameters num, den : array_like Sequences representing the numerator and denominator polynomials. The denominator needs to be at least as long as the numerator. Returns A, B, C, D : ndarray State space representation of the system. scipy.signal.ss2tf(A, B, C, D, input=0) State-space to transfer function. Parameters A, B, C, D : ndarray State-space representation of linear system. input : int, optional For multiple-input systems, the input to use. Returns num, den : 1D ndarray Numerator and denominator polynomials (as sequences) respectively. scipy.signal.zpk2ss(z, p, k) Zero-pole-gain representation to state-space representation Parameters z, p : sequence Zeros and poles. k : ﬂoat System gain. Returns A, B, C, D : ndarray State-space matrices. scipy.signal.ss2zpk(A, B, C, D, input=0) State-space representation to zero-pole-gain representation.


Parameters A, B, C, D : ndarray State-space representation of linear system. input : int, optional For multiple-input systems, the input to use. Returns z, p : sequence Zeros and poles. k : ﬂoat System gain. scipy.signal.cont2discrete(sys, dt, method=’zoh’, alpha=None) Transform a continuous to a discrete state-space system. Parameters sys : a tuple describing the system. The following gives the number of elements in the tuple and the interpretation: * 2: (num, den) * 3: (zeros, poles, gain) * 4: (A, B, C, D) dt : ﬂoat The discretization time step. method : {“gbt”, “bilinear”, “euler”, “backward_diff”, “zoh”} Which method to use: • gbt: generalized bilinear transformation • bilinear: Tustin’s approximation (“gbt” with alpha=0.5) • euler: Euler (or forward differencing) method (“gbt” with alpha=0) • backward_diff: Backwards differencing (“gbt” with alpha=1.0) • zoh: zero-order hold (default). alpha : ﬂoat within [0, 1] The generalized bilinear transformation weighting parameter, which should only be speciﬁed with method=”gbt”, and is ignored otherwise Returns sysd : tuple containing the discrete system Based on the input type, the output will be of the form (num, den, dt) for transfer function input (zeros, poles, gain, dt) for zeros-poles-gain input (A, B, C, D, dt) for state-space system input Notes By default, the routine uses a Zero-Order Hold (zoh) method to perform the transformation. Alternatively, a generalized bilinear transformation may be used, which includes the common Tustin’s bilinear approximation, an Euler’s method technique, or a backwards differencing technique.

The Zero-Order Hold (zoh) method is based on: http://en.wikipedia.org/wiki/Discretization#Discretization_of_linear_state_space_


The generalized bilinear approximation is based on: http://techteach.no/publications/discretetime_signals_systems/discrete.pdf and G. Zhang, X. Chen, and T. Chen, "Digital redesign via the generalized bilinear transformation," Int. J. Control, vol. 82, no. 4, pp. 741-754, 2009. (http://www.ece.ualberta.ca/~gfzhang/research/ZCC07_preprint.pdf)

4.16.9 Waveforms

chirp(t, f0, t1, f1[, method, phi, vertex_zero])
    Frequency-swept cosine generator.
gausspulse(t[, fc, bw, bwr, tpr, retquad, ...])
    Return a gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc*t).
sawtooth(t[, width])
    Return a periodic sawtooth waveform.
square(t[, duty])
    Return a periodic square-wave waveform.
sweep_poly(t, poly[, phi])
    Frequency-swept cosine generator, with a time-dependent frequency specified as a polynomial.

scipy.signal.chirp(t, f0, t1, f1, method=’linear’, phi=0, vertex_zero=True) Frequency-swept cosine generator. In the following, ‘Hz’ should be interpreted as ‘cycles per time unit’; there is no assumption here that the time unit is one second. The important distinction is that the units of rotation are cycles, not radians. Parameters t : ndarray Times at which to evaluate the waveform. f0 : ﬂoat Frequency (in Hz) at time t=0. t1 : ﬂoat Time at which f1 is speciﬁed. f1 : ﬂoat Frequency (in Hz) of the waveform at time t1. method : {‘linear’, ‘quadratic’, ‘logarithmic’, ‘hyperbolic’}, optional Kind of frequency sweep. If not given, linear is assumed. See Notes below for more details. phi : ﬂoat, optional Phase offset, in degrees. Default is 0. vertex_zero : bool, optional This parameter is only used when method is ‘quadratic’. It determines whether the vertex of the parabola that is the graph of the frequency is at t=0 or t=t1. Returns A numpy array containing the signal evaluated at ‘t’ with the requested : time-varying frequency. More precisely, the function returns: : cos(phase + (pi/180)*phi)


where ‘phase‘ is the integral (from 0 to t) of ‘‘2*pi*f(t)‘‘. : ‘‘f(t)‘‘ is deﬁned below. : See Also: scipy.signal.waveforms.sweep_poly Notes There are four options for the method. The following formulas give the instantaneous frequency (in Hz) of the signal generated by chirp(). For convenience, the shorter names shown below may also be used. linear, lin, li: f(t) = f0 + (f1 - f0) * t / t1 quadratic, quad, q: The graph of the frequency f(t) is a parabola through (0, f0) and (t1, f1). By default, the vertex of the parabola is at (0, f0). If vertex_zero is False, then the vertex is at (t1, f1). The formula is: if vertex_zero is True: f(t) = f0 + (f1 - f0) * t**2 / t1**2 else: f(t) = f1 - (f1 - f0) * (t1 - t)**2 / t1**2 To use a more general quadratic function, or an arbitrary polynomial, use the function scipy.signal.waveforms.sweep_poly. logarithmic, log, lo: f(t) = f0 * (f1/f0)**(t/t1) f0 and f1 must be nonzero and have the same sign. This signal is also known as a geometric or exponential chirp. hyperbolic, hyp: f(t) = f0*f1*t1 / ((f0 - f1)*t + f1*t1) f1 must be positive, and f0 must be greater than f1. scipy.signal.gausspulse(t, fc=1000, bw=0.5, bwr=-6, tpr=-60, retquad=False, retenv=False) Return a gaussian modulated sinusoid: exp(-a t^2) exp(1j*2*pi*fc*t). If retquad is True, then return the real and imaginary parts (in-phase and quadrature). If retenv is True, then return the envelope (unmodulated signal). Otherwise, return the real part of the modulated sinusoid. Parameters t : ndarray, or the string ‘cutoff’ Input array. fc : int, optional Center frequency (Hz). Default is 1000. bw : ﬂoat, optional Fractional bandwidth in frequency domain of pulse (Hz). Default is 0.5. bwr: ﬂoat, optional : Reference level at which fractional bandwidth is calculated (dB). Default is -6.


tpr : ﬂoat, optional If t is ‘cutoff’, then the function returns the cutoff time for when the pulse amplitude falls below tpr (in dB). Default is -60. retquad : bool, optional If True, return the quadrature (imaginary) as well as the real part of the signal. Default is False. retenv : bool, optional If True, return the envelope of the signal. Default is False. scipy.signal.sawtooth(t, width=1) Return a periodic sawtooth waveform. The sawtooth waveform has a period 2*pi, rises from -1 to 1 on the interval 0 to width*2*pi and drops from 1 to -1 on the interval width*2*pi to 2*pi. width must be in the interval [0,1]. Parameters t : array_like Time. width : ﬂoat, optional Width of the waveform. Default is 1. Returns y : ndarray Output array containing the sawtooth waveform. Examples

>>> import numpy as np
>>> from scipy import signal
>>> import matplotlib.pyplot as plt
>>> x = np.linspace(0, 20*np.pi, 500)
>>> plt.plot(x, signal.sawtooth(x))

[Figure: sawtooth waveform ramping between -1 and 1 over x = 0 to 20*pi.]

scipy.signal.square(t, duty=0.5) Return a periodic square-wave waveform.


The square wave has a period 2*pi, has value +1 from 0 to 2*pi*duty and -1 from 2*pi*duty to 2*pi. duty must be in the interval [0,1]. Parameters t : array_like The input time array. duty : ﬂoat, optional Duty cycle. Returns y : array_like The output square wave. scipy.signal.sweep_poly(t, poly, phi=0) Frequency-swept cosine generator, with a time-dependent frequency speciﬁed as a polynomial. This function generates a sinusoidal function whose instantaneous frequency varies with time. The frequency at time t is given by the polynomial poly. Parameters t : ndarray Times at which to evaluate the waveform. poly : 1D ndarray (or array-like), or instance of numpy.poly1d The desired frequency expressed as a polynomial. If poly is a list or ndarray of length n, then the elements of poly are the coefﬁcients of the polynomial, and the instantaneous frequency is f(t) = poly[0]*t**(n-1) + poly[1]*t**(n-2) + ... + poly[n-1] If poly is an instance of numpy.poly1d, then the instantaneous frequency is f(t) = poly(t) phi : ﬂoat, optional Phase offset, in degrees. Default is 0. Returns A numpy array containing the signal evaluated at ‘t’ with the requested : time-varying frequency. More precisely, the function returns : cos(phase + (pi/180)*phi) where ‘phase‘ is the integral (from 0 to t) of ‘‘2 * pi * f(t)‘‘; : ‘‘f(t)‘‘ is deﬁned above. : See Also: scipy.signal.waveforms.chirp Notes New in version 0.8.0.
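The waveform conventions above (square's duty cycle, sweep_poly's polynomial frequency) can be checked numerically; a small sketch with illustrative inputs:

```python
import numpy as np
from scipy.signal import square, sweep_poly

# square: with duty=0.5 the wave is +1 on [0, pi) and -1 on [pi, 2*pi)
# within each 2*pi period.
t = np.array([1.0, 4.0, 2 * np.pi + 1.0])
print(square(t))   # [ 1. -1.  1.]

# sweep_poly with the constant polynomial [2.0] is a plain 2 Hz cosine.
tt = np.linspace(0, 1, 5)
w = sweep_poly(tt, [2.0])
print(np.allclose(w, np.cos(2 * np.pi * 2.0 * tt)))   # True
```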


4.16.10 Window functions

get_window(window, Nx[, fftbins])      Return a window of length Nx and type window.
barthann(M[, sym])                     Return the M-point modified Bartlett-Hann window.
bartlett(M[, sym])                     The M-point Bartlett window.
blackman(M[, sym])                     The M-point Blackman window.
blackmanharris(M[, sym])               The M-point minimum 4-term Blackman-Harris window.
bohman(M[, sym])                       The M-point Bohman window.
boxcar(M[, sym])                       The M-point boxcar window.
chebwin(M, at[, sym])                  Dolph-Chebyshev window.
flattop(M[, sym])                      The M-point Flat top window.
gaussian(M, std[, sym])                Return a Gaussian window of length M with standard-deviation std.
general_gaussian(M, p, sig[, sym])     Return a window with a generalized Gaussian shape.
hamming(M[, sym])                      The M-point Hamming window.
hann(M[, sym])                         The M-point Hanning window.
kaiser(M, beta[, sym])                 Return a Kaiser window of length M with shape parameter beta.
nuttall(M[, sym])                      A minimum 4-term Blackman-Harris window according to Nuttall.
parzen(M[, sym])                       The M-point Parzen window.
slepian(M, width[, sym])               Return the M-point slepian window.
triang(M[, sym])                       The M-point triangular window.

scipy.signal.get_window(window, Nx, fftbins=True)
    Return a window of length Nx and type window.

    Parameters
    window : string, float, or tuple
        The type of window to create. See below for more details.
    Nx : int
        The number of samples in the window.
    fftbins : bool, optional
        If True, create a "periodic" window ready to use with ifftshift
        and be multiplied by the result of an fft (see also fftfreq).

    Notes
    Window types: boxcar, triang, blackman, hamming, hanning, bartlett,
    parzen, bohman, blackmanharris, nuttall, barthann, kaiser (needs beta),
    gaussian (needs std), general_gaussian (needs power, width),
    slepian (needs width), chebwin (needs attenuation).

    If the window requires no parameters, then window can be a string.
    If the window requires parameters, then window must be a tuple with the
    first argument the string name of the window, and the next arguments
    the needed parameters. If window is a floating point number, it is
    interpreted as the beta parameter of the kaiser window.

    Each of the window types listed above is also the name of a function
    that can be called directly to create a window of that type.

    Examples
    >>> get_window('triang', 7)
    array([ 0.25,  0.5 ,  0.75,  1.  ,  0.75,  0.5 ,  0.25])

    >>> get_window(('kaiser', 4.0), 9)
    array([ 0.08848053,  0.32578323,  0.63343178,  0.89640418,  1.        ,
            0.89640418,  0.63343178,  0.32578323,  0.08848053])
    >>> get_window(4.0, 9)
    array([ 0.08848053,  0.32578323,  0.63343178,  0.89640418,  1.        ,
            0.89640418,  0.63343178,  0.32578323,  0.08848053])

scipy.signal.barthann(M, sym=True)
    Return the M-point modified Bartlett-Hann window.

scipy.signal.bartlett(M, sym=True)
    The M-point Bartlett window.

scipy.signal.blackman(M, sym=True)
    The M-point Blackman window.

scipy.signal.blackmanharris(M, sym=True)
    The M-point minimum 4-term Blackman-Harris window.

scipy.signal.bohman(M, sym=True)
    The M-point Bohman window.

scipy.signal.boxcar(M, sym=True)
    The M-point boxcar window.

scipy.signal.chebwin(M, at, sym=True)
    Dolph-Chebyshev window.

    Parameters
    M : int
        Window size.
    at : float
        Attenuation (in dB).
    sym : bool
        Generates symmetric window if True.

scipy.signal.flattop(M, sym=True)
    The M-point Flat top window.

scipy.signal.gaussian(M, std, sym=True)
    Return a Gaussian window of length M with standard-deviation std.

scipy.signal.general_gaussian(M, p, sig, sym=True)
    Return a window with a generalized Gaussian shape. The Gaussian shape
    is defined as exp(-0.5*(x/sig)**(2*p)); the half-power point is at
    (2*log(2))**(1/(2*p)) * sig.

scipy.signal.hamming(M, sym=True)
    The M-point Hamming window.

scipy.signal.hann(M, sym=True)
    The M-point Hanning window.

scipy.signal.kaiser(M, beta, sym=True)
    Return a Kaiser window of length M with shape parameter beta.

scipy.signal.nuttall(M, sym=True)
    A minimum 4-term Blackman-Harris window according to Nuttall.
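The calling conventions described above can be exercised directly. The snippet below is an illustrative sketch (it assumes SciPy is installed): a bare float is shorthand for a Kaiser beta, and the default fftbins=True yields the "periodic" variant, which differs from the symmetric one.

```python
import numpy as np
from scipy.signal import get_window

# A float window spec is interpreted as the beta of a Kaiser window.
w_tuple = get_window(('kaiser', 4.0), 9)
w_float = get_window(4.0, 9)
print(np.allclose(w_tuple, w_float))      # True

# fftbins=True (the default) gives the periodic window; fftbins=False
# gives the symmetric one -- the two differ for the same length.
w_per = get_window('hann', 8)
w_sym = get_window('hann', 8, fftbins=False)
print(np.allclose(w_per, w_sym))          # False
```

Use the periodic form for spectral analysis with the FFT, and the symmetric form for filter design.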

scipy.signal.parzen(M, sym=True)
    The M-point Parzen window.

scipy.signal.slepian(M, width, sym=True)
    Return the M-point slepian window.

scipy.signal.triang(M, sym=True)
    The M-point triangular window.

4.16.11 Wavelets

cascade(hk[, J])              Return (x, phi, psi) at dyadic points K/2**J from filter coefficients.
daub(p)                       The coefficients for the FIR low-pass filter producing Daubechies wavelets.
morlet(M[, w, s, complete])   Complex Morlet wavelet.
qmf(hk)                       Return high-pass qmf filter from low-pass

scipy.signal.cascade(hk, J=7)
    Return (x, phi, psi) at dyadic points K/2**J from filter coefficients.

    Parameters
    hk :
        Coefficients of low-pass filter.
    J : int, optional
        Values will be computed at grid points K/2**J.

    Returns
    x :
        The dyadic points K/2**J for K=0...N * (2**J)-1 where
        len(hk) = len(gk) = N+1.
    phi :
        The scaling function phi(x) at x:

            phi(x) = sum_{k=0..N} hk * phi(2x-k)

    psi :
        The wavelet function psi(x) at x:

            psi(x) = sum_{k=0..N} gk * phi(2x-k)

        psi is only returned if gk is not None.

    Notes
    The algorithm uses the vector cascade algorithm described by Strang
    and Nguyen in "Wavelets and Filter Banks". It builds a dictionary of
    values and slices for quick reuse, then inserts vectors into the final
    vector at the end.
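The quadrature-mirror relation implemented by qmf (documented below) can be sketched in plain NumPy. The Haar filter used here is the Daubechies filter of order p=1 (2p = 2 taps), written out by hand rather than obtained from daub.

```python
import numpy as np

# Haar low-pass filter: the Daubechies filter for p = 1, written by hand.
hk = np.array([1.0, 1.0]) / np.sqrt(2.0)

def qmf_sketch(hk):
    # High-pass quadrature-mirror filter from a low-pass filter:
    # reverse the taps and alternate the signs.
    signs = (-1.0) ** np.arange(len(hk))
    return hk[::-1] * signs

gk = qmf_sketch(hk)
print(gk)                    # [ 0.70710678 -0.70710678]
print(abs(np.dot(hk, gk)))   # 0.0 -- the two filters are orthogonal
```

The pair (hk, gk) is exactly the low-pass/high-pass filter bank that cascade uses to build the scaling and wavelet functions.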

scipy.signal.daub(p)
    The coefficients for the FIR low-pass filter producing Daubechies
    wavelets. p >= 1 gives the order of the zero at f=1/2; there are 2p
    filter coefficients.

    Parameters
    p : int
        Order of the zero at f=1/2, can have values from 1 to 34.

scipy.signal.morlet(M, w=5.0, s=1.0, complete=True)
    Complex Morlet wavelet.

    Parameters
    M : int
        Length of the wavelet.
    w : float
        Omega0
    s : float
        Scaling factor, windowed from -s*2*pi to +s*2*pi.
    complete : bool
        Whether to use the complete or the standard version.

    Notes
    The standard version:

        pi**-0.25 * exp(1j*w*x) * exp(-0.5*(x**2))

    This commonly used wavelet is often referred to simply as the Morlet
    wavelet. Note that this simplified version can cause admissibility
    problems at low values of w.

    The complete version:

        pi**-0.25 * (exp(1j*w*x) - exp(-0.5*(w**2))) * exp(-0.5*(x**2))

    The complete version of the Morlet wavelet includes a correction term
    to improve admissibility. For w greater than 5, the correction term is
    negligible.

    Note that the energy of the returned wavelet is not normalised
    according to s.

    The fundamental frequency of this wavelet in Hz is given by
    f = 2*s*w*r / M, where r is the sampling rate.

scipy.signal.qmf(hk)
    Return high-pass qmf filter from low-pass

4.17 Sparse matrices (scipy.sparse)

SciPy 2-D sparse matrix package.

4.17.1 Contents

Sparse matrix classes

bsr_matrix(arg1[, shape, dtype, copy, blocksize])   Block Sparse Row matrix
coo_matrix(arg1[, shape, dtype, copy])              A sparse matrix in COOrdinate format.
csc_matrix(arg1[, shape, dtype, copy])              Compressed Sparse Column matrix
csr_matrix(arg1[, shape, dtype, copy])              Compressed Sparse Row matrix
dia_matrix(arg1[, shape, dtype, copy])              Sparse matrix with DIAgonal storage
dok_matrix(arg1[, shape, dtype, copy])              Dictionary Of Keys based sparse matrix.
lil_matrix(arg1[, shape, dtype, copy])              Row-based linked list sparse matrix

class scipy.sparse.bsr_matrix(arg1, shape=None, dtype=None, copy=False, blocksize=None)
    Block Sparse Row matrix

    This can be instantiated in several ways:

    bsr_matrix(D, [blocksize=(R,C)])
        with a dense matrix or rank-2 ndarray D

    bsr_matrix(S, [blocksize=(R,C)])
        with another sparse matrix S (equivalent to S.tobsr())

    bsr_matrix((M, N), [blocksize=(R,C), dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    bsr_matrix((data, ij), [blocksize=(R,C), shape=(M, N)])
        where data and ij satisfy a[ij[0, k], ij[1, k]] = data[k]

    bsr_matrix((data, indices, indptr), [shape=(M, N)])
        is the standard BSR representation where the block column indices
        for row i are stored in indices[indptr[i]:indptr[i+1]] and their
        corresponding block values are stored in
        data[indptr[i]:indptr[i+1]]. If the shape parameter is not
        supplied, the matrix dimensions are inferred from the index
        arrays.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Summary of BSR format:

    * The Block Compressed Row (BSR) format is very similar to the
      Compressed Sparse Row (CSR) format. BSR is appropriate for sparse
      matrices with dense sub matrices like the last example below. Block
      matrices often arise in vector-valued finite element
      discretizations. In such cases, BSR is considerably more efficient
      than CSR and CSC for many sparse arithmetic operations.

    Blocksize:

    * The blocksize (R,C) must evenly divide the shape of the matrix
      (M,N); that is, R and C must satisfy M % R = 0 and N % C = 0.
    * If no blocksize is specified, a simple heuristic is applied to
      determine an appropriate blocksize.

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> bsr_matrix( (3,4), dtype=int8 ).todense()
    matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], dtype=int8)

    >>> row  = array([0,0,1,2,2,2])
    >>> col  = array([0,2,2,0,1,2])
    >>> data = array([1,2,3,4,5,6])
    >>> bsr_matrix( (data,(row,col)), shape=(3,3) ).todense()
    matrix([[1, 0, 2],
            [0, 0, 3],
            [4, 5, 6]])

    >>> indptr  = array([0,2,3,6])
    >>> indices = array([0,2,2,0,1,2])
    >>> data    = array([1,2,3,4,5,6]).repeat(4).reshape(6,2,2)
    >>> bsr_matrix( (data,indices,indptr), shape=(6,6) ).todense()
    matrix([[1, 1, 0, 0, 2, 2],
            [1, 1, 0, 0, 2, 2],
            [0, 0, 0, 0, 3, 3],
            [0, 0, 0, 0, 3, 3],
            [4, 4, 5, 5, 6, 6],
            [4, 4, 5, 5, 6, 6]])

    Attributes
    dtype, shape, ndim, nnz, blocksize, has_sorted_indices
    data       Data array of the matrix
    indices    BSR format index array
    indptr     BSR format index pointer array

    Methods
    asformat, asfptype, astype, check_format, conj, conjugate, copy,
    diagonal, dot, eliminate_zeros, getH, get_shape, getcol, getdata,
    getformat, getmaxprint, getnnz, getrow, matmat, matvec, mean,
    multiply, nonzero, prune, reshape, set_shape, setdiag, sort_indices,
    sorted_indices, sum, sum_duplicates, toarray, tobsr, tocoo, tocsc,
    tocsr, todense, todia, todok, tolil, transpose
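A quick check of the blocksize constraint described above, sketched under the assumption that SciPy is installed: (R, C) must evenly divide (M, N), here 2x2 blocks inside a 4x4 matrix built from two dense 2x2 sub-blocks.

```python
import numpy as np
from scipy.sparse import bsr_matrix

# A 4x4 dense matrix whose nonzeros form two dense 2x2 blocks.
D = np.kron(np.array([[1.0, 0.0], [0.0, 2.0]]), np.ones((2, 2)))

# blocksize=(2, 2) divides the (4, 4) shape evenly, as required.
A = bsr_matrix(D, blocksize=(2, 2))
print(A.blocksize)                  # (2, 2)
print(np.allclose(A.toarray(), D))  # True
```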

class scipy.sparse.coo_matrix(arg1, shape=None, dtype=None, copy=False)
    A sparse matrix in COOrdinate format. Also known as the 'ijv' or
    'triplet' format.

    This can be instantiated in several ways:

    coo_matrix(D)
        with a dense matrix D

    coo_matrix(S)
        with another sparse matrix S (equivalent to S.tocoo())

    coo_matrix((M, N), [dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    coo_matrix((data, ij), [shape=(M, N)])
        The arguments 'data' and 'ij' represent three arrays:
            1. data[:]   the entries of the matrix, in any order
            2. ij[0][:]  the row indices of the matrix entries
            3. ij[1][:]  the column indices of the matrix entries

    where A[ij[0][k], ij[1][k]] = data[k]. When shape is not specified,
    it is inferred from the index arrays.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Advantages of the COO format
        * facilitates fast conversion among sparse formats
        * permits duplicate entries (see example)
        * very fast conversion to and from CSR/CSC formats

    Disadvantages of the COO format
        * does not directly support arithmetic operations or slicing

    Intended Usage
        * COO is a fast format for constructing sparse matrices.
        * Once a matrix has been constructed, convert to CSR or CSC
          format for fast arithmetic and matrix vector operations.
        * By default when converting to CSR or CSC format, duplicate
          (i,j) entries will be summed together. This facilitates
          efficient construction of finite element matrices and the like.
          (see example)

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> coo_matrix( (3,4), dtype=int8 ).todense()
    matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], dtype=int8)

    >>> row  = array([0,3,1,0])
    >>> col  = array([0,3,1,2])
    >>> data = array([4,5,7,9])
    >>> coo_matrix( (data,(row,col)), shape=(4,4) ).todense()
    matrix([[4, 0, 9, 0],
            [0, 7, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 5]])

    >>> # example with duplicates
    >>> row  = array([0,0,1,3,1,0,0])
    >>> col  = array([0,2,1,3,1,0,0])
    >>> data = array([1,1,1,1,1,1,1])
    >>> coo_matrix( (data,(row,col)), shape=(4,4) ).todense()
    matrix([[3, 0, 1, 0],
            [0, 2, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 1]])
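The intended usage pattern above — accumulate triplets, then convert once — can be sketched as follows (this assumes SciPy is installed). Note how the duplicate (0, 0) entries are summed during the conversion.

```python
import numpy as np
from scipy.sparse import coo_matrix

rows, cols, vals = [], [], []
for i in range(4):                                  # accumulate a small diagonal
    rows.append(i); cols.append(i); vals.append(2.0)
rows.append(0); cols.append(0); vals.append(1.0)    # duplicate entry at (0, 0)

# Convert once to CSR for fast arithmetic; duplicates are summed here.
A = coo_matrix((vals, (rows, cols)), shape=(4, 4)).tocsr()
print(A[0, 0])   # 3.0 -- the two (0, 0) entries were summed
print(A.nnz)     # 4 stored entries after the duplicates are merged
```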

    Attributes
    dtype, shape, ndim, nnz
    data    COO format data array of the matrix
    row     COO format row index array of the matrix
    col     COO format column index array of the matrix

    Methods
    asformat, asfptype, astype, conj, conjugate, copy, diagonal, dot,
    getH, get_shape, getcol, getformat, getmaxprint, getnnz, getrow,
    mean, multiply, nonzero, reshape, set_shape, setdiag, sum, toarray,
    tobsr, tocoo, tocsc, tocsr, todense, todia, todok, tolil, transpose

class scipy.sparse.csc_matrix(arg1, shape=None, dtype=None, copy=False)
    Compressed Sparse Column matrix

    This can be instantiated in several ways:

    csc_matrix(D)
        with a dense matrix or rank-2 ndarray D

    csc_matrix(S)
        with another sparse matrix S (equivalent to S.tocsc())

    csc_matrix((M, N), [dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    csc_matrix((data, ij), [shape=(M, N)])
        where data and ij satisfy the relationship
        a[ij[0, k], ij[1, k]] = data[k]

    csc_matrix((data, indices, indptr), [shape=(M, N)])
        is the standard CSC representation where the row indices for
        column i are stored in indices[indptr[i]:indptr[i+1]] and their
        corresponding values are stored in data[indptr[i]:indptr[i+1]].
        If the shape parameter is not supplied, the matrix dimensions are
        inferred from the index arrays.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Advantages of the CSC format
        * efficient arithmetic operations CSC + CSC, CSC * CSC, etc.
        * efficient column slicing
        * fast matrix vector products (CSR, BSR may be faster)

    Disadvantages of the CSC format
        * slow row slicing operations (consider CSR)
        * changes to the sparsity structure are expensive (consider LIL
          or DOK)

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> csc_matrix( (3,4), dtype=int8 ).todense()
    matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], dtype=int8)

    >>> row  = array([0,2,2,0,1,2])
    >>> col  = array([0,0,1,2,2,2])
    >>> data = array([1,2,3,4,5,6])
    >>> csc_matrix( (data,(row,col)), shape=(3,3) ).todense()
    matrix([[1, 0, 4],
            [0, 0, 5],
            [2, 3, 6]])

    >>> indptr  = array([0,2,3,6])
    >>> indices = array([0,2,2,0,1,2])
    >>> data    = array([1,2,3,4,5,6])
    >>> csc_matrix( (data,indices,indptr), shape=(3,3) ).todense()
    matrix([[1, 0, 4],
            [0, 0, 5],
            [2, 3, 6]])

    Attributes
    dtype, shape, ndim, nnz, has_sorted_indices
    data       Data array of the matrix
    indices    CSC format index array
    indptr     CSC format index pointer array

    Methods
    asformat, asfptype, astype, check_format, conj, conjugate, copy,
    diagonal, dot, eliminate_zeros, getH, get_shape, getcol, getformat,
    getmaxprint, getnnz, getrow, mean, multiply, nonzero, prune, reshape,
    set_shape, setdiag, sort_indices, sorted_indices, sum,
    sum_duplicates, toarray, tobsr, tocoo, tocsc, tocsr, todense, todia,
    todok, tolil, transpose

class scipy.sparse.csr_matrix(arg1, shape=None, dtype=None, copy=False)
    Compressed Sparse Row matrix

    This can be instantiated in several ways:

    csr_matrix(D)
        with a dense matrix or rank-2 ndarray D

    csr_matrix(S)
        with another sparse matrix S (equivalent to S.tocsr())

    csr_matrix((M, N), [dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    csr_matrix((data, ij), [shape=(M, N)])
        where data and ij satisfy the relationship
        a[ij[0, k], ij[1, k]] = data[k]

    csr_matrix((data, indices, indptr), [shape=(M, N)])
        is the standard CSR representation where the column indices for
        row i are stored in indices[indptr[i]:indptr[i+1]] and their
        corresponding values are stored in data[indptr[i]:indptr[i+1]].
        If the shape parameter is not supplied, the matrix dimensions are
        inferred from the index arrays.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Advantages of the CSR format
        * efficient arithmetic operations CSR + CSR, CSR * CSR, etc.
        * efficient row slicing
        * fast matrix vector products

    Disadvantages of the CSR format
        * slow column slicing operations (consider CSC)
        * changes to the sparsity structure are expensive (consider LIL
          or DOK)

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> csr_matrix( (3,4), dtype=int8 ).todense()
    matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], dtype=int8)

    >>> row  = array([0,0,1,2,2,2])
    >>> col  = array([0,2,2,0,1,2])
    >>> data = array([1,2,3,4,5,6])

    >>> csr_matrix( (data,(row,col)), shape=(3,3) ).todense()
    matrix([[1, 0, 2],
            [0, 0, 3],
            [4, 5, 6]])

    >>> indptr  = array([0,2,3,6])
    >>> indices = array([0,2,2,0,1,2])
    >>> data    = array([1,2,3,4,5,6])
    >>> csr_matrix( (data,indices,indptr), shape=(3,3) ).todense()
    matrix([[1, 0, 2],
            [0, 0, 3],
            [4, 5, 6]])

    Attributes
    dtype, shape, ndim, nnz, has_sorted_indices
    data       CSR format data array of the matrix
    indices    CSR format index array of the matrix
    indptr     CSR format index pointer array of the matrix

    Methods
    asformat, asfptype, astype, check_format, conj, conjugate, copy,
    diagonal, dot, eliminate_zeros, getH, get_shape, getcol, getformat,
    getmaxprint, getnnz, getrow, mean, multiply, nonzero, prune, reshape,
    set_shape, setdiag, sort_indices, sorted_indices, sum,
    sum_duplicates, toarray, tobsr, tocoo, tocsc, tocsr, todense, todia,
    todok, tolil, transpose

class scipy.sparse.dia_matrix(arg1, shape=None, dtype=None, copy=False)
    Sparse matrix with DIAgonal storage

    This can be instantiated in several ways:

    dia_matrix(D)
        with a dense matrix

    dia_matrix(S)
        with another sparse matrix S (equivalent to S.todia())

    dia_matrix((M, N), [dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    dia_matrix((data, offsets), shape=(M, N))
        where data[k,:] stores the diagonal entries for diagonal
        offsets[k] (see example below)

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> dia_matrix( (3,4), dtype=int8).todense()
    matrix([[0, 0, 0, 0],
            [0, 0, 0, 0],
            [0, 0, 0, 0]], dtype=int8)

    >>> data = array([[1,2,3,4]]).repeat(3,axis=0)
    >>> offsets = array([0,-1,2])
    >>> dia_matrix( (data,offsets), shape=(4,4)).todense()
    matrix([[1, 0, 3, 0],
            [1, 2, 0, 4],
            [0, 2, 3, 0],
            [0, 0, 3, 4]])

    Attributes
    dtype, shape, ndim, nnz

    data       DIA format data array of the matrix
    offsets    DIA format offset array of the matrix

    Methods
    asformat, asfptype, astype, conj, conjugate, copy, diagonal, dot,
    getH, get_shape, getcol, getformat, getmaxprint, getnnz, getrow,
    mean, multiply, nonzero, reshape, set_shape, setdiag, sum, toarray,
    tobsr, tocoo, tocsc, tocsr, todense, todia, todok, tolil, transpose

class scipy.sparse.dok_matrix(arg1, shape=None, dtype=None, copy=False)
    Dictionary Of Keys based sparse matrix.

    This is an efficient structure for constructing sparse matrices
    incrementally.

    This can be instantiated in several ways:

    dok_matrix(D)
        with a dense matrix, D

    dok_matrix(S)
        with a sparse matrix, S

    dok_matrix((M, N), [dtype])
        create the matrix with initial shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.
    Allows for efficient O(1) access of individual elements. Duplicates
    are not allowed. Can be efficiently converted to a coo_matrix once
    constructed.

    Examples
    >>> from scipy.sparse import *
    >>> from scipy import *
    >>> S = dok_matrix((5,5), dtype=float32)
    >>> for i in range(5):
    ...     for j in range(5):
    ...         S[i,j] = i+j    # Update element

    Attributes
    shape, ndim, nnz
    dtype    Data type of the matrix

    Methods
    asformat, asfptype, astype, clear, conj, conjtransp, conjugate, copy,
    diagonal, dot, fromkeys, get, getH, get_shape, getcol, getformat,
    getmaxprint, getnnz, getrow, has_key, items, iteritems, iterkeys,
    itervalues, keys, mean, multiply, nonzero, pop, popitem, reshape,
    resize, set_shape, setdefault, setdiag, split, sum, take, toarray,
    tobsr, tocoo, tocsc, tocsr, todense, todia, todok, tolil, transpose,
    update, values

class scipy.sparse.lil_matrix(arg1, shape=None, dtype=None, copy=False)
    Row-based linked list sparse matrix

    This is an efficient structure for constructing sparse matrices
    incrementally.

    This can be instantiated in several ways:

    lil_matrix(D)
        with a dense matrix or rank-2 ndarray D

    lil_matrix(S)
        with another sparse matrix S (equivalent to S.tolil())

    lil_matrix((M, N), [dtype])
        to construct an empty matrix with shape (M, N); dtype is optional,
        defaulting to dtype='d'.

    Notes
    Sparse matrices can be used in arithmetic operations: they support
    addition, subtraction, multiplication, division, and matrix power.

    Advantages of the LIL format
        * supports flexible slicing
        * changes to the matrix sparsity structure are efficient

    Disadvantages of the LIL format
        * arithmetic operations LIL + LIL are slow (consider CSR or CSC)
        * slow column slicing (consider CSC)

        * slow matrix vector products (consider CSR or CSC)

    Intended Usage
        * LIL is a convenient format for constructing sparse matrices.
        * Once a matrix has been constructed, convert to CSR or CSC
          format for fast arithmetic and matrix vector operations.
        * Consider using the COO format when constructing large matrices.

    Data Structure
        * An array (self.rows) of rows, each of which is a sorted list of
          column indices of non-zero elements.
        * The corresponding nonzero values are stored in similar fashion
          in self.data.

    Attributes
    shape, ndim, nnz
    dtype    Data type of the matrix
    data     LIL format data array of the matrix
    rows     LIL format row index array of the matrix

    Methods
    asformat, asfptype, astype, conj, conjugate, copy, diagonal, dot,
    getH, get_shape, getcol, getformat, getmaxprint, getnnz, getrow,
    getrowview, mean, multiply, nonzero, reshape, set_shape, setdiag,
    sum, toarray, tobsr, tocoo, tocsc, tocsr, todense, todia, todok,
    tolil, transpose
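The data structure described above can be inspected directly. A small sketch (it assumes SciPy is installed):

```python
from scipy.sparse import lil_matrix

A = lil_matrix((3, 4))
A[0, 1] = 10.0
A[0, 3] = 20.0
A[2, 0] = 30.0

# self.rows holds, per row, a sorted list of column indices;
# self.data holds the matching values in the same order.
print(A.rows[0], A.data[0])   # [1, 3] [10.0, 20.0]
print(A.rows[1], A.data[1])   # [] []
print(A.rows[2], A.data[2])   # [0] [30.0]
```

Because each row is an independent sorted list, inserting a new entry only touches that one row, which is why changes to the sparsity structure are cheap in LIL.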

Functions

Building sparse matrices:

eye(m, n[, k, dtype, format])           eye(m, n) returns a sparse (m x n) matrix where the k-th diagonal is all ones and everything else is zeros.
identity(n[, dtype, format])            Identity matrix in sparse format
kron(A, B[, format])                    kronecker product of sparse matrices A and B
kronsum(A, B[, format])                 kronecker sum of sparse matrices A and B
spdiags(data, diags, m, n[, format])    Return a sparse matrix from diagonals.
tril(A[, k, format])                    Return the lower triangular portion of a matrix in sparse format
triu(A[, k, format])                    Return the upper triangular portion of a matrix in sparse format
bmat(blocks[, format, dtype])           Build a sparse matrix from sparse sub-blocks
hstack(blocks[, format, dtype])         Stack sparse matrices horizontally (column wise)
vstack(blocks[, format, dtype])         Stack sparse matrices vertically (row wise)
rand(m, n[, density, format, dtype])    Generate a sparse matrix of the given shape and density with uniformly distributed values.

scipy.sparse.identity(n, dtype='d', format=None)
    Identity matrix in sparse format

    Returns an identity matrix with shape (n,n) using a given sparse
    format and dtype.

    Parameters
    n : integer
        Shape of the identity matrix.
    dtype :
        Data type of the matrix
    format : string
        Sparse format of the result, e.g. format='csr', etc.

    Examples
    >>> identity(3).todense()
    matrix([[ 1.,  0.,  0.],
            [ 0.,  1.,  0.],
            [ 0.,  0.,  1.]])
    >>> identity(3, dtype='int8', format='dia')
    <3x3 sparse matrix of type '<type 'numpy.int8'>'
            with 3 stored elements (1 diagonals) in DIAgonal format>

scipy.sparse.eye(m, n, k=0, dtype='d', format=None)
    eye(m, n) returns a sparse (m x n) matrix where the k-th diagonal is
    all ones and everything else is zeros.

scipy.sparse.kron(A, B, format=None)
    kronecker product of sparse matrices A and B

    Parameters
    A : sparse or dense matrix
        first matrix of the product
    B : sparse or dense matrix
        second matrix of the product
    format : string
        format of the result (e.g. "csr")

    Returns
    kronecker product in a sparse matrix format

    Examples
    >>> A = csr_matrix(array([[0,2],[5,0]]))
    >>> B = csr_matrix(array([[1,2],[3,4]]))
    >>> kron(A,B).todense()
    matrix([[ 0,  0,  2,  4],
            [ 0,  0,  6,  8],
            [ 5, 10,  0,  0],
            [15, 20,  0,  0]])

    >>> kron(A,[[1,2],[3,4]]).todense()
    matrix([[ 0,  0,  2,  4],
            [ 0,  0,  6,  8],
            [ 5, 10,  0,  0],
            [15, 20,  0,  0]])

scipy.sparse.kronsum(A, B, format=None)
    kronecker sum of sparse matrices A and B

    The Kronecker sum of two sparse matrices is the sum of two Kronecker
    products, kron(I_n, A) + kron(B, I_m), where A has shape (m,m), B has
    shape (n,n), and I_m and I_n are identity matrices of shape (m,m) and
    (n,n), respectively.

    Parameters
    A : square matrix
    B : square matrix
    format : string
        format of the result (e.g. "csr")

    Returns
    kronecker sum in a sparse matrix format
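The identity kron(I_n, A) + kron(B, I_m) can be verified on small inputs; the following sketch assumes SciPy is installed.

```python
import numpy as np
from scipy.sparse import csr_matrix, kron, kronsum, identity

A = csr_matrix(np.array([[0.0, 2.0], [3.0, 0.0]]))   # (m, m) with m = 2
B = csr_matrix(np.array([[1.0, 0.0], [0.0, 4.0]]))   # (n, n) with n = 2

lhs = kronsum(A, B).toarray()
# Spell out the defining identity; with m = n = 2 both identity
# factors are simply identity(2).
rhs = (kron(identity(2), A) + kron(B, identity(2))).toarray()
print(np.allclose(lhs, rhs))   # True
```

The Kronecker sum shows up naturally when building 2-D finite-difference operators from their 1-D counterparts.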

scipy.sparse.spdiags(data, diags, m, n, format=None)
    Return a sparse matrix from diagonals.

    Parameters
    data : array_like
        matrix diagonals stored row-wise
    diags : diagonals to set
        * k = 0  the main diagonal
        * k > 0  the k-th upper diagonal
        * k < 0  the k-th lower diagonal
    m, n : int
        shape of the result
    format : format of the result (e.g. "csr")
        By default (format=None) an appropriate sparse matrix format is
        returned. This choice is subject to change.

    See Also
    dia_matrix : the sparse DIAgonal format.

    Examples
    >>> data = array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
    >>> diags = array([0,-1,2])
    >>> spdiags(data, diags, 4, 4).todense()
    matrix([[1, 0, 3, 0],
            [1, 2, 0, 4],
            [0, 2, 3, 0],
            [0, 0, 3, 4]])

scipy.sparse.tril(A, k=0, format=None)
    Return the lower triangular portion of a matrix in sparse format

    Returns the elements on or below the k-th diagonal of the matrix A.
        * k = 0 corresponds to the main diagonal
        * k > 0 is above the main diagonal
        * k < 0 is below the main diagonal

    Parameters
    A : dense or sparse matrix
        Matrix whose lower triangular portion is desired.
    k : integer
        The top-most diagonal of the lower triangle.
    format : string
        Sparse format of the result, e.g. format='csr', etc.

    Returns
    L : sparse matrix
        Lower triangular portion of A in sparse format.

    See Also
    triu : upper triangle in sparse format

    Examples
    >>> from scipy.sparse import csr_matrix
    >>> A = csr_matrix( [[1,2,0,0,3],[4,5,0,6,7],[0,0,8,9,0]],
    ...                 dtype='int32' )
    >>> A.todense()
    matrix([[1, 2, 0, 0, 3],
            [4, 5, 0, 6, 7],
            [0, 0, 8, 9, 0]])
    >>> tril(A).todense()
    matrix([[1, 0, 0, 0, 0],
            [4, 5, 0, 0, 0],
            [0, 0, 8, 0, 0]])
    >>> tril(A).nnz
    4
    >>> tril(A, k=1).todense()
    matrix([[1, 2, 0, 0, 0],
            [4, 5, 0, 0, 0],
            [0, 0, 8, 9, 0]])
    >>> tril(A, k=-1).todense()
    matrix([[0, 0, 0, 0, 0],
            [4, 0, 0, 0, 0],
            [0, 0, 0, 0, 0]])
    >>> tril(A, format='csc')
    <3x5 sparse matrix of type '<type 'numpy.int32'>'
            with 4 stored elements in Compressed Sparse Column format>

scipy.sparse.triu(A, k=0, format=None)
    Return the upper triangular portion of a matrix in sparse format

    Returns the elements on or above the k-th diagonal of the matrix A.
        * k = 0 corresponds to the main diagonal
        * k > 0 is above the main diagonal
        * k < 0 is below the main diagonal

    Parameters
    A : dense or sparse matrix
        Matrix whose upper triangular portion is desired.
    k : integer
        The bottom-most diagonal of the upper triangle.
    format : string
        Sparse format of the result, e.g. format='csr', etc.

    Returns
    L : sparse matrix
        Upper triangular portion of A in sparse format.

    See Also
    tril : lower triangle in sparse format

    Examples
    >>> from scipy.sparse import csr_matrix
    >>> A = csr_matrix( [[1,2,0,0,3],[4,5,0,6,7],[0,0,8,9,0]],
    ...                 dtype='int32' )
    >>> A.todense()
    matrix([[1, 2, 0, 0, 3],
            [4, 5, 0, 6, 7],
            [0, 0, 8, 9, 0]])
    >>> triu(A).todense()
    matrix([[1, 2, 0, 0, 3],
            [0, 5, 0, 6, 7],
            [0, 0, 8, 9, 0]])
    >>> triu(A).nnz
    8
    >>> triu(A, k=1).todense()
    matrix([[0, 2, 0, 0, 3],
            [0, 0, 0, 6, 7],
            [0, 0, 0, 9, 0]])
    >>> triu(A, k=-1).todense()
    matrix([[1, 2, 0, 0, 3],
            [4, 5, 0, 6, 7],
            [0, 0, 8, 9, 0]])
    >>> triu(A, format='csc')
    <3x5 sparse matrix of type '<type 'numpy.int32'>'
            with 8 stored elements in Compressed Sparse Column format>

scipy.sparse.bmat(blocks, format=None, dtype=None)
    Build a sparse matrix from sparse sub-blocks

    Parameters
    blocks : grid of sparse matrices with compatible shapes
        an entry of None implies an all-zero matrix
    format : sparse format of the result (e.g. "csr")
        by default an appropriate sparse matrix format is returned. This
        choice is subject to change.

    Examples
    >>> from scipy.sparse import coo_matrix, bmat
    >>> A = coo_matrix([[1,2],[3,4]])
    >>> B = coo_matrix([[5],[6]])
    >>> C = coo_matrix([[7]])
    >>> bmat( [[A,B],[None,C]] ).todense()
    matrix([[1, 2, 5],
            [3, 4, 6],
            [0, 0, 7]])
    >>> bmat( [[A,None],[None,C]] ).todense()
    matrix([[1, 2, 0],
            [3, 4, 0],
            [0, 0, 7]])

scipy.sparse.hstack(blocks, format=None, dtype=None)
    Stack sparse matrices horizontally (column wise)

    Parameters
    blocks : sequence of sparse matrices with compatible shapes
    format : string
        sparse format of the result (e.g. "csr"); by default an
        appropriate sparse matrix format is returned. This choice is
        subject to change.

    See Also
    vstack : stack sparse matrices vertically (row wise)

    Examples
    >>> from scipy.sparse import coo_matrix, hstack
    >>> A = coo_matrix([[1,2],[3,4]])
    >>> B = coo_matrix([[5],[6]])
    >>> hstack( [A,B] ).todense()
    matrix([[1, 2, 5],
            [3, 4, 6]])

scipy.sparse.vstack(blocks, format=None, dtype=None)
    Stack sparse matrices vertically (row wise)

    Parameters
    blocks : sequence of sparse matrices with compatible shapes
    format : string
        sparse format of the result (e.g. "csr"); by default an
        appropriate sparse matrix format is returned. This choice is
        subject to change.

    See Also
    hstack : stack sparse matrices horizontally (column wise)

    Examples
    >>> from scipy.sparse import coo_matrix, vstack
    >>> A = coo_matrix([[1,2],[3,4]])
    >>> B = coo_matrix([[5,6]])
    >>> vstack( [A,B] ).todense()
    matrix([[1, 2],
            [3, 4],
            [5, 6]])

scipy.sparse.rand(m, n, density=0.01, format='coo', dtype=None)
    Generate a sparse matrix of the given shape and density with
    uniformly distributed values.

    Parameters
    m, n : int
        shape of the matrix

    density : real
        density of the generated matrix: density equal to one means a
        full matrix, density of 0 means a matrix with no non-zero items.
    format : str
        sparse matrix format.
    dtype : dtype
        type of the returned matrix values.

    Notes
    Only float types are supported for now.

Identifying sparse matrices:

issparse(x), isspmatrix(x), isspmatrix_csc(x), isspmatrix_csr(x),
isspmatrix_bsr(x), isspmatrix_lil(x), isspmatrix_dok(x),
isspmatrix_coo(x), isspmatrix_dia(x)

Graph algorithms:

cs_graph_components(x)    Determine connected components of a graph
                          stored as a compressed sparse row or column
                          matrix.
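A short sketch of rand together with the identification predicates above (it assumes SciPy is installed; the number of generated nonzeros follows from density * m * n):

```python
from scipy import sparse

A = sparse.rand(5, 5, density=0.2, format='csr')
print(sparse.issparse(A))         # True
print(sparse.isspmatrix_csr(A))   # True
print(sparse.isspmatrix_coo(A))   # False -- it was requested in CSR format
print(A.shape, A.nnz)             # at most density * 25 = 5 stored entries
```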

scipy.sparse.cs_graph_components(x)
    Determine connected components of a graph stored as a compressed
    sparse row or column matrix.

    For speed reasons, the symmetry of the matrix x is not checked. The
    matrix is converted to a CSR matrix unless it is already a CSR.

    Parameters
    x : ndarray-like (2 dimensions) or sparse matrix
        The adjacency matrix of the graph. Only the upper triangular
        part is used (including the diagonal).

    Returns
    n_comp : int
        The number of connected components.
    label : ndarray (ints, 1 dimension)
        The label array of each connected component (-2 is used to
        indicate empty rows in the matrix: 0 everywhere, including the
        diagonal). This array has the length of the number of nodes,
        i.e. one label for each node of the graph. Nodes having the same
        label belong to the same connected component.

    Notes
    The matrix is assumed to be symmetric and the upper triangular part
    of the matrix is used. A nonzero at index (i, j) means that node i is
    connected to node j by an edge. The number of rows/columns of the
    matrix thus corresponds to the number of nodes in the graph.

    Examples
    >>> from scipy.sparse import cs_graph_components
    >>> import numpy as np
    >>> D = np.eye(4)
    >>> D[0,1] = D[1,0] = 1
    >>> cs_graph_components(D)
    (3, array([0, 0, 1, 2]))
    >>> from scipy.sparse import dok_matrix
    >>> cs_graph_components(dok_matrix(D))
    (3, array([0, 0, 1, 2]))

Exceptions

exception scipy.sparse.SparseEfficiencyWarning
exception scipy.sparse.SparseWarning

4.17.2 Usage information

There are seven available sparse matrix types:
    1. csc_matrix: Compressed Sparse Column format
    2. csr_matrix: Compressed Sparse Row format
    3. lil_matrix: List of Lists format
    4. bsr_matrix: Block Sparse Row format

5. dok_matrix: Dictionary of Keys format
6. coo_matrix: COOrdinate format (aka IJV, triplet format)
7. dia_matrix: DIAgonal format

To construct a matrix efficiently, use either lil_matrix (recommended) or
dok_matrix. The lil_matrix class supports basic slicing and fancy indexing
with a similar syntax to NumPy arrays. As illustrated below, the COO format
may also be used to efficiently construct matrices.

To perform manipulations such as multiplication or inversion, first convert
the matrix to either CSC or CSR format. The lil_matrix format is row-based,
so conversion to CSR is efficient, whereas conversion to CSC is less so.
All conversions among the CSR, CSC, and COO formats are efficient,
linear-time operations.

Example 1

Construct a 1000x1000 lil_matrix and add some values to it:

>>> from scipy.sparse import lil_matrix
>>> from scipy.sparse.linalg import spsolve
>>> from numpy.linalg import solve, norm
>>> from numpy.random import rand

>>> A = lil_matrix((1000, 1000))
>>> A[0, :100] = rand(100)
>>> A[1, 100:200] = A[0, :100]
>>> A.setdiag(rand(1000))

Now convert it to CSR format and solve A x = b for x:

>>> A = A.tocsr()
>>> b = rand(1000)
>>> x = spsolve(A, b)

Convert it to a dense matrix and solve, and check that the result is the same:

>>> x_ = solve(A.todense(), b)

Now we can compute norm of the error with:

>>> err = norm(x - x_)
>>> err < 1e-10
True

It should be small :)

Example 2

Construct a matrix in COO format:

>>> from scipy import sparse
>>> from numpy import array
>>> I = array([0,3,1,0])
>>> J = array([0,3,1,2])
>>> V = array([4,5,7,9])
>>> A = sparse.coo_matrix((V,(I,J)), shape=(4,4))

Notice that the indices do not need to be sorted.

Duplicate (i,j) entries are summed when converting to CSR or CSC:

>>> I = array([0,0,1,3,1,0,0])
>>> J = array([0,2,1,3,1,0,0])
>>> V = array([1,1,1,1,1,1,1])
>>> B = sparse.coo_matrix((V,(I,J)), shape=(4,4)).tocsr()

This is useful for constructing finite-element stiffness and mass matrices.

Further Details

CSR column indices are not necessarily sorted. Likewise for CSC row
indices. Use the .sorted_indices() and .sort_indices() methods when sorted
indices are required (e.g. when passing data to other libraries).

4.18 Sparse linear algebra (scipy.sparse.linalg)

4.18.1 Abstract linear operators

LinearOperator(shape, matvec[, rmatvec, matmat, dtype])
    Common interface for performing matrix vector products
aslinearoperator(A)
    Return A as a LinearOperator.

class scipy.sparse.linalg.LinearOperator(shape, matvec, rmatvec=None, matmat=None, dtype=None)
    Common interface for performing matrix vector products

    Many iterative methods (e.g. cg, gmres) do not need to know the
    individual entries of a matrix to solve a linear system A*x=b. Such
    solvers only require the computation of matrix vector products, A*v
    where v is a dense vector. This class serves as an abstract interface
    between iterative solvers and matrix-like objects.

    Parameters
    shape : tuple
        Matrix dimensions (M,N)
    matvec : callable f(v)
        Returns A * v.

    Other Parameters
    rmatvec : callable f(v)
        Returns A^H * v, where A^H is the conjugate transpose of A.
    matmat : callable f(V)
        Returns A * V, where V is a dense matrix with dimensions (N,K).
    dtype : dtype
        Data type of the matrix.
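Before the formal examples that follow, a short sketch may help make this interface concrete: a matrix-free operator built only from the shape and matvec arguments described above, handed to the cg solver documented later in this section. The diagonal test matrix is an arbitrary illustration, not taken from the original guide.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 5
d = np.arange(1, n + 1, dtype=float)  # diagonal entries 1..5

# Matrix-free representation of the SPD matrix diag(1..5): only a matvec
# is supplied, never the matrix itself. ravel() keeps the callable robust
# if the solver passes a (n,1)-shaped vector.
A = LinearOperator((n, n), matvec=lambda v: d * np.ravel(v), dtype=float)

b = np.ones(n)
x, info = cg(A, b)  # info == 0 signals successful convergence
```

The converged solution is simply 1/d here, since the operator is diagonal.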

    See Also
    aslinearoperator : Construct LinearOperators

    Notes
    The user-defined matvec() function must properly handle the case where
    v has shape (N,) as well as the (N,1) case. The shape of the return
    type is handled internally by LinearOperator.

    Examples
    >>> from scipy.sparse.linalg import LinearOperator
    >>> from scipy import *
    >>> def mv(v):
    ...     return array([2*v[0], 3*v[1]])
    ...
    >>> A = LinearOperator((2,2), matvec=mv)
    >>> A
    <2x2 LinearOperator with unspecified dtype>
    >>> A.matvec(ones(2))
    array([ 2.,  3.])
    >>> A * ones(2)
    array([ 2.,  3.])

    Methods
    matmat
    matvec

scipy.sparse.linalg.aslinearoperator(A)
    Return A as a LinearOperator.

    'A' may be any of the following types:
    - ndarray
    - matrix
    - sparse matrix (e.g. csr_matrix, lil_matrix, etc.)
    - LinearOperator
    - An object with .shape and .matvec attributes

    See the LinearOperator documentation for additional information.

    Examples
    >>> from scipy import matrix
    >>> M = matrix([[1,2,3],[4,5,6]], dtype='int32')
    >>> aslinearoperator(M)
    <2x3 LinearOperator with dtype=int32>
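aslinearoperator accepts any of the types listed above; as a brief sketch (the matrix values are arbitrary illustration data), wrapping a sparse matrix yields an operator whose matvec agrees with ordinary sparse matrix-vector multiplication:

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import aslinearoperator

S = csr_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))
L = aslinearoperator(S)  # wraps the sparse matrix unchanged

v = np.array([3.0, 4.0])
y = L.matvec(v)          # same result as multiplying by S directly
```

The wrapped operator keeps the shape of the underlying matrix, so it can be passed straight to the iterative solvers documented below.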

4.18.2 Solving linear problems

Direct methods for linear equation systems:

spsolve(A, b[, permc_spec, use_umfpack])
    Solve the sparse linear system Ax=b
factorized(A)
    Return a function for solving a sparse linear system, with A
    pre-factorized.

scipy.sparse.linalg.spsolve(A, b, permc_spec=None, use_umfpack=True)
    Solve the sparse linear system Ax=b

scipy.sparse.linalg.factorized(A)
    Return a function for solving a sparse linear system, with A
    pre-factorized.

    Example:
        solve = factorized( A ) # Makes LU decomposition.
        x1 = solve( rhs1 ) # Uses the LU factors.
        x2 = solve( rhs2 ) # Uses again the LU factors.

Iterative methods for linear equation systems:

bicg(A, b[, x0, tol, maxiter, M, xtype, callback])
    Use BIConjugate Gradient iteration to solve A x = b
bicgstab(A, b[, x0, tol, maxiter, M, xtype, ...])
    Use BIConjugate Gradient STABilized iteration to solve A x = b
cg(A, b[, x0, tol, maxiter, M, xtype, callback])
    Use Conjugate Gradient iteration to solve A x = b
cgs(A, b[, x0, tol, maxiter, M, xtype, callback])
    Use Conjugate Gradient Squared iteration to solve A x = b
gmres(A, b[, x0, tol, restart, maxiter, M, ...])
    Use Generalized Minimal RESidual iteration to solve A x = b.
lgmres(A, b[, x0, tol, maxiter, M, ...])
    Solve a matrix equation using the LGMRES algorithm.
minres(A, b[, x0, shift, tol, maxiter, M, ...])
    Use MINimum RESidual iteration to solve Ax=b
qmr(A, b[, x0, tol, maxiter, M1, M2, ...])
    Use Quasi-Minimal Residual iteration to solve A x = b

scipy.sparse.linalg.bicg(A, b, x0=None, tol=1e-05, maxiter=None, M=None, xtype=None, callback=None)
    Use BIConjugate Gradient iteration to solve A x = b

    Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The real or complex N-by-N matrix of the linear system. It is
        required that the linear operator can produce Ax and A^T x.
    b : {array, matrix}
        Right hand side of the linear system. Has shape (N,) or (N,1).

    Returns
    x : {array, matrix}
        The converged solution.
    info : integer
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution.
    tol : float
        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    maxiter : integer
tol : ﬂoat Tolerance to achieve.sparse.]) Use BIConjugate Gradient iteration to solve A x = b Use BIConjugate Gradient STABilized iteration to solve A x = b Use Conjugate Gradient iteration to solve A x = b Use Conjugate Gradient Squared iteration to solve A x = b Use Generalized Minimal RESidual iteration to solve A x = b.bicg(A. M1. x0. maxiter. M. .) or (N. tol.]) qmr(A. Returns x : {array. tol.18. b. xtype..]) minres(A. Has shape (N.spsolve(A. scipy.sparse. Iterative methods for linear equation systems: bicg(A.

which implies that fewer iterations are needed to reach a given error tolerance. matrix} Right hand side of the linear system. dense matrix.’F’. callback : function User-supplied function to call after each iteration. LinearOperator} Preconditioner for A. M : {sparse matrix. Returns x : {array.’d’.SciPy Reference Guide. This parameter has been superceeded by LinearOperator. If A does not have a typecode method then it will compute A.’D’} This parameter is deprecated – avoid using it. Iteration will stop after maxiter steps even if the speciﬁed tolerance has not been achieved. number of iterations <0 : illegal input or breakdown Other Parameters x0 : {array. matrix} Starting guess for the solution. maxiter=None.dtype. dense matrix. M=None.10. LinearOperator} The real or complex N-by-N matrix of the linear system A must represent a hermitian.bicgstab(A. scipy. x0=None. LinearOperator} 492 Chapter 4. positive deﬁnite matrix b : {array.1).’F’. xtype=None. callback=None) Use BIConjugate Gradient STABilized iteration to solve A x = b Parameters A : {sparse matrix. tol : ﬂoat Tolerance to achieve. If None. xtype : {‘f’.sparse. where xk is the current solution vector. Effective preconditioning dramatically improves the rate of convergence. then it will be determined from A.0rc1 Maximum number of iterations. Reference .char and b. tol=1. b. Iteration will stop after maxiter steps even if the speciﬁed tolerance has not been achieved. maxiter : integer Maximum number of iterations.0000000000000001e-05. info : integer Provides convergence information: 0 : successful exit >0 : convergence to tolerance not achieved. The preconditioner should approximate the inverse of A.’d’.) or (N. M : {sparse matrix. Release 0.linalg. To save the extra computation when A does not have a typecode attribute use xtype=0 for the same type as b or use xtype=’f’. dense matrix. The type of the result.matvec(x0) to get a typecode. Has shape (N. 
        The algorithm terminates when either the relative or the absolute
        residual is below tol.
    maxiter : integer
        Maximum number of iterations. Iteration will stop after maxiter
        steps even if the specified tolerance has not been achieved.
    M : {sparse matrix, dense matrix, LinearOperator}
        Preconditioner for A. The preconditioner should approximate the
        inverse of A. Effective preconditioning dramatically improves the
        rate of convergence, which implies that fewer iterations are needed
        to reach a given error tolerance.
    callback : function
        User-supplied function to call after each iteration. It is called
        as callback(xk), where xk is the current solution vector.
    xtype : {'f','d','F','D'}
        This parameter is deprecated -- avoid using it. The type of the
        result. If None, then it will be determined from A.dtype.char and
        b. If A does not have a typecode method then it will compute
        A.matvec(x0) to get a typecode. To save the extra computation when
        A does not have a typecode attribute use xtype=0 for the same type
        as b or use xtype='f','d','F' or 'D'. This parameter has been
        superseded by LinearOperator.
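The factorized example above is schematic (rhs1 and rhs2 are placeholders). As a hedged, runnable sketch, the direct and iterative interfaces can be compared on a small tridiagonal system with arbitrary illustrative values; note that factorized expects CSC input:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import factorized, bicgstab

# A small well-conditioned symmetric tridiagonal system (arbitrary data).
n = 50
main = 4.0 * np.ones(n)
off = -1.0 * np.ones(n - 1)
A = csc_matrix(np.diag(main) + np.diag(off, 1) + np.diag(off, -1))
b = np.ones(n)

# Direct route: factor once, then reuse the LU factors for many
# right-hand sides.
solve = factorized(A)   # factorized() wants a CSC matrix
x_direct = solve(b)

# Iterative route: bicgstab returns (solution, info); info == 0 means
# the iteration converged.
x_iter, info = bicgstab(A, b)
```

Both routes should agree to within the iterative tolerance; the direct solution satisfies A x = b essentially to machine precision.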

scipy.sparse.linalg.cg(A, b, x0=None, tol=1e-05, maxiter=None, M=None, xtype=None, callback=None)
    Use Conjugate Gradient iteration to solve A x = b

    Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The real or complex N-by-N matrix of the linear system. A must
        represent a hermitian, positive definite matrix.
    b : {array, matrix}
        Right hand side of the linear system. Has shape (N,) or (N,1).

    Returns
    x : {array, matrix}
        The converged solution.
    info : integer
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution.
    tol : float
        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    maxiter : integer
        Maximum number of iterations. Iteration will stop after maxiter
        steps even if the specified tolerance has not been achieved.
Iteration will stop after maxiter steps even if the speciﬁed tolerance has not been achieved. xtype=None.’F’.linalg) 493 .18. matrix} The converged solution. M : {sparse matrix.’d’.char and b. Has shape (N. Effective preconditioning dramatically improves the rate of convergence.

M : {sparse matrix.’D’} This parameter is deprecated – avoid using it. LinearOperator} Preconditioner for A. Effective preconditioning dramatically improves the rate of convergence. Returns x : {array. scipy. maxiter : integer Maximum number of iterations.char and b. then it will be determined from A.0rc1 User-supplied function to call after each iteration.cgs(A.’F’. LinearOperator} The real-valued N-by-N matrix of the linear system b : {array.) or (N.or ‘D’. If None. tol : ﬂoat Tolerance to achieve.linalg. xtype=None. M=None. To save the extra computation when A does not have a typecode attribute use xtype=0 for the same type as b or use xtype=’f’.SciPy Reference Guide. which implies that fewer iterations are needed to reach a given error tolerance. callback=None) Use Conjugate Gradient Squared iteration to solve A x = b Parameters A : {sparse matrix.’F’. x0=None.sparse.1). The algorithm terminates when either the relative or the absolute residual is below tol. dense matrix. where xk is the current solution vector. info : integer Provides convergence information: 0 : successful exit >0 : convergence to tolerance not achieved.’d’. number of iterations <0 : illegal input or breakdown Other Parameters x0 : {array. dense matrix.’d’.’D’} maxiter=None. callback : function User-supplied function to call after each iteration. This parameter has been superceeded by LinearOperator. The preconditioner should approximate the inverse of A. xtype : {‘f’. If A does not have a typecode method then it will compute A. It is called as callback(xk). The type of the result. tol=1. xtype : {‘f’. Reference . matrix} Right hand side of the linear system. It is called as callback(xk).10. where xk is the current solution vector.0000000000000001e-05.matvec(x0) to get a typecode. matrix} The converged solution. Release 0. b.’d’. 494 Chapter 4. Has shape (N. matrix} Starting guess for the solution. 
        Iteration will stop after maxiter steps even if the specified
        tolerance has not been achieved.
    M : {sparse matrix, dense matrix, LinearOperator}
        Preconditioner for A. The preconditioner should approximate the
        inverse of A. Effective preconditioning dramatically improves the
        rate of convergence, which implies that fewer iterations are needed
        to reach a given error tolerance.
    callback : function
        User-supplied function to call after each iteration. It is called
        as callback(xk), where xk is the current solution vector.
    xtype : {'f','d','F','D'}
        This parameter is deprecated -- avoid using it. This parameter has
        been superseded by LinearOperator.

scipy.sparse.linalg.gmres(A, b, x0=None, tol=1e-05, restart=None, maxiter=None, xtype=None, M=None, callback=None, restrt=None)
    Use Generalized Minimal RESidual iteration to solve A x = b.

    Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The real or complex N-by-N matrix of the linear system.
    b : {array, matrix}
        Right hand side of the linear system. Has shape (N,) or (N,1).

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution (a vector of zeros by default).
    tol : float
        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    restart : int, optional
        Number of iterations between restarts. Larger values increase
        iteration cost, but may be necessary for convergence. Default is 20.
    maxiter : int, optional
        Maximum number of iterations. Iteration will stop after maxiter
        steps even if the specified tolerance has not been achieved.
    M : {sparse matrix, dense matrix, LinearOperator}
        Inverse of the preconditioner of A. M should approximate the
        inverse of A and be easy to solve for (see Notes). Effective
        preconditioning dramatically improves the rate of convergence,
        which implies that fewer iterations are needed to reach a given
        error tolerance. By default, no preconditioner is used.

    callback : function
        User-supplied function to call after each iteration. It is called
        as callback(rk), where rk is the current residual vector.

    Returns
    x : array or matrix
        The converged solution.
    info : int
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    See Also
    LinearOperator

    Notes
    A preconditioner, P, is chosen such that P is close to A but easy to
    solve for. The preconditioner parameter required by this routine is
    M = P^-1. The inverse should preferably not be calculated explicitly.
    Rather, use the following template to produce M:
To save the extra computation when A does not have a typecode attribute use xtype=0 for the same type as b or use xtype=’f’.

linalg as spla M_x = lambda x: spla. tol : ﬂoat Tolerance to achieve. which implies that fewer iterations are needed to reach a given error tolerance. Returns x : array or matrix The converged solution. x0=None. It is called as callback(rk).LinearOperator((n. tol=1. n). b : {array. x) M = spla. It is called as callback(xk). outer_v=None. where rk is the current residual vector. callback=None. x0 : {array. Parameters A : {sparse matrix. import scipy. Release 0. See Also: LinearOperator Notes A preconditioner. use the following template to produce M: # Construct a linear operator that computes P^-1 * x. M=None. maxiter=1000.0000000000000001e-05. LinearOperator} Preconditioner for A.sparse.10. dense matrix. b. Has shape (N. dense matrix.0rc1 User-supplied function to call after each iteration.linalg. The inverse should preferably not be calculated explicitly. The preconditioner should approximate the inverse of A. matrix} Right hand side of the linear system. Iteration will stop after maxiter steps even if the speciﬁed tolerance has not been achieved. where xk is the current solution vector. inner_m=30.lgmres(A. LinearOperator} The real or complex N-by-N matrix of the linear system. M : {sparse matrix. outer_k=3. matrix} Starting guess for the solution.SciPy Reference Guide. Rather. The algorithm terminates when either the relative or the absolute residual is below tol. M_x) scipy.1). is chosen such that P is close to A but easy to solve for.) or (N.spsolve(P. maxiter : integer Maximum number of iterations. P. The LGMRES algorithm [BJM] [BPh] is designed to avoid some problems in the convergence in restarted GMRES. info : integer 496 Chapter 4. Reference . Effective preconditioning dramatically improves the rate of convergence.sparse. The preconditioner parameter required by this routine is M = P^-1. and often converges in fewer iterations. store_outer_Av=True) Solve a matrix equation using the LGMRES algorithm. 
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution.
    callback : function
        User-supplied function to call after each iteration. It is called
        as callback(xk), where xk is the current solution vector.

    tol : float
        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    maxiter : integer
        Maximum number of iterations. Iteration will stop after maxiter
        steps even if the specified tolerance has not been achieved.
    M : {sparse matrix, dense matrix, LinearOperator}
        Preconditioner for A. The preconditioner should approximate the
        inverse of A. Effective preconditioning dramatically improves the
        rate of convergence, which implies that fewer iterations are needed
        to reach a given error tolerance.

    Notes
    The LGMRES algorithm [BJM] [BPh] is designed to avoid the slowing of
    convergence in restarted GMRES, due to alternating residual vectors.
    Typically, it often outperforms GMRES(m) of comparable memory
    requirements by some measure, or at least is not much worse.

    Another advantage in this algorithm is that you can supply it with
    'guess' vectors in the outer_v argument that augment the Krylov
    subspace. If the solution lies close to the span of these vectors, the
    algorithm converges faster. This can be useful if several very similar
    matrices need to be inverted one after another, such as in
    Newton-Krylov iteration where the Jacobian matrix often changes little
    in the nonlinear steps.

    References
    [BJM], [BPh]

scipy.sparse.linalg.minres(A, b, x0=None, shift=0.0, tol=1e-05, maxiter=None, xtype=None, M=None, callback=None, show=False, check=False)
    Use MINimum RESidual iteration to solve Ax=b

    MINRES minimizes norm(A*x - b) for a real symmetric matrix A. Unlike
    the Conjugate Gradient method, A can be indefinite or singular. If
    shift != 0 then the method solves (A - shift*I)x = b.

    Parameters
    A : {sparse matrix, dense matrix, LinearOperator}
        The real symmetric N-by-N matrix of the linear system
    b : {array, matrix}
        Right hand side of the linear system. Has shape (N,) or (N,1).

    Returns
    x : {array, matrix}
        The converged solution.
    info : integer
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution.
    tol : float
        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    maxiter : integer
        Maximum number of iterations.
    M : {sparse matrix, dense matrix, LinearOperator}
Another advantage in this algorithm is that you can supply it with ‘guess’ vectors in the outer_v argument that augment the Krylov subspace. dense matrix. dense matrix. callback=None. matrix} The converged solution. M=None.

M1=None. pp.’F’. Returns x : {array. Release 0.’F’.10. where xk is the current solution vector. xtype=None.’d’.’d’. C. matrix} Starting guess for the solution.html This ﬁle is a translation of the following MATLAB implementation: http://www. Effective preconditioning dramatically improves the rate of convergence. To save the extra computation when A does not have a typecode attribute use xtype=0 for the same type as b or use xtype=’f’. 498 Chapter 4.SciPy Reference Guide. callback : function User-supplied function to call after each iteration. matrix} Right hand side of the linear system. Notes THIS FUNCTION IS EXPERIMENTAL AND SUBJECT TO CHANGE! References Solution of sparse indeﬁnite systems of linear equations.edu/group/SOL/software/minres. Saunders (1975). tol=1. C.matvec(x0) to get a typecode.0000000000000001e-05. A. tol : ﬂoat maxiter=None.linalg. which implies that fewer iterations are needed to reach a given error tolerance. info : integer Provides convergence information: 0 : successful exit >0 : convergence to tolerance not achieved. This parameter has been superceeded by LinearOperator. The type of the result.stanford.char and b.sparse. Has shape (N. 617-629. Paige and M. It is required that the linear operator can produce Ax and A^T x. The preconditioner should approximate the inverse of A.qmr(A. If None.or ‘D’. then it will be determined from A. xtype : {‘f’. b : {array.’D’} This parameter is deprecated – avoid using it.stanford.edu/group/SOL/software/minres/matlab/ scipy.) or (N.dtype.1). M2=None. If A does not have a typecode method then it will compute A. matrix} The converged solution. callback=None) Use Quasi-Minimal Residual iteration to solve A x = b Parameters A : {sparse matrix. Anal. b. x0=None. Reference . Numer.0rc1 Preconditioner for A. number of iterations <0 : illegal input or breakdown Other Parameters x0 : {array. SIAM J. It is called as callback(xk). LinearOperator} The real-valued N-by-N matrix of the linear system. http://www. 
    Returns
    x : {array, matrix}
        The converged solution.
    info : integer
        Provides convergence information:
            0 : successful exit
            >0 : convergence to tolerance not achieved, number of iterations
            <0 : illegal input or breakdown

    Other Parameters
    x0 : {array, matrix}
        Starting guess for the solution.
    tol : float


        Tolerance to achieve. The algorithm terminates when either the
        relative or the absolute residual is below tol.
    maxiter : integer
        Maximum number of iterations. Iteration will stop after maxiter
        steps even if the specified tolerance has not been achieved.
    M1 : {sparse matrix, dense matrix, LinearOperator}
        Left preconditioner f