Posts tagged ‘python’

Signed zeroes and complex literals

IEEE-754 floats have the concept of a “signed zero”: 0.0 has a different bit representation to -0.0. In most cases, -0.0 behaves the same way as 0.0, and the two compare equal. The distinction becomes more obvious in floating-point operations that involve some form of limiting behaviour. For example, x / 0.0 and x / -0.0 are opposite-signed infinities for any finite, non-zero x.
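A quick illustration from Python. Note that dividing a plain Python float by zero raises ZeroDivisionError rather than returning an infinity, so the division here uses NumPy (an extra dependency for this example), which follows the IEEE-754 behaviour and emits a RuntimeWarning instead:

>>> import math
>>> import numpy as np
>>> 0.0 == -0.0                  # the two zeros compare equal
True
>>> math.copysign(1.0, -0.0)     # but the sign is still recoverable
-1.0
>>> print(np.float64(1.0) / np.float64(0.0), np.float64(1.0) / np.float64(-0.0))
inf -inf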

Along with the other IEEE-754 special values, like quiet/signalling NaNs and the infinities, these sorts of behaviours prevent compilers from making certain mathematical rewrites that would appear to be completely sound in regular arithmetic. Over the reals, $-(a-b)$ can be rewritten as $b-a$, and despite floats having a symmetrical range of positive and negative values (unlike two’s-complement integers), this is an invalid transformation in IEEE-754 arithmetic regardless of whether a symmetric rounding mode is in effect. In every rounding mode, the problem occurs at $a = b$: subtracting a finite float from itself produces a zero whose sign does not depend on the order of the operands, so $a - b$ and $b - a$ are the same zero, and negating one of them flips its sign, making the two results distinguishable.
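A quick Python check of this, under the default round-to-nearest rounding:

>>> a = b = 1.5
>>> -(a - b), b - a
(-0.0, 0.0)
>>> -(a - b) == b - a           # they still compare equal...
True
>>> str(-(a - b)), str(b - a)   # ...but they are not the same float
('-0.0', '0.0')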

In most uses, this is largely a curiosity with little impact. It can be useful, though, when the only information needed from the result of a long calculation is its sign: even if the result underflows, beyond even the subnormal floats, the sign of the resulting zero still indicates the correct direction. The signed zero, then, can be thought of as representing the behaviour in the limit.
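For example, a product whose true value is far too small in magnitude for even the smallest subnormal still ends up as a zero of the correct sign:

>>> import math
>>> result = 1e-200 * -1e-200    # true value -1e-400 underflows completely
>>> result
-0.0
>>> math.copysign(1.0, result)
-1.0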

Where it becomes far more than a curiosity, with a major impact on results, is when the signed zero becomes involved in a calculation with a discontinuity at zero. For real numbers, the most obvious of these is atan2(y, x), which is the floating-point version of $\arctan(y/x)$ including the quadrant of the rotation. Since the float -0.0 is then interpreted as $\lim_{x\to0^-} x$ (the limit as $x$ approaches zero from below, by becoming less negative), there is a natural distinction between atan2(0.0, 0.0) and atan2(0.0, -0.0). Treating these as being calculated within the limit $y\to0^+$, it becomes sensible that atan2(0.0, 0.0) would be a zero rotation, while atan2(0.0, -0.0) approximates $\pi$.
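Python’s math.atan2 follows exactly this convention:

>>> import math
>>> math.atan2(0.0, 0.0)
0.0
>>> math.atan2(0.0, -0.0)
3.141592653589793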

Unusual effects of Python scoping

Python scoping can be deceptive, especially since it can appear fairly simple at first. Generally, name resolution starts from the innermost scope and proceeds outwards, until some outer scope either contains the name or we run out of scopes to search. This procedure is familiar to users of many programming languages. Newcomers from other languages are often surprised that control-flow constructs like for and with don’t open new scopes, so new variables defined within them are still accessible after the block has completed, but this is relatively easy to accept; Python often seems very permissive. This post presents a few corners of Python’s scoping rules that are more unusual than this, some of which continue to trip me up.
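For instance, because for does not open a new scope, its loop variable and anything assigned in its body survive the loop:

>>> for n in range(3):
...     last_square = n * n
...
>>> n, last_square
(2, 4)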

Mutating Python tuples by C FFI abuse

Tuples are as immutable as things get in Python. As a core builtin they get special treatment, so unlike user-defined classes there’s no private attribute that you can mutate if you really want to mess around. However, Python is an interpreted language with no baked-in access control, and (almost) everything really is mutable if you try hard enough.

All Python objects at the C level can be addressed by a pointer of type PyObject *, which (for most regular CPython builds) has the elements

struct _object {
    Py_ssize_t ob_refcnt;
    PyTypeObject *ob_type;
};

These fields do what they say on the tin. The different types of C object then have further fields after these, to specialise them.
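As a quick sanity check of that layout (CPython-specific, and purely illustrative), ctypes can read the reference count straight out of the first field; it is one less than sys.getrefcount reports, because getrefcount’s own argument holds a temporary reference:

>>> import sys
>>> from ctypes import cast, c_ssize_t, POINTER
>>> obj = object()
>>> cast(id(obj), POINTER(c_ssize_t))[0]
1
>>> sys.getrefcount(obj)
2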

Tuples are a type of “variable-length” object, in the sense that the same C struct definition is used for all sizes. With the head components inlined, this looks like

struct _tuple {
    Py_ssize_t ob_refcnt;
    PyTypeObject *ob_type;
    Py_ssize_t ob_size;
    PyObject *ob_item[1];
};

The first two fields are the reference count and the Python type object, as before; the next is the number of elements in the tuple, and the last is an array of (pointers to) Python objects. The array is declared with length one, but Python guarantees that enough space will actually have been allocated to hold ob_size elements. The items in the tuple are stored in this array.

From Python space, tuple.__setitem__ raises a TypeError, but in principle at the C level, there is no such immutability. As of Python 3.11, the tupleobject.h header file even has a comment explaining how tuple can be used as a temporary array, and there is a PyTuple_SetItem function implementation, although this has slightly different reference-counting semantics to normal.

We can use the Python C FFI ctypes, and the CPython implementation detail that id returns the memory address of the Python object, to cheekily define our own Python-space tuple_setitem. The PyObject type in ctypes doesn’t expose the internal fields of the object, but instead we can change the items we want by casting the integer output of id to a pointer to size_t (assuming, as on most systems, that pointers have the same size as size_t), and manually assigning into the ob_item array. To be better citizens of Python (rampant pointer abuse aside), we should also increment the reference count of the object we are inserting into the tuple, and decrement the count of the object we are removing. This leaves us with the Python-space function

from ctypes import cast, c_size_t, c_ssize_t, POINTER
from typing import Any

def tuple_setitem(tup: tuple, index: int, new: Any):
    if -len(tup) <= index < 0:
        index = len(tup) + index
    if not 0 <= index < len(tup):
        raise IndexError("tuple index out of range")

    # Decrement the old item's refcount.
    old = tup[index]
    cast(id(old), POINTER(c_ssize_t))[0] -= 1

    # Increment the new item's refcount.
    cast(id(new), POINTER(c_ssize_t))[0] += 1

    # Replace the item.
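    # ob_refcnt, ob_type and ob_size fill the first three size_t-sized
    # slots of the tuple struct, so ob_item starts at index 3.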
    cast(id(tup), POINTER(c_size_t))[3 + index] = id(new)

Now let’s try it out:
>>> tup = (1, 2, 3)
>>> tuple_setitem(tup, 0, 7)
>>> tup
(7, 2, 3)

Generalised Fourier Series, Part 3: Comparing Series Expansions

In the second part of this series on series expansions (introduction), we found a way to make a series expansion using any orthogonal basis of functions. This is how the standard Fourier series is defined, and then we also used it to make a series expansion out of the Legendre polynomials. Now we’re going to compare how these behave with different numbers of terms.

This is finally the fun part of the seminar! My second-year physics students should generally have already met the material of the previous two articles in their first-year courses, so they pretty much started here. To interact with these series expansions, you should load up my Jupyter notebook on either Colab or myBinder, and try plotting your own functions to see how the expansions behave.

In reality we don’t use the infinite form of series expansions; we only use the first few terms to get a good approximation of a function. The notebook is for investigating how different series expansions behave with comparably few terms, and which would be most appropriate in a given situation. The series there are the standard Fourier series, the Fourier–Legendre series, and the Taylor series about $x = 0$ (which is formed by a different method to the other two). If you need a reminder of what these series look like, the Taylor and Fourier series were given in the introduction, and the Legendre series was derived in part two.
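For reference, the Fourier–Legendre coefficients can be computed numerically along the lines of the sketch below. This is not the notebook’s implementation, just a minimal illustration (the function names here are made up) using NumPy and SciPy, and it assumes the target function is defined on the Legendre polynomials’ natural interval $[-1, 1]$.

from numpy.polynomial import legendre
from scipy.integrate import quad

def fourier_legendre_coefficients(f, n_terms):
    """Project f onto the first n_terms Legendre polynomials on [-1, 1]."""
    coefficients = []
    for n in range(n_terms):
        # Coefficient vector that selects the single polynomial P_n.
        select_n = [0] * n + [1]
        integral, _ = quad(lambda x: f(x) * legendre.legval(x, select_n), -1, 1)
        # c_n = (2n + 1)/2 <f, P_n>, because <P_n, P_n> = 2/(2n + 1).
        coefficients.append((2*n + 1) / 2 * integral)
    return coefficients

def fourier_legendre_series(f, n_terms):
    """Return a callable evaluating the truncated Fourier-Legendre series of f."""
    coefficients = fourier_legendre_coefficients(f, n_terms)
    return lambda x: legendre.legval(x, coefficients)

For example, fourier_legendre_series(abs, 6)(0.5) evaluates a low-order Legendre approximation of $|x|$ at $x = 0.5$.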

I asked the students to consider a couple of the following questions, and discuss them in groups of around five:

  • Which series would you call the “most accurate”? Why? Does it depend on the function?
  • What sorts of functions are the different expansions best at approximating? Which are they bad at?
  • The Legendre series often seems to “give up” in the middle of some shapes at low orders (e.g. sinusoids). Why is this, and are there any things the series is still useful for?
  • The Taylor series almost invariably has the largest pointwise error. Why is this? What is the Taylor series useful for?

Generalised Fourier Series

In all my time as a PhD student I have been involved with teaching undergraduate and postgraduate physics students at Imperial College London (they even gave me a prize in 2018!) as a teaching assistant in classroom-style tutorials and seminars. For my final year, though, I’ve also moved up into helping write the teaching materials, particularly for the differential equations part of the second-year course.

The first differential equations seminar we had was showing off some of the properties and uses of the Legendre polynomials, which the students had just met by solving Legendre’s equation. My part of the seminar was illustrating how any orthogonal basis of functions can be used to make a series expansion, and then getting the students to investigate how different types of functional expansion behave at different orders. If you just want to play with this, load up the Jupyter notebook on Colab or myBinder. The source code for this notebook is available on GitHub. You can see an example plot of these below.

Series approximations of a twelfth-order polynomial using many terms in the expansion.

The end result we’ll achieve mathematically is an abstract way of making series expansions. In general, a series expansion approximates a function $f(x)$ by using a (possibly infinite) sequence of terms, where each term is a constant multiplied by some basis function. Different series expansions use different bases and different methods of determining the coefficients.
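In symbols, with basis functions $\phi_n(x)$ and constant coefficients $c_n$ (the names here are just for illustration),

$$f(x) \approx \sum_{n} c_n \, \phi_n(x).$$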

Perhaps the most familiar example is the Taylor series. The Taylor series $T_f(x)$ that approximates a function $f(x)$ around some point $x_0$ is
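$$T_f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(x_0)}{n!} \, (x - x_0)^n.$$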

QuTiP data layer and the end of Google Summer of Code 2020

This post is the final permalink for the work I’ve done for QuTiP in the Google Summer of Code 2020. My mentors have been Alex Pitchford, Eric Giguère and Nathan Shammah, and QuTiP is under the numFOCUS umbrella.

The main aim of the project was to make Qobj, the primary data type in QuTiP, able to use both sparse and dense representations of matrices and have them interoperate seamlessly. This was a huge undertaking that had far-reaching implications all across the library, but we have now succeeded. There is still plenty of work to be done on additional development documentation and on sanding off the rough edges to improve the user experience, but we are moving towards a public beta of a major version update next year.

Overview of the new QuTiP data layer

This post is partial documentation for the implementation of the data-layer that I wrote in the last week or so as part of Google Summer of Code with QuTiP. I may return to talk a bit more about how various algorithms are achieved internally, but for now, this is some indication of what I’ve been working on.

I had previously replaced the old fast_csr_matrix type with the new, custom CSR type as the data backing for Qobj and all internal QuTiP data representations. This produced speed-ups in some places due to improved algorithms and better cache usage, but its principal advantage was the massive reduction in overhead for function calls between Python and C space, which largely affected small objects.

The full aim, however, is to have QuTiP 5 support many different data representations as the backing of Qobj, and use the most suitable representation for the given data. This will not require every single QuTiP function to have an exponential number of versions for every possible combination of inputs, but only to have specialisations for the most common data combinations. This concept is the “data layer”.
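The concept can be illustrated with a deliberately tiny sketch. This is not QuTiP’s actual code (the class and function names are made up for the illustration); it just shows the shape of the idea: a table of specialisations keyed on the concrete input types, with a conversion fallback so that not every combination needs its own implementation.

import numpy as np

class Dense:
    def __init__(self, array):
        self.array = np.asarray(array, dtype=complex)

class CSR:
    # Toy stand-in: real sparse storage is omitted; we just remember an array.
    def __init__(self, array):
        self.array = np.asarray(array, dtype=complex)

def add_dense_dense(left, right):
    return Dense(left.array + right.array)

def csr_to_dense(matrix):
    return Dense(matrix.array)

# Specialisations of "add", keyed on the concrete types of the two operands.
specialisations = {
    (Dense, Dense): add_dense_dense,
}

def add(left, right):
    """Dispatch on the input types, converting when no specialisation exists."""
    handler = specialisations.get((type(left), type(right)))
    if handler is not None:
        return handler(left, right)
    # No direct specialisation: fall back to a common representation.
    if not isinstance(left, Dense):
        left = csr_to_dense(left)
    if not isinstance(right, Dense):
        right = csr_to_dense(right)
    return add_dense_dense(left, right)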

All code examples in this post are prefixed with

>>> from qutip.core import data

Contiguous memory and finding segfaults in Cython

One of the main advantages of using Cython in the algebraic core of scientific code is fast, non-bounds-checked access to contiguous memory. Of course, when you elide safety-checking, it’s not surprising when you start to get segfaults due to out-of-bounds access and use-after-free bugs in your Cython code.

This post talks a little about why we chose to use raw pointers to access our memory in the upcoming major version of QuTiP rather than other possible methods, and how I tracked down a couple of memory-safety bugs in my code which would have been caught had we been using safer alternatives.

Design draft: improving tensor-product dimensions in QuTiP

Qobj instantiation and mathematical operations have a large overhead, mostly because of handling the dims parameter in tensor-product spaces. I’m proposing one possible way to speed this up, while also gaining some additional safety and knowledge about mathematical operations on tensor-product spaces.

The steps:

  1. rigorously define the “grammar” of dims, and allow all of dimensions.py to assume that this grammar is followed, to speed up parsing
  2. maintain a private data structure type dimensions._Parsed inside Qobj which is constructed once, and keeps all details of the parsing so they need not be repeated. Determine Qobj.type from this data structure
  3. maintain knowledge of the individual type of every subspace in the full Hilbert space (e.g. with a list). There is still a “global” Qobj.type, but this can now be one of the set {'bra', 'ket', 'oper', 'scalar', 'super', 'other'}. 'other' is for when the individual elements do not all match each other; individual elements cannot be 'other'. 'scalar' is added so that operations can keep track of tensor elements which have been contracted, say by a bra–ket product; operations will then broadcast scalar up to the correct dimensions where appropriate.
  4. dimension parsing is now sped up by using the operation-specific type knowledge, for example bra + bra -> bra, and ket.dag() -> bra. Step 3 is necessary to allow matrix multiplication to work. These lookups could be done with enum values instead of string hashing, as in the sketch below.
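A rough sketch of that enum-based lookup (the names here are hypothetical, not the eventual QuTiP implementation, and only a few type combinations are filled in):

import enum

class ObjType(enum.Enum):
    BRA = enum.auto()
    KET = enum.auto()
    OPER = enum.auto()
    SCALAR = enum.auto()

# Result type of addition: only like types combine.
ADD_RESULT = {
    (ObjType.BRA, ObjType.BRA): ObjType.BRA,
    (ObjType.KET, ObjType.KET): ObjType.KET,
    (ObjType.OPER, ObjType.OPER): ObjType.OPER,
}

# Result type of matrix multiplication, e.g. a bra-ket product contracts to a scalar.
MATMUL_RESULT = {
    (ObjType.BRA, ObjType.KET): ObjType.SCALAR,
    (ObjType.KET, ObjType.BRA): ObjType.OPER,
    (ObjType.OPER, ObjType.KET): ObjType.KET,
    (ObjType.BRA, ObjType.OPER): ObjType.BRA,
    (ObjType.OPER, ObjType.OPER): ObjType.OPER,
}

# Result type of the adjoint.
DAG_RESULT = {
    ObjType.BRA: ObjType.KET,
    ObjType.KET: ObjType.BRA,
    ObjType.OPER: ObjType.OPER,
    ObjType.SCALAR: ObjType.SCALAR,
}

def add_type(left, right):
    """Type of left + right without re-parsing dims; raise if they cannot be added."""
    try:
        return ADD_RESULT[left, right]
    except KeyError:
        raise TypeError(f"incompatible types for +: {left} and {right}") from None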

Note: this is part of a design discussion for the next major release of QuTiP. I originally wrote this on 2020-07-13, and any further discussion may be found at the corresponding GitHub issue.

Sphinx and the QuTiP Developers' Guide

Overhauling the internals of a mathematical library is no good if no other developers on the team know how to use the new systems you’ve put in place, and don’t know why you’ve made the choices you’ve made. In the last week I’ve been writing a new QuTiP developers’ guide to the new data layer that I’m creating as part of my Google Summer of Code project, which has involved learning a lot more about the Sphinx documentation tool, and a little bit of GitHub esoterica.

Currently we don’t have a complete plan for how this guide will be merged into the QuTiP documentation, and where exactly it will go, so for now it is hosted on my own GitHub repository. I have also put up a properly rendered version on a GitHub pages site linked to the repository.

My Sphinx conf.py file for this repository is not (at the time of commit 0edf49e) very exciting. Fortunately, Sphinx largely just works out-of-the-box as one would expect from a mature Python project. Perhaps the boldest part of that file is the intersphinx_mapping dictionary, which uses the intersphinx built-in to link to other projects’ documentation also built with Sphinx.

Right now, the intersphinx documentation is perhaps a little lacking, and sometimes seems to just involve some hope (and some disappointment). In particular, I have several external references set up as

intersphinx_mapping = {
    'qutip': ('http://qutip.org/docs/latest/', None),
    'python': ('https://docs.python.org/3', None),
    'numpy': ('https://numpy.org/doc/stable/', None),
    'scipy': ('https://docs.scipy.org/doc/scipy/reference/', None),
    'cython': ('https://cython.readthedocs.io/en/latest/', None),
}
older posts…