Introduction¶
Spectral Galerkin¶
The spectral Galerkin method solves partial differential equations through a special form of the method of weighted residuals (WRM). As a Galerkin method it is very similar to the finite element method (FEM). The most distinguishing feature is that it uses global shape functions, where FEM uses local ones. This feature leads to highly accurate results with very few shape functions, but the downside is much less flexibility with respect to the computational domain than FEM.
Consider the Poisson equation with a right-hand side function \(f(\boldsymbol{x})\)
\[ \nabla^2 u(\boldsymbol{x}) = f(\boldsymbol{x}) \quad \text{for } \boldsymbol{x} \in \Omega. \tag{1} \]
To solve this equation, we will eventually need to supplement it with appropriate boundary conditions. However, for now, simply assume that any valid boundary conditions (Dirichlet, Neumann, periodic) apply.
With the method of weighted residuals we attempt to find an approximation \(u_N\) to the solution \(u(\boldsymbol{x})\)
\[ u(\boldsymbol{x}) \approx u_N(\boldsymbol{x}) = \sum_{k=0}^{N-1} \hat{u}_k \phi_k(\boldsymbol{x}). \tag{2} \]
Here the \(N\) expansion coefficients \(\{\hat{u}_k\}\) are unknown and \(\{\phi_k\}\) is a set of basis (or trial) functions. The basis functions span the discrete (finite-dimensional) function space
\[ V_N = \text{span}\{\phi_k\}_{k=0}^{N-1}, \]
and by (2) we express that \(u \approx u_N \in V_N\).
Inserting \(u_N\) for \(u\) in Eq. (1) we get a residual
\[ R_N(\boldsymbol{x}) = \nabla^2 u_N(\boldsymbol{x}) - f(\boldsymbol{x}) \neq 0. \]
With the WRM we now force this residual to zero in an average sense, using a test function \(v(\boldsymbol{x})\) and a weight function \(w(\boldsymbol{x})\)
\[ \left( R_N, v \right)_w := \int_{\Omega} R_N \, \overline{v} \, w \, d\Omega = 0, \]
where \(\overline{v}\) is the complex conjugate of \(v\). If we now choose the test functions from the same space as the trial functions, i.e., \(V_N\), then the WRM becomes the Galerkin method, and we get \(N\) equations for the \(N\) unknowns \(\{\hat{u}_j\}\)
\[ \sum_{j=0}^{N-1} \left( \nabla^2 \phi_j, \phi_k \right)_w \hat{u}_j = \left( f, \phi_k \right)_w, \quad k = 0, 1, \ldots, N-1. \]
Note that this is a regular linear system of algebraic equations
\[ A \boldsymbol{\hat{u}} = \boldsymbol{\tilde{f}}, \]
to be solved for the column vector \(\boldsymbol{\hat{u}} = (\hat{u}_0, \hat{u}_1, \ldots, \hat{u}_{N-1})^T \in \mathbb{R}^N\),
where \(A=(a_{kj}) \in \mathbb{R}^{N \times N}\), with \(a_{kj} = \left( \nabla^2 \phi_j, \phi_k \right)_w\), is a matrix, and \(\boldsymbol{\tilde{f}} = (\tilde{f}_0, \tilde{f}_1, \ldots, \tilde{f}_{N-1})^T \in \mathbb{R}^N\), with \(\tilde{f}_k = \left( f, \phi_k \right)_w\), is a column vector.
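To make the linear algebra concrete, here is a small NumPy sketch (an illustration, not shenfun's implementation) that assembles and solves this system for the 1D Poisson equation \(u''=f\) on \([-1, 1]\) with homogeneous Dirichlet conditions, using Shen's Legendre basis \(\phi_k = L_k - L_{k+2}\) discussed below. With this basis and a constant weight, integration by parts gives the diagonal stiffness matrix \(a_{kj} = (\phi_j'', \phi_k) = -(4k+6)\delta_{kj}\); the solution and quadrature resolution used here are arbitrary choices for the example.

```python
import numpy as np
from numpy.polynomial.legendre import Legendre, leggauss

N = 32                                    # number of basis functions
xq, wq = leggauss(N + 8)                  # Gauss-Legendre quadrature (weight w(x) = 1)
phi = [Legendre.basis(k) - Legendre.basis(k + 2) for k in range(N)]

# Stiffness matrix a_{kj} = (phi_j'', phi_k) = -(4k+6) delta_{kj} for Shen's basis
A = np.diag([-(4.0 * k + 6.0) for k in range(N)])

# Manufactured solution u(x) = sin(pi x), which satisfies u(+-1) = 0, so f = -pi^2 sin(pi x)
f = -np.pi**2 * np.sin(np.pi * xq)
ftilde = np.array([np.sum(f * p(xq) * wq) for p in phi])   # (f, phi_k) by quadrature

uhat = np.linalg.solve(A, ftilde)                          # solve A uhat = ftilde
uN = sum(uh * p(xq) for uh, p in zip(uhat, phi))           # evaluate u_N on the grid
error = np.max(np.abs(uN - np.sin(np.pi * xq)))
```

Since the basis functions satisfy the boundary conditions individually, no boundary rows need to be imposed on the matrix, and the error decays spectrally fast with \(N\).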
The choice of basis functions, i.e., of the function space \(V_N\), is central to the method. For the Galerkin method to be spectral, the basis is usually chosen as linear combinations of Chebyshev, Legendre, Laguerre, Hermite, Jacobi or trigonometric functions. In one spatial dimension typical choices for \(\phi_k\) are
\[ \begin{align*} \phi_k(x) &= T_k(x), \\ \phi_k(x) &= T_k(x) - T_{k+2}(x), \\ \phi_k(x) &= L_k(x), \\ \phi_k(x) &= L_k(x) - L_{k+2}(x), \\ \phi_k(x) &= \exp(\imath k x), \end{align*} \]
where \(T_k\) and \(L_k\) are the \(k\)'th Chebyshev polynomial of the first kind and the \(k\)'th Legendre polynomial, respectively. Note that the second and fourth functions above satisfy the homogeneous Dirichlet boundary conditions \(\phi_k(\pm 1) = 0\), and as such these basis functions may be used to solve the Poisson equation (1) with homogeneous Dirichlet boundary conditions. Similarly, two basis functions that satisfy the homogeneous Neumann boundary conditions \(u'(\pm 1)=0\) are
\[ \begin{align*} \phi_k(x) &= T_k(x) - \left(\frac{k}{k+2}\right)^2 T_{k+2}(x), \\ \phi_k(x) &= L_k(x) - \frac{k(k+1)}{(k+2)(k+3)} L_{k+2}(x). \end{align*} \]
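That combinations like \(T_k - (k/(k+2))^2 T_{k+2}\) and \(L_k - \frac{k(k+1)}{(k+2)(k+3)} L_{k+2}\), the standard Shen-type Neumann bases, really satisfy \(\phi_k'(\pm 1) = 0\) can be verified numerically with a few lines of NumPy (an illustration, not shenfun code):

```python
import numpy as np
from numpy.polynomial.chebyshev import Chebyshev
from numpy.polynomial.legendre import Legendre

def cheb_neumann(k):
    # phi_k = T_k - (k/(k+2))^2 T_{k+2}
    return Chebyshev.basis(k) - (k / (k + 2))**2 * Chebyshev.basis(k + 2)

def leg_neumann(k):
    # phi_k = L_k - k(k+1)/((k+2)(k+3)) L_{k+2}
    return Legendre.basis(k) - k * (k + 1) / ((k + 2) * (k + 3)) * Legendre.basis(k + 2)

# Maximum |phi_k'(x)| at the endpoints x = -1, 1 over the first few basis functions
maxslope = max(abs(float(p.deriv()(x)))
               for k in range(8)
               for p in (cheb_neumann(k), leg_neumann(k))
               for x in (-1.0, 1.0))
```

The same check with the Dirichlet combinations and \(\phi_k(\pm 1)\) instead of \(\phi_k'(\pm 1)\) verifies the Dirichlet bases.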
Shenfun contains classes for working with several such bases, to be used for different equations and boundary conditions. More precisely, for the problem at hand the user chooses a function space \(V_N\). Associated with the function space is a domain (e.g., \([-1, 1]\)) and a weighted inner product. The weights \(w(x)\) are chosen under the hood, specifically for each basis. For example, Chebyshev functions use the weight \(1/\sqrt{1-x^2}\), whereas Legendre and Fourier functions use a constant weight.
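The role of the weight is that it makes the basis orthogonal in the weighted inner product. A short NumPy illustration (not shenfun code) computes the Chebyshev mass matrix \(m_{jk} = (T_j, T_k)_w\) with the weight \(1/\sqrt{1-x^2}\) using Gauss-Chebyshev quadrature, which should come out diagonal with entries \(\pi, \pi/2, \pi/2, \ldots\):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebgauss, chebvander

n = 16
xq, wq = chebgauss(n)            # quadrature for the weight 1/sqrt(1 - x^2)
V = chebvander(xq, n - 1)        # V[i, k] = T_k(xq[i])

# Weighted mass matrix m_{jk} = (T_j, T_k)_w; orthogonality makes it diagonal
M = V.T @ (wq[:, None] * V)
```

With Legendre or Fourier bases the analogous computation uses a constant weight, and the corresponding mass matrices are likewise diagonal.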
Tensor products¶
If the physical problem at hand is two-dimensional, then we need to approximate two-dimensional functions like \(u(x, y)\). To this end we use two one-dimensional function spaces and combine them through tensor products in order to get a two-dimensional (tensor product) space. For example, if we choose the function spaces \(X_N\) and \(Y_M\), for the first and second dimension, respectively, then we can create a 2D tensor product space \(W_P\) as
\[ W_P = X_N \otimes Y_M, \]
where the dimension \(P=N \cdot M\) and \(\otimes\) represents a tensor product. See, e.g., this tensor product blog for a simple explanation of the tensor product.
Two generic bases for \(X_N\) and \(Y_M\) are
\[ X_N = \text{span}\{\mathcal{X}_j(x)\}_{j \in \mathcal{I}^N}, \quad Y_M = \text{span}\{\mathcal{Y}_k(y)\}_{k \in \mathcal{I}^M}, \]
where the index sets are \(\mathcal{I}^N=\{0, 1, \ldots, N-1\}\) and \(\mathcal{I}^M=\{0, 1, \ldots, M-1\}\), and \(\mathcal{X}_j(x)\) and \(\mathcal{Y}_k(y)\) are some appropriately chosen basis functions. Note that we are here using the \(y\)-coordinate for the \(Y_M\) basis, because this basis is used along the second axis of the tensor product space \(W_P\).
We may now also define the tensor product space \(W_P\) as
\[ W_P = \text{span}\{\mathcal{X}_j(x) \mathcal{Y}_k(y)\}_{(j, k) \in \mathcal{I}^N \times \mathcal{I}^M}, \]
where \(\times\) represents a Cartesian product of the index sets. A basis function for \(W_P\) may as such be written as
\[ \phi_{jk}(x, y) = \mathcal{X}_j(x) \mathcal{Y}_k(y), \]
and if we now want to approximate \(u(x, y) \approx u_N(x, y) \in W_P\), this means that we intend to find
\[ u_N(x, y) = \sum_{j \in \mathcal{I}^N} \sum_{k \in \mathcal{I}^M} \hat{u}_{jk} \, \mathcal{X}_j(x) \mathcal{Y}_k(y), \]
where the \(P=N \cdot M\) expansion coefficients \(\{\hat{u}_{jk}\}\) are the unknowns.
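A practical consequence of the tensor product structure is that evaluating \(u_N\) on a 2D grid factorizes into two one-dimensional operations (one matrix product per dimension). The NumPy sketch below (an illustration with arbitrary sizes, not shenfun code) uses Chebyshev polynomials in \(x\) and complex exponentials in \(y\); setting a single coefficient \(\hat{u}_{23}=1\) reproduces the single basis function \(T_2(x)\exp(3\imath y)\):

```python
import numpy as np
from numpy.polynomial.chebyshev import chebvander

N, M = 8, 8
x = np.cos(np.pi * (np.arange(N) + 0.5) / N)     # Chebyshev points in x
y = 2 * np.pi * np.arange(M) / M                 # uniform (periodic) points in y

Vx = chebvander(x, N - 1)                        # Vx[i, j] = T_j(x_i)
Vy = np.exp(1j * np.outer(y, np.arange(M)))      # Vy[i, k] = exp(i k y_i)

uhat = np.zeros((N, M), dtype=complex)
uhat[2, 3] = 1.0                                 # a single mode as a sanity check

# u_N(x_i, y_m) = sum_{j,k} uhat_{jk} X_j(x_i) Y_k(y_m), one matrix product per dimension
u = Vx @ uhat @ Vy.T
```

The same dimension-by-dimension factorization is what makes fast transforms possible in higher dimensions: each axis is transformed independently.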
As an example, assume now that we have a 2D Cartesian domain \(\Omega = [-1, 1] \times [0, 2 \pi]\), with homogeneous Dirichlet boundary conditions at \(x=\pm 1\) and a periodic solution in the \(y\)-direction. We may now choose the basis functions \(\mathcal{X}_j(x) = T_j(x)-T_{j+2}(x)\) and \(\mathcal{Y}_k(y) = \exp(\imath k y)\) such that these boundary conditions are satisfied exactly, and the 2D basis functions are the tensor products
\[ \phi_{jk}(x, y) = \left(T_j(x) - T_{j+2}(x)\right) \exp(\imath k y). \]
In other words, we choose one basis function per spatial dimension and create global basis functions by taking the outer products (or tensor products) of these individual one-dimensional functions. Since the global basis functions are simply tensor products of one-dimensional basis functions, it is trivial to move to even higher-dimensional spaces. The multi-dimensional basis functions then form a basis for a multi-dimensional tensor product space. The associated domains are similarly formed by taking Cartesian products of the one-dimensional domains.
The one-dimensional domains are discretized using the quadrature points of the chosen basis functions. If the meshes in the \(x\)- and \(y\)-directions are \(x = (x_i)_{i\in \mathcal{I}^N}\) and \(y = (y_j)_{j\in \mathcal{I}^M}\), then a Cartesian product mesh is \(x \times y\). With index and set builder notation it is given as
\[ x \times y = \{(x_i, y_j) \,|\, i \in \mathcal{I}^N \text{ and } j \in \mathcal{I}^M\}. \]
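In NumPy terms, such a Cartesian product mesh can be built from the two 1D meshes with `np.meshgrid`; the `sparse=True` form keeps only the broadcastable 1D factors, which mirrors the memory-saving representation typically used for tensor product grids (the mesh sizes below are arbitrary example values):

```python
import numpy as np

x = np.cos(np.pi * (np.arange(4) + 0.5) / 4)    # example quadrature points in x
y = 2 * np.pi * np.arange(3) / 3                # example uniform points in y

# Broadcastable factors of the Cartesian product mesh x times y
X, Y = np.meshgrid(x, y, indexing='ij', sparse=True)

# Explicit set-builder form {(x_i, y_j) | i in I^N, j in I^M}
mesh = [(xi, yj) for xi in x for yj in y]
```

Broadcasting `X + 0 * Y` (or any function of `X` and `Y`) expands the sparse factors to the full \(N \times M\) grid on demand.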
With shenfun a user chooses the appropriate function spaces (with associated bases) for each dimension of the problem, and may then combine these into tensor product spaces over Cartesian product domains. For the aforementioned domain, with Dirichlet conditions in the \(x\)- and periodicity in the \(y\)-direction, we need a Dirichlet space \(X_N\), a Fourier space \(Y_M\), and the tensor product space \(W = X_N \otimes Y_M\). This can be implemented in shenfun as follows:
from shenfun import comm, FunctionSpace, TensorProductSpace
N, M = (16, 16)
XN = FunctionSpace(N+2, 'Chebyshev', bc=(0, 0))
YM = FunctionSpace(M, 'Fourier', dtype='d')
W = TensorProductSpace(comm, (XN, YM))
Note that the Chebyshev space is created using \(N+2\) and not \(N\). The two boundary conditions bc=(0, 0) ensure that only \(N\) basis functions are used, or in other words, that the dimension of XN is \(N\).
The Fourier basis YM has been defined for real input data, which is ensured by the dtype keyword being set to 'd' for double. dtype specifies the data type that is input to the forward method, i.e., the data type of the solution in physical space. Setting dtype='D' indicates that this data type is complex. Note that it will not trigger an error, nor lead to wrong results, if dtype is by mistake set to 'D'; it is merely less efficient to work with complex data arrays where double precision reals are sufficient. See Sec. Getting started for more information on getting started with using bases.
Shenfun is parallelized with MPI through the mpi4py-fft package. If we store the current example in filename.py, then it can be run with more than one processor, e.g., like:
mpirun -np 4 python filename.py
In this case the tensor product space W will be distributed with the slab method (since the problem is 2D), and it can here use a maximum of 9 CPUs. The maximum is 9 since the last dimension is transformed from 16 real numbers to 9 complex numbers, using the Hermitian symmetry of real transforms, i.e., the shape of a transformed array in the W space will be (14, 9). You can read more about MPI in the later section MPI.
Tribute¶
Shenfun is named as a tribute to Prof. Jie Shen, as it contains many tools for working with his modified Chebyshev and Legendre bases, as described here:
Jie Shen, SIAM Journal on Scientific Computing, 15 (6), 1489-1505 (1994) (JS1)
Jie Shen, SIAM Journal on Scientific Computing, 16 (1), 74-87, (1995) (JS2)
Shenfun has implemented classes for the bases described in these papers, and within each class there are methods for fast transforms, inner products and for computing matrices arising from bilinear forms in the spectral Galerkin method.