# Demo - Cubic nonlinear Klein-Gordon equation

- Authors
Mikael Mortensen (mikaem at math.uio.no)

- Date
Nov 14, 2019

*Summary.* This is a demonstration of how the Python module shenfun can be used to solve the time-dependent,
nonlinear Klein-Gordon equation in a triply periodic domain. The demo is implemented in
a single Python file KleinGordon.py, and it may be run
in parallel using MPI. The Klein-Gordon equation is solved using a mixed
formulation. The discretization and some background on the spectral Galerkin
method are given first, before we turn to the actual details of the `shenfun`
implementation.

## The nonlinear Klein-Gordon equation

## Implementation

To solve the Klein-Gordon equation we need to make use of the Fourier bases in
`shenfun`, and these bases are found in the submodule `shenfun.fourier.bases`.
The triply periodic domain allows for Fourier bases in all three directions, and we
can as such create one instance of this base class using `Basis()` with
family `Fourier` for each direction. However, since the initial data are real, we
can take advantage of Hermitian symmetry and thus make use of a
real-to-complex class for one (but only one) of the directions, by specifying
`dtype='d'`. We can only make use of the
real-to-complex class for the direction that we choose to transform first with the forward
FFT, because the output of a forward transform of real data is complex. We may
start implementing the solver as follows

```
from shenfun import *
from mpi4py import MPI
import numpy as np

# Set size of discretization
N = (32, 32, 32)

# Create bases ('D': complex-to-complex transform, 'd': real-to-complex)
K0 = Basis(N[0], 'F', domain=(-2*np.pi, 2*np.pi), dtype='D')
K1 = Basis(N[1], 'F', domain=(-2*np.pi, 2*np.pi), dtype='D')
K2 = Basis(N[2], 'F', domain=(-2*np.pi, 2*np.pi), dtype='d')
```
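The real-to-complex shortcut used for `K2` can be illustrated with plain NumPy, independent of shenfun: a real-to-complex FFT stores only the first `N//2+1` coefficients along the transformed axis, since the remaining half is determined by Hermitian symmetry. A minimal sketch (the array names here are illustrative only):

```python
import numpy as np

N = (32, 32, 32)
u = np.random.random(N)            # real-valued data

# Real-to-complex FFT along the last axis: only N//2+1 = 17
# coefficients are stored there, the rest follow from conjugate symmetry
u_hat = np.fft.rfftn(u)            # shape (32, 32, 17)

# The inverse transform recovers the original real field exactly
u_back = np.fft.irfftn(u_hat, s=N)
```

This is why only one direction can use `dtype='d'`: once the first transform has produced complex output, the remaining directions must use complex-to-complex transforms.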

We now have three instances `K0`, `K1` and `K2`, corresponding to the space
(13), that each can be used to solve
one-dimensional problems. However, we want to solve a 3D problem, and for this
we need a tensor product space, like (14), created as a tensor
product of these three spaces

```
# Create communicator
comm = MPI.COMM_WORLD
T = TensorProductSpace(comm, (K0, K1, K2),
                       **{'planner_effort': 'FFTW_MEASURE'})
```

Here the `planner_effort`, which is a flag used by FFTW, is optional. Possible
choices are (`FFTW_ESTIMATE`, `FFTW_MEASURE`, `FFTW_PATIENT`,
`FFTW_EXHAUSTIVE`), and the
flag determines how much effort FFTW puts into looking for an optimal algorithm
for the current platform. Note that it is also possible to use FFTW wisdom with
`shenfun`; as such, for production, one may perform exhaustive planning once
and then simply import the result of that planning later, as wisdom.

The `TensorProductSpace` instance `T` contains pretty much all we need for
computing inner products or fast transforms between real and wavenumber space.
However, since we are going to solve for a mixed system, it is convenient to also use the
`MixedTensorProductSpace` class

```
TT = MixedTensorProductSpace([T, T])
```

We need containers for the solution as well as intermediate work arrays for, e.g., the Runge-Kutta method. Arrays are created as

```
uf = Array(TT) # Solution array in physical space
u, f = uf # Split solution array by creating two views u and f
duf = Function(TT) # Array for right hand sides
du, df = duf # Split into views
uf_hat = Function(TT) # Solution in spectral space
uf_hat0 = Function(TT) # Work array 1
uf_hat1 = Function(TT) # Work array 2
u_hat, f_hat = uf_hat # Split into views
```

The `Array` class is a subclass of Numpy's ndarray,
without much more functionality than constructors that return arrays of the
correct shape according to the basis used in the construction. The
`Array` represents the left hand side of (16),
evaluated on the quadrature mesh. A different type
of array is returned by the `Function` class, which subclasses both Numpy's
ndarray and an internal `BasisFunction` class. An instance of the `Function`
represents the entire
spectral Galerkin function (16). As such, it can
be used in complex variational linear forms. For example, if you want
to compute the partial derivative \(\partial u/\partial x\), then this
may be achieved by projection, i.e., find \(u_x \in V^N\) such that
\((u_x-\partial u/\partial x, v) = 0\) for all \(v \in V^N\). This
projection is easily computed in `shenfun` using

```
ux = project(Dx(u_hat, 0, 1), T)
```
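For a pure Fourier basis, this projection reduces to multiplying each coefficient by \(ik\) in spectral space. A one-dimensional NumPy sketch of the same operation, independent of shenfun (names here are illustrative):

```python
import numpy as np

# Periodic function on [0, 2*pi) and its known derivative
N = 32
x = 2*np.pi*np.arange(N)/N
u = np.sin(x)

# Spectral differentiation: forward FFT, multiply by ik, inverse FFT
k = np.fft.fftfreq(N, d=1.0/N)    # integer wavenumbers 0, 1, ..., -1
u_hat = np.fft.fft(u)
ux = np.fft.ifft(1j*k*u_hat).real

# ux agrees with the analytic derivative cos(x) to machine precision
```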

The following code, on the other hand, will raise an error, since you cannot
take the derivative of an interpolated `Array` `u`, only of a `Function`

```
try:
    project(Dx(u, 0, 1), T)
except AssertionError:
    print("AssertionError: Dx not for Arrays")
```

### Initialization

The solution array `uf` and its transform `uf_hat` need to be initialized
according to Eq. (2). To this end it is convenient (but not required; we could
just as easily use Numpy for this) to use Sympy, which is a Python library for
symbolic mathematics.

```
from sympy import symbols, exp, lambdify
x, y, z = symbols("x,y,z")
ue = 0.1*exp(-(x**2 + y**2 + z**2))
ul = lambdify((x, y, z), ue, 'numpy')
X = T.local_mesh(True)
u[:] = Array(T, buffer=ul(*X))
u_hat = T.forward(u, u_hat)
```

Here `X` is a list of the three mesh coordinates `(x, y, z)` local to the
current processor. Each processor has its own part of the computational mesh,
and the distribution is handled during the creation of the
`TensorProductSpace` class instance `T`. There is no need
to do anything about the `f`/`f_hat` arrays, since they are already initialized
to zero by default. Note that calling the `ul` function with the argument `*X`
is the same as calling with `X[0], X[1], X[2]`.
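A serial NumPy analogue may make the layout clearer. This is not shenfun's actual mesh machinery, just an illustration of the broadcastable (sparse) mesh arrays and of the `*X` unpacking, with a hypothetical lambda standing in for `ul`:

```python
import numpy as np

# Sparse (broadcastable) mesh arrays, analogous to T.local_mesh(True)
N = (8, 8, 8)
x = np.linspace(-2*np.pi, 2*np.pi, N[0], endpoint=False)
y = np.linspace(-2*np.pi, 2*np.pi, N[1], endpoint=False)
z = np.linspace(-2*np.pi, 2*np.pi, N[2], endpoint=False)
X = np.meshgrid(x, y, z, indexing='ij', sparse=True)

# Hypothetical stand-in for the lambdified initial condition
ul = lambda x, y, z: 0.1*np.exp(-(x**2 + y**2 + z**2))

# ul(*X) unpacks the list, identical to ul(X[0], X[1], X[2]);
# broadcasting expands the sparse arrays to the full (8, 8, 8) mesh
u0 = ul(*X)
assert np.allclose(u0, ul(X[0], X[1], X[2]))
```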

### Runge-Kutta integrator

A fourth order explicit Runge-Kutta integrator requires only a function that returns the right hand sides of (26) and (27). Such a function can be implemented as

```
# focusing (+1) or defocusing (-1)
gamma = 1
uh = TrialFunction(T)
vh = TestFunction(T)
k2 = -(inner(grad(vh), grad(uh)).scale + gamma)

def compute_rhs(duf_hat, uf_hat, up, Tp, w0):
    duf_hat.fill(0)
    u_hat, f_hat = uf_hat
    du_hat, df_hat = duf_hat
    df_hat[:] = k2*u_hat                   # (1) linear term
    up = Tp.backward(u_hat, up)
    df_hat += Tp.forward(gamma*up**3, w0)  # (2) nonlinear term
    du_hat[:] = f_hat
    return duf_hat
```

The code is fairly self-explanatory. `k2` represents the coefficients in front of
the linear \(\hat{u}\) in (26). The output array is `duf_hat` and
the input array is `uf_hat`, whereas `up` and `w0` are work arrays. The array
`duf_hat` contains the right hand sides of both (26) and
(27), where the linear and nonlinear terms are recognized in the
code as comments `(1)` and `(2)`.
The array `uf_hat` contains the solution at initial and intermediate Runge-Kutta steps.

With a function that returns the right hand side in place, the actual integrator can be implemented as

```
w0 = Function(T)
a = [1./6., 1./3., 1./3., 1./6.]  # Runge-Kutta parameter
b = [0.5, 0.5, 1.]                # Runge-Kutta parameter
t = 0
dt = 0.01
end_time = 1.0
while t < end_time-1e-8:
    t += dt
    uf_hat1[:] = uf_hat0[:] = uf_hat
    for rk in range(4):
        duf = compute_rhs(duf, uf_hat, u, T, w0)
        if rk < 3:
            uf_hat[:] = uf_hat0 + b[rk]*dt*duf
        uf_hat1 += a[rk]*dt*duf
    uf_hat[:] = uf_hat1
```
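The loop above is a low-storage formulation of the classical fourth-order Runge-Kutta scheme. The same `a`/`b` coefficients can be checked on a scalar test problem; the ODE \(du/dt = -u\) below is a hypothetical example, not part of the demo:

```python
import numpy as np

def rk4_step(u, dt, rhs):
    """One step of the low-storage RK4 scheme used above, for a scalar."""
    a = [1./6., 1./3., 1./3., 1./6.]
    b = [0.5, 0.5, 1.]
    u1 = u0 = u
    for rk in range(4):
        du = rhs(u)
        if rk < 3:
            u = u0 + b[rk]*dt*du
        u1 = u1 + a[rk]*dt*du
    return u1

# Integrate du/dt = -u from u(0) = 1 to t = 1; exact solution is exp(-1)
u, t, dt = 1.0, 0.0, 0.01
while t < 1.0 - 1e-8:
    t += dt
    u = rk4_step(u, dt, lambda v: -v)

err = abs(u - np.exp(-1.0))
# err is on the order of dt**4, confirming fourth-order accuracy
```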

### Complete solver

A complete solver is given below, with intermediate plotting of the solution and intermediate computation of the total energy. Note that the total energy is unchanged to 8 decimal places at \(t=100\).

```
from sympy import symbols, exp, lambdify
import numpy as np
import matplotlib.pyplot as plt
from mpi4py import MPI
from time import time
from shenfun import *

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Use sympy to set up initial condition
x, y, z = symbols("x,y,z")
ue = 0.1*exp(-(x**2 + y**2 + z**2))
ul = lambdify((x, y, z), ue, 'numpy')

# Size of discretization
N = (64, 64, 64)

# Defocusing or focusing
gamma = 1

K0 = Basis(N[0], 'F', domain=(-2*np.pi, 2*np.pi), dtype='D')
K1 = Basis(N[1], 'F', domain=(-2*np.pi, 2*np.pi), dtype='D')
K2 = Basis(N[2], 'F', domain=(-2*np.pi, 2*np.pi), dtype='d')
T = TensorProductSpace(comm, (K0, K1, K2), slab=False,
                       **{'planner_effort': 'FFTW_MEASURE'})
TT = MixedTensorProductSpace([T, T])
X = T.local_mesh(True)
uf = Array(TT)
u, f = uf[:]
up = Array(T)
duf = Function(TT)
du, df = duf[:]
uf_hat = Function(TT)
uf_hat0 = Function(TT)
uf_hat1 = Function(TT)
w0 = Function(T)
u_hat, f_hat = uf_hat[:]

# initialize (f initialized to zero, so all set)
u[:] = ul(*X)
u_hat = T.forward(u, u_hat)

uh = TrialFunction(T)
vh = TestFunction(T)
k2 = -inner(grad(vh), grad(uh)).scale - gamma

count = 0
def compute_rhs(duf_hat, uf_hat, up, T, w0):
    global count
    count += 1
    duf_hat.fill(0)
    u_hat, f_hat = uf_hat[:]
    du_hat, df_hat = duf_hat[:]
    df_hat[:] = k2*u_hat
    up = T.backward(u_hat, up)
    df_hat += T.forward(gamma*up**3, w0)
    du_hat[:] = f_hat
    return duf_hat

def energy_fourier(comm, a):
    result = 2*np.sum(abs(a[..., 1:-1])**2) + np.sum(abs(a[..., 0])**2) + np.sum(abs(a[..., -1])**2)
    result = comm.allreduce(result)
    return result

# Integrate using a 4th order Runge-Kutta method
a = [1./6., 1./3., 1./3., 1./6.]  # Runge-Kutta parameter
b = [0.5, 0.5, 1.]                # Runge-Kutta parameter
t = 0.0
dt = 0.005
end_time = 1.
tstep = 0
if rank == 0:
    plt.figure()
    image = plt.contourf(X[1][..., 0], X[0][..., 0], u[..., 16], 100)
    plt.draw()
    plt.pause(1e-4)
t0 = time()
K = np.array(T.local_wavenumbers(True, True, True))
TV = VectorTensorProductSpace([T, T, T])
gradu = Array(TV)
while t < end_time-1e-8:
    t += dt
    tstep += 1
    uf_hat1[:] = uf_hat0[:] = uf_hat
    for rk in range(4):
        duf = compute_rhs(duf, uf_hat, up, T, w0)
        if rk < 3:
            uf_hat[:] = uf_hat0 + b[rk]*dt*duf
        uf_hat1 += a[rk]*dt*duf
    uf_hat[:] = uf_hat1
    if tstep % 100 == 0:
        uf = TT.backward(uf_hat, uf)
        ekin = 0.5*energy_fourier(T.comm, f_hat)
        es = 0.5*energy_fourier(T.comm, 1j*K*u_hat)
        eg = gamma*np.sum(0.5*u**2 - 0.25*u**4)/np.prod(np.array(N))
        eg = comm.allreduce(eg)
        gradu = TV.backward(1j*K*u_hat, gradu)
        ep = comm.allreduce(np.sum(f*gradu)/np.prod(np.array(N)))
        ea = comm.allreduce(np.sum(np.array(X)*(0.5*f**2 + 0.5*gradu**2
                            - (0.5*u**2 - 0.25*u**4)*f))/np.prod(np.array(N)))
        if rank == 0:
            image.ax.clear()
            image.ax.contourf(X[1][..., 0], X[0][..., 0], u[..., 16], 100)
            plt.pause(1e-6)
            plt.savefig('Klein_Gordon_{}_real_{}.png'.format(N[0], tstep))
            print("Time = %2.2f Total energy = %2.8e Linear momentum %2.8e Angular momentum %2.8e" % (t, ekin+es+eg, ep, ea))
comm.barrier()
print("Time ", time()-t0)
```
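The `energy_fourier` helper above doubles the interior modes of the real-to-complex (last) axis to account for the coefficients not stored due to Hermitian symmetry. By Parseval's relation, this spectral sum then matches the physical-space energy under shenfun's normalized forward transform. A serial NumPy check of that identity (the function below mirrors the solver's `energy_fourier` without the MPI reduction; the explicit `/np.prod(N)` normalization is an assumption standing in for the library's):

```python
import numpy as np

def energy_fourier(a):
    """Sum |a|^2 over the full spectrum when a stores only the
    Hermitian-symmetric half along the last axis."""
    return (2*np.sum(abs(a[..., 1:-1])**2)
            + np.sum(abs(a[..., 0])**2)
            + np.sum(abs(a[..., -1])**2))

N = (16, 16, 16)
u = np.random.random(N)
u_hat = np.fft.rfftn(u)/np.prod(N)   # normalized forward transform

# Parseval: spectral energy equals the mean of u^2 in physical space
spectral = energy_fourier(u_hat)
physical = np.sum(u**2)/np.prod(N)
```

The interior modes (`a[..., 1:-1]`) each represent a stored coefficient plus its unstored complex conjugate, hence the factor 2, while the first and last planes are their own conjugate partners and are counted once.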
