shenfun.io package

Module contents

class shenfun.io.Checkpoint(filename, checkevery=10, data={})[source]

Bases: object

Class for checkpointing simulations

Checkpoint data are used to store intermediate simulation results, and can be used to restart a simulation at a later stage, with no loss of accuracy.

Data is provided as dictionaries. The checkpoint dictionary is represented as:

data = {
        '0': {'U': [U_hat], 'phi': [phi_hat]},
        '1': {'U': [U0_hat], 'phi': [phi_hat0]},
         ...
        }

which contains solutions to be stored at possibly several different timesteps. The current timestep is '0', the previous is '1', and so on if the integrator needs more. Note that the checkpoint stores results in spectral space.
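
A minimal sketch of this pattern, assuming a simple 1D Fourier space; the filename 'chkpointfile' and the time loop are purely illustrative:

import numpy as np
from shenfun import FunctionSpace, Function
from shenfun.io import Checkpoint

V = FunctionSpace(16, 'Fourier', domain=(0, 2*np.pi))
U_hat = Function(V)    # current solution, spectral space
U0_hat = Function(V)   # previous solution, spectral space

chk = Checkpoint('chkpointfile', checkevery=10,
                 data={'0': {'U': [U_hat]}, '1': {'U': [U0_hat]}})

dt = 0.01
for tstep in range(1, 101):
    # ... advance U_hat and U0_hat here ...
    chk.update(tstep*dt, tstep)  # presumably dumps every `checkevery` step
chk.close()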

static check_if_kill()[source]

Check if the user has placed a file named killshenfun in the running folder.

close()[source]
open(mode='r+')[source]
read(u, name, **kw)[source]
update(t, tstep)[source]
write(step, d)[source]
class shenfun.io.HDF5File(h5name, domain=None, mode='a', **kw)[source]

Bases: FileBase

Class for reading/writing data to HDF5 format

Parameters:
  • h5name (str) – Name of hdf5 file to be created.

  • domain (sequence, optional) – An optional spatial mesh or domain to go with the data (see the sketch after this parameter list). Sequence of either

    • 2-tuples, where each 2-tuple contains the (origin, length) of each dimension, e.g., (0, 2*pi).

    • Arrays of coordinates, e.g., np.linspace(0, 2*pi, N). One array per dimension.

  • mode (str, optional) – r, w or a for read, write or append. Default is a.

  • kw (dict, optional) – Optional additional keyword arguments used when creating the file used to store data.
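
As a sketch, the two forms of domain could be passed as follows (filenames and sizes are illustrative):

import numpy as np
from shenfun.io import HDF5File

# domain as 2-tuples of (origin, length), one per dimension
f0 = HDF5File('byorigin.h5', mode='w',
              domain=((0, 2*np.pi), (0, 2*np.pi)))

# domain as arrays of coordinates, one per dimension
f1 = HDF5File('bycoords.h5', mode='w',
              domain=(np.linspace(0, 2*np.pi, 15),
                      np.linspace(0, 2*np.pi, 16)))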

static backend()[source]

Return which backend is used to store data

open(mode='r+')[source]

Open the self.filename file for reading or writing

Parameters:

mode (str) – Open file in this mode. Default is ‘r+’.

read(u, name, **kw)[source]

Read field name into distributed array u

Parameters:
  • u (array) – The DistArray to read into.

  • name (str) – Name of field to be read.

  • step (int, optional) – Index of field to be read. Default is 0.

write(step, fields, **kw)[source]

Write snapshot step of fields to HDF5 file

Parameters:
  • step (int) – Index of snapshot.

  • fields (dict) – The fields to be dumped to file. The keys are group names and the values are lists of either arrays or 2-tuples. The arrays are complete arrays to be stored, whereas the 2-tuples are arrays with associated global slices.

  • as_scalar (boolean, optional) – Whether to store rank > 0 arrays as scalars. Default is False.

Example

>>> from mpi4py import MPI
>>> from mpi4py_fft import PFFT, HDF5File, newDistArray
>>> comm = MPI.COMM_WORLD
>>> T = PFFT(comm, (15, 16, 17))
>>> u = newDistArray(T, forward_output=False, val=1)
>>> v = newDistArray(T, forward_output=False, val=2)
>>> f = HDF5File('h5filename.h5', mode='w')
>>> f.write(0, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})
>>> f.write(1, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})

This stores data within two main groups u and v. The HDF5 file will in the end contain groups:

/u/3D/{0, 1}
/u/2D/slice_4_slice/{0, 1}
/v/3D/{0, 1}
/v/1D/slice_5_5/{0, 1}

Note

The slices used to store only parts of the arrays refer to the global arrays, not to the local (distributed) parts.
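
Continuing the example above, a stored snapshot can be read back into a freshly created distributed array (a sketch; the final check simply verifies that the data round-trips):

>>> u0 = newDistArray(T, forward_output=False)
>>> g = HDF5File('h5filename.h5', mode='r')
>>> g.read(u0, 'u', step=1)
>>> assert abs(u0 - u).max() < 1e-12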

class shenfun.io.NCFile(ncname, domain=None, mode='a', clobber=True, **kw)[source]

Bases: FileBase

Class for reading/writing data to NetCDF4 format

Parameters:
  • ncname (str) – Name of netcdf file to be created

  • domain (Sequence, optional) – An optional spatial mesh or domain to go with the data. Sequence of either

    • 2-tuples, where each 2-tuple contains the (origin, length) of each dimension, e.g., (0, 2*pi).

    • Arrays of coordinates, e.g., np.linspace(0, 2*pi, N). One array per dimension.

  • mode (str) – r, w or a for read, write or append. Default is a.

  • clobber (bool, optional) – If True (default), opening a file with mode=’w’ will clobber an existing file with the same name. If False, an exception will be raised if a file with the same name already exists.

  • kw (dict, optional) – Optional additional keyword arguments used when creating the file used to store data.

Note

Each class instance creates one unique NetCDF4-file, with one step-counter. It is possible to store multiple fields in each file, but all snapshots of the fields must be taken at the same time. If you want one field stored every 10th timestep and another every 20th timestep, then use two different class instances with two different values of ncname.
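
A sketch of the pattern described in the note; the setup mirrors the example further below, and the filenames are illustrative:

from mpi4py import MPI
from mpi4py_fft import PFFT, NCFile, newDistArray

comm = MPI.COMM_WORLD
T = PFFT(comm, (15, 16, 17))
u = newDistArray(T, forward_output=False, val=1)
v = newDistArray(T, forward_output=False, val=2)

# Two class instances with two different filenames, each with its
# own step counter
f10 = NCFile('every10.nc', mode='w')
f20 = NCFile('every20.nc', mode='w')
for tstep in range(100):
    # ... advance u and v here ...
    if tstep % 10 == 0:
        f10.write(tstep // 10, {'u': [u]})
    if tstep % 20 == 0:
        f20.write(tstep // 20, {'v': [v]})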

static backend()[source]

Return which backend is used to store data

open(mode='r+')[source]

Open the self.filename file for reading or writing

Parameters:

mode (str) – Open file in this mode. Default is ‘r+’.

read(u, name, **kw)[source]

Read field name into distributed array u

Parameters:
  • u (array) – The DistArray to read into.

  • name (str) – Name of field to be read.

  • step (int, optional) – Index of field to be read. Default is 0.

write(step, fields, **kw)[source]

Write snapshot step of fields to NetCDF4 file

Parameters:
  • step (int) – Index of snapshot.

  • fields (dict) – The fields to be dumped to file. The keys are group names and the values are lists of either arrays or 2-tuples. The arrays are complete arrays to be stored, whereas the 2-tuples are arrays with associated global slices.

  • as_scalar (boolean, optional) – Whether to store rank > 0 arrays as scalars. Default is False.

Example

>>> from mpi4py import MPI
>>> from mpi4py_fft import PFFT, NCFile, newDistArray
>>> comm = MPI.COMM_WORLD
>>> T = PFFT(comm, (15, 16, 17))
>>> u = newDistArray(T, forward_output=False, val=1)
>>> v = newDistArray(T, forward_output=False, val=2)
>>> f = NCFile('ncfilename.nc', mode='w')
>>> f.write(0, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})
>>> f.write(1, {'u': [u, (u, [slice(None), 4, slice(None)])],
...             'v': [v, (v, [slice(None), 5, 5])]})

This stores the following datasets to the file ncfilename.nc. Running ‘ncdump -h ncfilename.nc’ in a terminal gives:

netcdf ncfilename {
dimensions:
        time = UNLIMITED ; // (2 currently)
        x = 15 ;
        y = 16 ;
        z = 17 ;
variables:
        double time(time) ;
        double x(x) ;
        double y(y) ;
        double z(z) ;
        double u(time, x, y, z) ;
        double u_slice_4_slice(time, x, z) ;
        double v(time, x, y, z) ;
        double v_slice_5_5(time, x) ;
}
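
Continuing the example above, a stored snapshot can be read back into a distributed array (a sketch, assuming the file written above is still present):

>>> u0 = newDistArray(T, forward_output=False)
>>> g = NCFile('ncfilename.nc', mode='r')
>>> g.read(u0, 'u', step=1)
>>> assert abs(u0 - u).max() < 1e-12
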
shenfun.io.ShenfunFile(name, T, backend='hdf5', mode='r', mesh='quadrature', **kw)[source]

Return a file handler

Parameters:
  • name (str) – Name of file, without file extension

  • T (TensorProductSpace) – The space used for the data stored. Can also be CompositeSpace or VectorSpace.

  • backend (str, optional) – hdf5 or netcdf4. Default is hdf5.

  • mode (str, optional) – r or w. Default is r.

  • mesh (str, optional) –

    • ‘quadrature’ - use the quadrature mesh of T

    • ‘uniform’ - use uniform mesh for non-periodic bases

Returns:

Instance of either HDF5File or NCFile

Return type:

Class instance
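
A minimal sketch of obtaining and using a file handler for a 2D TensorProductSpace; the filename 'mydata' and the field name 'u' are arbitrary choices for illustration:

from mpi4py import MPI
from shenfun import FunctionSpace, TensorProductSpace, Array
from shenfun.io import ShenfunFile

C0 = FunctionSpace(16, 'Chebyshev')
F0 = FunctionSpace(16, 'Fourier', dtype='d')
T = TensorProductSpace(MPI.COMM_WORLD, (C0, F0))
u = Array(T, val=1)                      # field in physical space

fl = ShenfunFile('mydata', T, backend='hdf5', mode='w')
fl.write(0, {'u': [u]})                  # presumably stored as mydata.h5,
                                         # since the suffix is added by the backend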

shenfun.io.generate_xdmf(h5filename, periodic=True, order='visit')[source]

Generate XDMF-files

Parameters:
  • h5filename (str) – Name of hdf5-file that we want to decorate with xdmf

  • periodic (bool or dim-sequence of bools, optional) – If True along axis i, assume the data is periodic along that axis. Only affects the calculation of the domain size, and only if the domain is given as a 2-tuple of origin+dx.

  • order (str) – ‘paraview’ or ‘visit’. For some reason ParaView and VisIt require the mesh to be stored in opposite orders in the XDMF-file for 2D slices. Choose the order here.
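
As a sketch, the HDF5 file written in the HDF5File example above could be decorated like this; the generated XDMF files can then be opened directly in ParaView or VisIt:

from shenfun.io import generate_xdmf

generate_xdmf('h5filename.h5', periodic=True, order='paraview')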