nucleon_elastic_ff.data.h5io

Module provides HDF5 file interfaces.

nucleon_elastic_ff.data.h5io.assert_h5files_equal(actual: str, expected: str, atol: float = 0.0, rtol: float = 1e-07, group_actual: Optional[str] = None, group_expected: Optional[str] = None)[source]

Reads two HDF5 files and compares whether they have equal datasets.

Checks that for each entry |actual - expected| < atol + rtol * |expected| (uses numpy.testing.assert_allclose).

Arguments
actual: str
File name for actual input data.
expected: str
File name for expected input data.
atol: float = 0.0
Absolute error tolerance. See numpy.testing.assert_allclose.
rtol: float = 1.0e-7
Relative error tolerance. See numpy.testing.assert_allclose.
Raises
AssertionError:
If the datasets differ (e.g., a dataset is missing or the actual data does not match the expected data).
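The per-entry tolerance check can be reproduced with numpy alone; a minimal sketch (file handling omitted, array values made up for illustration):

```python
import numpy as np

# The same per-entry check the comparison relies on:
# |actual - expected| < atol + rtol * |expected|
actual = np.array([1.0, 2.0, 3.0])
expected = actual + 1e-9  # deviation well below rtol * |expected|

# Passes silently: the deviation is within tolerance.
np.testing.assert_allclose(actual, expected, atol=0.0, rtol=1e-7)

# A larger deviation raises AssertionError, as assert_h5files_equal would.
try:
    np.testing.assert_allclose(np.array([1.0]), np.array([1.1]), atol=0.0, rtol=1e-7)
except AssertionError:
    pass  # expected: 0.1 exceeds atol + rtol * |expected|
```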
nucleon_elastic_ff.data.h5io.create_dset(h5f: h5py._hl.files.File, key: str, data: Any, overwrite: bool = False)[source]

Creates or overwrites (if requested) dataset in HDF5 file.

Arguments
h5f: h5py.File
The file to write to.
key: str
The name of the dataset.
data: Any
The data for the dataset.
overwrite: bool = False
Whether existing data shall be overwritten.
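The create-or-overwrite semantics can be sketched as follows (a hypothetical re-implementation, not the library source; the file name and helper name are made up):

```python
import os
import tempfile

import h5py
import numpy as np


def create_or_overwrite(h5f: h5py.File, key: str, data, overwrite: bool = False):
    """Sketch of the create_dset semantics: h5py cannot re-create an
    existing dataset in place, so an existing entry is deleted first."""
    if key in h5f:
        if not overwrite:
            raise KeyError(f"Dataset '{key}' already exists and overwrite=False.")
        del h5f[key]  # free the name before re-creating the dataset
    h5f.create_dataset(key, data=data)


with tempfile.TemporaryDirectory() as tmp:
    with h5py.File(os.path.join(tmp, "out.h5"), "w") as h5f:
        create_or_overwrite(h5f, "corr/local", np.arange(4))
        # Second write only succeeds because overwrite=True:
        create_or_overwrite(h5f, "corr/local", np.ones(4), overwrite=True)
```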
nucleon_elastic_ff.data.h5io.get_dset_chunks(dset: h5py._hl.dataset.Dataset, chunk_size: int) → Iterable[numpy.ndarray][source]

Returns components of data sliced in chunks determined by the chunk size.

This reduces the memory size when loading the array.

Arguments
dset: h5py.Dataset
Input data set to read.
chunk_size: int
Size of the chunks to load in. Slices the first dimension of the input dataset. Must be less than or equal to the size of the first dataset dimension.
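The chunked read can be sketched with plain slicing along the first axis (a hypothetical re-implementation; it works on numpy arrays and h5py datasets alike, since both support the same slicing):

```python
from typing import Iterable

import numpy as np


def chunked(dset, chunk_size: int) -> Iterable[np.ndarray]:
    """Yield consecutive slices of the first axis, each at most chunk_size long."""
    n_total = dset.shape[0]
    if chunk_size > n_total:
        raise ValueError("chunk_size exceeds the first dataset dimension.")
    for start in range(0, n_total, chunk_size):
        # Only this slice is materialized in memory at a time.
        yield dset[start : start + chunk_size]


data = np.arange(10).reshape(5, 2)
chunks = list(chunked(data, 2))  # shapes: (2, 2), (2, 2), (1, 2)
```

Note that the last chunk may be shorter when the first dimension is not divisible by `chunk_size`.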
nucleon_elastic_ff.data.h5io.get_dsets(container: Union[h5py._hl.files.File, h5py._hl.group.Group], parent_name: Optional[str] = None, load_dsets: bool = False, ignore_containers: Optional[List[str]] = None) → Dict[str, Union[h5py._hl.dataset.Dataset, numpy.ndarray]][source]

Accesses an HDF5 container and extracts its datasets.

The method is called recursively if the container contains further containers.

Arguments
container: Union[h5py.File, h5py.Group]
The HDF5 group or file to search recursively.
parent_name: Optional[str] = None
The name of the parent container.
load_dsets: bool = False
If False, datasets are not loaded (lazy access) and the returned values are h5py.Dataset handles. If True, returns a Dict with numpy arrays as values.
ignore_containers: Optional[List[str]] = None
A list of HDF5 containers to ignore during the recursive search. Entries may be regular expressions.
Returns
datasets: Dict[str, Union[h5py.Dataset, np.ndarray]]
A dictionary with the full HDF5 paths of the datasets (e.g., groupA/subgroupB) as keys and the (unloaded) datasets as values.
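The recursive extraction can be sketched as follows (a hypothetical re-implementation without the ignore_containers filtering; the file, group, and helper names are made up):

```python
import os
import tempfile
from typing import Dict, Optional, Union

import h5py
import numpy as np


def collect_dsets(
    container: Union[h5py.File, h5py.Group],
    parent_name: Optional[str] = None,
    load_dsets: bool = False,
) -> Dict[str, Union[h5py.Dataset, np.ndarray]]:
    """Walk groups recursively and key each dataset by its full HDF5 path."""
    dsets = {}
    for key, obj in container.items():
        path = f"{parent_name}/{key}" if parent_name else key
        if isinstance(obj, h5py.Group):
            # Recurse into subgroups, carrying the accumulated path along.
            dsets.update(collect_dsets(obj, parent_name=path, load_dsets=load_dsets))
        else:
            # obj[()] reads the full array; otherwise keep the lazy handle.
            dsets[path] = obj[()] if load_dsets else obj
    return dsets


with tempfile.TemporaryDirectory() as tmp:
    with h5py.File(os.path.join(tmp, "tree.h5"), "w") as h5f:
        h5f["groupA/subgroupB/data"] = np.arange(3)
        h5f["top"] = np.ones(2)
        dsets = collect_dsets(h5f, load_dsets=True)
        # keys: "groupA/subgroupB/data" and "top"
```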