decode.utils package#
Submodules#
decode.utils.bookkeeping module#
decode.utils.calibration_io module#
decode.utils.checkpoint module#
- class decode.utils.checkpoint.CheckPoint(path)[source]#
Bases: object
Checkpointing intended to resume an already started training.
Warning: Checkpointing is not intended for long-term storage of models or other information. No version compatibility guarantees are given here at all.
- Parameters:
path (Union[str, Path]) – filename / path where to dump the checkpoints
- property dict#
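A minimal usage sketch. Only the constructor signature and the dict property are documented above; the path is a placeholder and how the checkpoint is updated during training is not shown:
>>> from decode.utils.checkpoint import CheckPoint
>>> ckpt = CheckPoint('out/ckpt.pt')  # placeholder path
>>> state = ckpt.dict  # serialisable checkpoint state, e.g. for inspection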
decode.utils.emitter_io module#
- class decode.utils.emitter_io.EmitterWriteStream(name, suffix, path, last_index)[source]#
Bases: object
Stream to save emitters when fitting is performed online and in chunks.
- Parameters:
name (str) – name of the stream
suffix (str) – suffix of the file
path (Path) – destination directory
last_index (str) – either ‘including’ or ‘excluding’ (does 0:500 include or exclude index 500). While excluding is pythonic, it is not what is common for saved files.
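A sketch of chunked streaming during online fitting. Only the constructor signature is documented above; the per-chunk write call below is an assumption about the stream interface, and fit_chunks is a hypothetical iterable of EmitterSets:
>>> from pathlib import Path
>>> from decode.utils.emitter_io import EmitterWriteStream
>>> stream = EmitterWriteStream('fit', '.h5', Path('out'), last_index='including')
>>> for i, em in enumerate(fit_chunks):  # fit_chunks: hypothetical chunk source
>>>     stream(em, i)  # assumed call-per-chunk interface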
- decode.utils.emitter_io.load_csv(path, mapping={'bg_cr': 'bg_cr', 'bg_sig': 'bg_sig', 'frame_ix': 'frame_ix', 'phot': 'phot', 'phot_cr': 'phot_cr', 'phot_sig': 'phot_sig', 'x': 'x', 'x_cr': 'x_cr', 'x_sig': 'x_sig', 'y': 'y', 'y_cr': 'y_cr', 'y_sig': 'y_sig', 'z': 'z', 'z_cr': 'z_cr', 'z_sig': 'z_sig'}, skiprows=3, line_em_meta=2, line_decode_meta=1, **pd_csv_args)[source]#
Loads a CSV file that provides a header.
- Parameters:
path ((str, Path)) – path to file
mapping ((None, dict)) – mapping dictionary with keys at least (‘x’, ‘y’, ‘z’, ‘phot’, ‘id’, ‘frame_ix’)
skiprows (int) – number of skipped rows before the header
line_em_meta (Optional[int]) – line index where metadata of emitters is present (set None for no metadata)
line_decode_meta (Optional[int]) – line index where decode metadata is present (set None for no decode metadata)
pd_csv_args – additional keyword arguments passed to the pandas CSV reader
- Returns:
Tuple of dicts containing
emitter_data (dict): core emitter data
emitter_meta (dict): emitter meta information
decode_meta (dict): decode meta information
- Return type:
(dict, dict, dict)
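A minimal sketch using the default mapping; ‘fit.csv’ is a placeholder file in the CSV layout described above:
>>> from decode.utils.emitter_io import load_csv
>>> data, em_meta, decode_meta = load_csv('fit.csv')
>>> data['x']  # column access via the mapping keys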
- decode.utils.emitter_io.load_h5(path)[source]#
Loads an hdf5 file and returns data, metadata and decode meta.
- Returns:
Tuple of dicts containing
emitter_data (dict): core emitter data
emitter_meta (dict): emitter meta information
decode_meta (dict): decode meta information
- Return type:
(dict, dict, dict)
- decode.utils.emitter_io.load_smap(path, mapping=None)[source]#
- Parameters:
path – .mat file
mapping (optional) – mapping of matlab fields to emitter. Keys must be x,y,z,phot,frame_ix,bg
**emitter_kwargs – additional arguments passed to the emitter initialisation
- Returns:
Tuple of dicts containing
emitter_data (dict): core emitter data
emitter_meta (dict): emitter meta information
decode_meta (dict): decode meta information
- Return type:
(dict, dict, dict)
- decode.utils.emitter_io.load_torch(path)[source]#
Loads a torch-saved emitterset and returns data, metadata and decode meta.
- Returns:
Tuple of dicts containing
emitter_data (dict): core emitter data
emitter_meta (dict): emitter meta information
decode_meta (dict): decode meta information
- Return type:
(dict, dict, dict)
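The binary loaders above share the (data, meta, decode_meta) return convention, so they can be used interchangeably; file names below are placeholders:
>>> from decode.utils import emitter_io
>>> data, meta, decode_meta = emitter_io.load_h5('emitters.h5')
>>> data, meta, decode_meta = emitter_io.load_torch('emitters.pt')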
decode.utils.emitter_trafo module#
- decode.utils.emitter_trafo.transform_emitter(em, trafo)[source]#
Transform a set of emitters specified by a transformation dictionary. Returns transformed emitterset.
- Parameters:
em (EmitterSet) – emitterset to be transformed
trafo (dict) – transformation specs
- Return type:
EmitterSet
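A hedged sketch: em is an existing EmitterSet and my_trafo is a hypothetical spec dictionary; predefined spec dictionaries may ship with this module depending on the DECODE version, so consult the source for the exact keys:
>>> from decode.utils.emitter_trafo import transform_emitter
>>> em_t = transform_emitter(em, trafo=my_trafo)  # my_trafo: hypothetical spec dict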
decode.utils.example_helper module#
Supplementary code for code examples (mainly jupyter notebooks). Some of this seems rather unabstract and hard-coded, but it is a dedicated example helper …
decode.utils.frames_io module#
- class decode.utils.frames_io.BatchFileLoader(par_folder, file_suffix='.tif', file_loader=None, exclude_pattern=None)[source]#
Bases: object
Iterates through the parent folder and yields the loaded frames together with the respective filename.
Example
>>> batch_loader = BatchFileLoader('dummy_folder')
>>> for frame, file in batch_loader:
>>>     out = model.forward(frame)
- Parameters:
par_folder (Union[str, Path]) – parent folder in which the files are
file_suffix (str) – suffix to search for
exclude_pattern (Optional[str]) – specifies excluded patterns via regex string. If that pattern is found anywhere (!) in the file’s path, the file will be ignored.
- class decode.utils.frames_io.TiffTensor(file, dtype='float32')[source]#
Bases: object
Memory-mapped tensor. Data is only loaded to the extent to which the object is accessed through brackets ‘[ ]’. Therefore, this tensor has no value and no state until it is sliced, at which point it returns a torch tensor. You can of course force loading of the whole tiff via tiff_tensor[:].
- Parameters:
file – path to tiff file
dtype – data type to which to convert
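A minimal sketch of the lazy access semantics described above; ‘frames.tif’ is a placeholder:
>>> from decode.utils.frames_io import TiffTensor
>>> frames = TiffTensor('frames.tif')
>>> first = frames[0]  # loads only the sliced part, returns a torch tensor
>>> all_frames = frames[:]  # forces loading of the whole tiff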
- decode.utils.frames_io.load_tif(path, multifile=True)[source]#
Reads tif(f) files. When a folder is specified, potentially multiple files are loaded and stacked into a new first axis. If you provide multiple files (i.e. a folder), make sure that sorting yields the correct order; otherwise we cannot guarantee anything.
- Parameters:
path ((str, Path)) – path to the tiff or folder
multifile – auto-load multi-file tiff (for large frame stacks). When path is a directory, multifile is automatically disabled.
- Returns:
frames
- Return type:
torch.Tensor
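A minimal sketch of the two call modes; paths are placeholders:
>>> from decode.utils.frames_io import load_tif
>>> frames = load_tif('frames.tif')  # single (possibly multi-file) tiff
>>> stack = load_tif('tiff_folder/')  # folder: files stacked along a new first axis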
decode.utils.loader module#
- decode.utils.loader.check_file(file, hash=None)[source]#
Checks if a file exists and if the sha256 hash is correct
- Parameters:
file ((str, Path)) – path to file
hash – sha256 hash to check against (optional)
- Returns:
true if file exists and hash is correct (if specified), false otherwise
- Return type:
bool
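A minimal sketch; the path and the (truncated) sha256 value are placeholders:
>>> from decode.utils.loader import check_file
>>> ok = check_file('model.pt', hash='9f2c...')  # placeholder path, truncated hash
>>> assert ok, 'file missing or hash mismatch'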
decode.utils.model_io module#
- class decode.utils.model_io.LoadSaveModel(model_instance, output_file, input_file=None, name_time_interval=3600, better_th=1e-06, max_files=3, state_dict_update=None)[source]#
Bases: object
- decode.utils.model_io.hash_model(modelfile)[source]#
Calculate hash and show it to the user. (https://www.pythoncentral.io/hashing-files-with-python/)
decode.utils.notebooks module#
- decode.utils.notebooks.copy_pkg_file(package, file, destination)[source]#
Copies a package file to a destination folder.
decode.utils.param_io module#
- class decode.utils.param_io.ParamHandling[source]#
Bases: object
- convert_param_file(file_in, file_out)[source]#
Simple wrapper to convert a parameter file between json and yaml
- file_extensions = ('.json', '.yml', '.yaml')#
- load_params(filename)[source]#
Load parameters from file
- Parameters:
filename (str) – path to the parameter file
- write_params(filename, param)[source]#
Write parameter file to path
- Parameters:
filename (Union[str, Path]) –
param (Union[dict, RecursiveNamespace]) –
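A minimal sketch of a round trip; file names are placeholders and must carry one of the extensions listed in file_extensions:
>>> from decode.utils.param_io import ParamHandling
>>> ph = ParamHandling()
>>> param = ph.load_params('param.yaml')
>>> ph.write_params('param_out.yaml', param)
>>> ph.convert_param_file('param.yaml', 'param.json')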
- decode.utils.param_io.add_root_relative(path, root)[source]#
Adds the root to a path if the path is not absolute
- Parameters:
path (str, Path) – path to file
root (str, Path) – root path
- Returns:
absolute path to file
- Return type:
Path
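A minimal sketch of the documented behaviour: relative paths are prefixed with the root, absolute paths pass through unchanged; paths are placeholders:
>>> from pathlib import Path
>>> from decode.utils.param_io import add_root_relative
>>> add_root_relative('calib.mat', Path('/data'))  # -> Path('/data/calib.mat')
>>> add_root_relative('/tmp/calib.mat', Path('/data'))  # absolute, returned as-is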
- decode.utils.param_io.autofill_dict(x, reference, mode_missing='include')[source]#
Fill dict x with keys and values of reference if they are not present in x.
- Parameters:
x (dict) – input dict to be filled
reference (dict) – reference dictionary
mode_missing (str) –
- Return type:
dict
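A minimal flat example of the fill semantics (the handling of nested dicts and of mode_missing is not shown here):
>>> from decode.utils.param_io import autofill_dict
>>> autofill_dict({'a': 1}, {'a': 0, 'b': 2})  # -> {'a': 1, 'b': 2}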
- decode.utils.param_io.copy_reference_param(path)[source]#
Copies our param references to the desired path
- Parameters:
path (Union[str, Path]) – destination path, must exist and be a directory
decode.utils.types module#
- class decode.utils.types.RecursiveNamespace(**kwargs)[source]#
Bases: SimpleNamespace
Extension of SimpleNamespace to recursive dictionaries. Inspired by https://dev.to/taqkarim/extending-simplenamespace-for-nested-dictionaries-58e8
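A minimal sketch of the recursive attribute access:
>>> from decode.utils.types import RecursiveNamespace
>>> ns = RecursiveNamespace(**{'camera': {'baseline': 100.0}})
>>> ns.camera.baseline  # -> 100.0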