decode.utils package#

Submodules#

decode.utils.bookkeeping module#

decode.utils.bookkeeping.decode_state()[source]#

Get the version tag of decode. If run inside the repository, this returns the output of git describe.

Returns the output of git describe, the decode version, or the decode version with ‘invalid’ appended.

Return type:

str

decode.utils.calibration_io module#

class decode.utils.calibration_io.SMAPSplineCoefficient(calib_file)[source]#

Bases: object

Loads a calibration file from SMAP and the relevant meta information.

Parameters:

calib_file – path to the SMAP calibration file

init_spline(xextent, yextent, img_shape, device='cpu', **kwargs)[source]#

Initializes the CubicSpline function

Parameters:
  • xextent

  • yextent

  • img_shape

  • device – on which device to simulate


decode.utils.checkpoint module#

class decode.utils.checkpoint.CheckPoint(path)[source]#

Bases: object

Checkpointing intended to resume an already started training.

Warning

Checkpointing is not intended for long-term storage of models or other information. No version compatibility guarantees are given here at all.

Parameters:

path (Union[str, Path]) – filename / path where to dump the checkpoints

property dict#
dump(model_state, optimizer_state, lr_sched_state, step, log=None)[source]#

Updates and saves to file.

classmethod load(path, path_out=None)[source]#
save()[source]#
update(model_state, optimizer_state, lr_sched_state, step, log=None)[source]#
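The class bundles model, optimizer and scheduler state together with the training step, so a run can be resumed from disk. A minimal stdlib sketch of that resume pattern (illustrative only; the JSON serialization and state contents here are stand-ins, not decode's actual torch-based implementation):

```python
import json
import tempfile
from pathlib import Path

# Hypothetical stand-in states; in decode these would come from
# model.state_dict(), optimizer.state_dict() and the lr scheduler.
state = {
    "model_state": {"w": [0.1, 0.2]},
    "optimizer_state": {"lr": 1e-4},
    "lr_sched_state": {"last_epoch": 9},
    "step": 10,
}

path = Path(tempfile.mkdtemp()) / "ckpt.json"

# dump: update the in-memory state and save it to file in one go
path.write_text(json.dumps(state))

# resume: load the checkpoint and continue from the stored step
resumed = json.loads(path.read_text())
start_step = resumed["step"]
```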

decode.utils.emitter_io module#

class decode.utils.emitter_io.EmitterWriteStream(name, suffix, path, last_index)[source]#

Bases: object

Stream to save emitters when fitting is performed online and in chunks.

Parameters:
  • name (str) – name of the stream

  • suffix (str) – suffix of the file

  • path (Path) – destination directory

  • last_index (str) – either ‘including’ or ‘excluding’ (does 0:500 include or exclude index 500). While excluding is pythonic, it is not what is common for saved files.

write(em, ix_low, ix_high)[source]#

Write emitter chunk to file.
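The `last_index` flag decides whether a chunk `0:500` contains index 500. Python's own slicing is the ‘excluding’ convention; a short illustration of the two interpretations:

```python
frames = list(range(600))

# 'excluding' (pythonic): index 500 is NOT part of the chunk
chunk_excl = frames[0:500]

# 'including' (common for saved files): index 500 IS part of the chunk
chunk_incl = frames[0:500 + 1]

print(len(chunk_excl), chunk_excl[-1])  # 500 499
print(len(chunk_incl), chunk_incl[-1])  # 501 500
```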

decode.utils.emitter_io.get_decode_meta()[source]#
Return type:

dict

decode.utils.emitter_io.load_csv(path, mapping={'bg_cr': 'bg_cr', 'bg_sig': 'bg_sig', 'frame_ix': 'frame_ix', 'phot': 'phot', 'phot_cr': 'phot_cr', 'phot_sig': 'phot_sig', 'x': 'x', 'x_cr': 'x_cr', 'x_sig': 'x_sig', 'y': 'y', 'y_cr': 'y_cr', 'y_sig': 'y_sig', 'z': 'z', 'z_cr': 'z_cr', 'z_sig': 'z_sig'}, skiprows=3, line_em_meta=2, line_decode_meta=1, **pd_csv_args)[source]#

Loads a CSV file that provides a header.

Parameters:
  • path ((str, Path)) – path to file

  • mapping ((None, dict)) – mapping dictionary with keys at least (‘x’, ‘y’, ‘z’, ‘phot’, ‘id’, ‘frame_ix’)

  • skiprows (int) – number of skipped rows before header

  • line_em_meta (Optional[int]) – line ix where metadata of emitters is present (set None for no meta data)

  • line_decode_meta (Optional[int]) – line ix where decode metadata is present (set None for no decode meta)

  • pd_csv_args – additional keyword arguments passed to the pandas csv reader

Returns:

Tuple of dicts containing

  • emitter_data (dict): core emitter data

  • emitter_meta (dict): emitter meta information

  • decode_meta (dict): decode meta information

Return type:

(dict, dict, dict)
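With the default arguments, `load_csv` expects the decode metadata on line 1, the emitter metadata on line 2, and the column header after `skiprows=3`. A stdlib sketch of reading such a layout (the exact comment format of real DECODE files is an assumption here, as are the metadata contents):

```python
import csv
import io

# Hypothetical file content following the default line layout
raw = (
    "# decode_meta: {'version': '0.10'}\n"   # line 1: decode meta
    "# emitter_meta: {'xy_unit': 'nm'}\n"    # line 2: emitter meta
    "#\n"                                    # line 3: padding (skiprows=3)
    "x,y,z,phot,frame_ix\n"
    "1.0,2.0,0.0,500.0,0\n"
    "1.5,2.5,0.1,450.0,1\n"
)

lines = raw.splitlines()
decode_meta = lines[0]   # line_decode_meta=1
emitter_meta = lines[1]  # line_em_meta=2

# skip the first three rows, then read header + data
reader = csv.DictReader(io.StringIO("\n".join(lines[3:])))
rows = list(reader)
```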

decode.utils.emitter_io.load_h5(path)[source]#

Loads a hdf5 file and returns data, metadata and decode meta.

Returns:

Tuple of dicts containing

  • emitter_data (dict): core emitter data

  • emitter_meta (dict): emitter meta information

  • decode_meta (dict): decode meta information

Return type:

(dict, dict, dict)

decode.utils.emitter_io.load_smap(path, mapping=None)[source]#
Parameters:
  • path – .mat file

  • mapping (optional) – mapping of matlab fields to emitter. Keys must be x,y,z,phot,frame_ix,bg

  • **emitter_kwargs – additional arguments passed to the emitter initialisation

Returns:

Tuple of dicts containing

  • emitter_data (dict): core emitter data

  • emitter_meta (dict): emitter meta information

  • decode_meta (dict): decode meta information

Return type:

(dict, dict, dict)

decode.utils.emitter_io.load_torch(path)[source]#

Loads a torch saved emitterset and returns data, metadata and decode meta.

Returns:

Tuple of dicts containing

  • emitter_data (dict): core emitter data

  • emitter_meta (dict): emitter meta information

  • decode_meta (dict): decode meta information

Return type:

(dict, dict, dict)

decode.utils.emitter_io.save_csv(path, data, metadata)[source]#
Return type:

None

decode.utils.emitter_io.save_h5(path, data, metadata)[source]#
Return type:

None

decode.utils.emitter_io.save_torch(path, data, metadata)[source]#

decode.utils.emitter_trafo module#

decode.utils.emitter_trafo.transform_emitter(em, trafo)[source]#

Transform a set of emitters specified by a transformation dictionary. Returns transformed emitterset.

Parameters:
  • em (EmitterSet) – emitterset to be transformed

  • trafo (dict) – transformation specs

Return type:

EmitterSet

decode.utils.example_helper module#

Supplementary code for code examples (mainly Jupyter notebooks). Some of this is deliberately less abstract and hard-coded, but it is a dedicated example helper.

decode.utils.example_helper.load_example_package(path, url, hash)[source]#
Parameters:
  • path (Path) – destination where to save example package

  • url (str) –

  • hash (str) – SHA-256 hash

decode.utils.example_helper.load_gateway()[source]#

decode.utils.frames_io module#

class decode.utils.frames_io.BatchFileLoader(par_folder, file_suffix='.tif', file_loader=None, exclude_pattern=None)[source]#

Bases: object

Iterates through the parent folder and yields the loaded frames together with the corresponding filename.

Example

>>> batch_loader = BatchFileLoader('dummy_folder')
>>> for frame, file in batch_loader:
...     out = model.forward(frame)
Parameters:
  • par_folder (Union[str, Path]) – parent folder in which the files are

  • file_suffix (str) – suffix to search for

  • exclude_pattern (Optional[str]) – specifies excluded patterns via regex string. If that pattern is found anywhere (!) in the file’s path, the file will be ignored.

remove_by_exclude()[source]#

Removes the files that match the exclude pattern
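Since the exclude pattern is matched anywhere in the file path, the filtering can be sketched with `re.search` (the matching semantics mirror the docstring; the file names are made up):

```python
import re
from pathlib import Path

files = [
    Path("data/stack_001.tif"),
    Path("data/darkframe_001.tif"),
    Path("data/stack_002.tif"),
]

exclude_pattern = r"dark"  # anything whose path contains 'dark' is ignored
kept = [f for f in files if re.search(exclude_pattern, str(f)) is None]

print([f.name for f in kept])  # ['stack_001.tif', 'stack_002.tif']
```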

class decode.utils.frames_io.TiffTensor(file, dtype='float32')[source]#

Bases: object

Memory-mapped tensor. Note that data is loaded only to the extent to which the object is accessed through brackets ‘[ ]’. Therefore, this tensor has no value and no state until it is sliced, upon which it returns a torch tensor. You can of course force loading the whole tiff with tiff_tensor[:].

Parameters:
  • file – path to tiff file

  • dtype – data type to which to convert

decode.utils.frames_io.load_tif(path, multifile=True)[source]#

Reads the tif(f) file(s). When a folder is specified, multiple files may be loaded and stacked along a new first axis. If you provide multiple files (i.e. a folder), make sure that sorting the filenames yields the correct order; otherwise the frame order cannot be guaranteed.

Parameters:
  • path ((str, Path)) – path to the tiff / or folder

  • multifile – auto-load multi-file tiff (for large frame stacks). When path is a directory, multifile is automatically disabled.

Returns:

frames

Return type:

torch.Tensor
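When loading from a folder, the frame order depends on how the filenames sort; plain lexicographic sorting misorders unpadded numbers, so zero-padded names are the safe choice. A quick illustration of the pitfall:

```python
# lexicographic sorting puts '10' before '2' when numbers are not zero-padded
unpadded = sorted(["frame_2.tif", "frame_10.tif", "frame_1.tif"])
print(unpadded)  # ['frame_1.tif', 'frame_10.tif', 'frame_2.tif']

# zero-padded names sort in the intended numeric order
padded = sorted(["frame_02.tif", "frame_10.tif", "frame_01.tif"])
print(padded)  # ['frame_01.tif', 'frame_02.tif', 'frame_10.tif']
```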

decode.utils.loader module#

decode.utils.loader.check_file(file, hash=None)[source]#

Checks if a file exists and if the sha256 hash is correct

Parameters:
  • file ((str, Path)) –

  • hash

Returns:

true if file exists and hash is correct (if specified), false otherwise

Return type:

bool
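The existence and hash comparison performed by `check_file` can be sketched with `hashlib` (a minimal stand-in, not decode's actual implementation):

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of(file: Path) -> str:
    """Hash a file in chunks so large files need not fit in memory."""
    h = hashlib.sha256()
    with file.open("rb") as f:
        for block in iter(lambda: f.read(65536), b""):
            h.update(block)
    return h.hexdigest()

# write a dummy file and verify it against its expected hash
file = Path(tempfile.mkdtemp()) / "model.pt"
file.write_bytes(b"dummy content")

expected = hashlib.sha256(b"dummy content").hexdigest()
ok = file.exists() and sha256_of(file) == expected
```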

decode.utils.loader.check_load(file, url, hash=None)[source]#

Downloads the file afresh when the check fails

Parameters:
  • file ((str, Path)) –

  • url (str) –

  • hash (Optional[str]) –

decode.utils.loader.load(file, url, hash=None)[source]#

Loads file from URL (and checks hash if present)

Parameters:
  • file ((str, Path)) – path where to store downloaded file

  • url (str) –

  • hash (Optional[str]) –

Return type:

None

decode.utils.model_io module#

class decode.utils.model_io.LoadSaveModel(model_instance, output_file, input_file=None, name_time_interval=3600, better_th=1e-06, max_files=3, state_dict_update=None)[source]#

Bases: object

load_init(device=None)[source]#

Initialise the model, warm-start it (if possible) and ship it to the specified device

Parameters:

device (Union[str, device, None]) –


save(model, metric_val=None)[source]#

Save model (conditioned on a better metric if one is provided)

Parameters:
  • model

  • metric_val

decode.utils.model_io.hash_model(modelfile)[source]#

Calculate hash and show it to the user. (https://www.pythoncentral.io/hashing-files-with-python/)

decode.utils.notebooks module#

decode.utils.notebooks.copy_pkg_file(package, file, destination)[source]#

Copies a package file to a destination folder.

decode.utils.notebooks.load_examples(path)[source]#
Parameters:

path (Union[str, Path]) – destination directory

decode.utils.notebooks.parse_args()[source]#

decode.utils.param_io module#

class decode.utils.param_io.ParamHandling[source]#

Bases: object

static convert_param_debug(param)[source]#
convert_param_file(file_in, file_out)[source]#

Simple wrapper to convert a parameter file between JSON and YAML

file_extensions = ('.json', '.yml', '.yaml')#
load_params(filename)[source]#

Load parameters from file

Parameters:

filename (str) –

Return type:

RecursiveNamespace


write_params(filename, param)[source]#

Write parameter file to path

decode.utils.param_io.add_root_relative(path, root)[source]#

Adds the root to a path if the path is not absolute

Parameters:
  • path (str, Path) – path to file

  • root (str, Path) – root path

Returns:

absolute path to file

Return type:

Path
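The documented behaviour is simple to state in pathlib terms; a sketch mirroring it (not decode's exact code):

```python
from pathlib import Path

def add_root_relative(path, root):
    """Prepend root unless path is already absolute (sketch of the documented behaviour)."""
    path = Path(path)
    return path if path.is_absolute() else Path(root) / path

print(add_root_relative("param/param.yaml", "/project"))  # /project/param/param.yaml
print(add_root_relative("/abs/param.yaml", "/project"))   # /abs/param.yaml
```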

decode.utils.param_io.autofill_dict(x, reference, mode_missing='include')[source]#

Fill dict x with keys and values of reference if they are not present in x.

Parameters:
  • x (dict) – input dict to be filled

  • reference (dict) – reference dictionary

  • mode_missing (str) –

Return type:

dict
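The filling behaviour can be sketched as a recursive merge in which values from x win and missing keys come from the reference (an illustrative sketch, not decode's implementation; the parameter names in the example dicts are made up):

```python
def autofill(x: dict, reference: dict) -> dict:
    """Return a copy of x with missing keys filled in from reference (recursive)."""
    out = dict(reference)
    for k, v in x.items():
        if isinstance(v, dict) and isinstance(out.get(k), dict):
            out[k] = autofill(v, out[k])
        else:
            out[k] = v
    return out

defaults = {"camera": {"em_gain": 100.0, "baseline": 398.6}, "device": "cpu"}
user = {"camera": {"em_gain": 300.0}}

filled = autofill(user, defaults)
# user values win, missing keys come from the reference
```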

decode.utils.param_io.autoset_scaling(param)[source]#
decode.utils.param_io.copy_reference_param(path)[source]#

Copies our param references to the desired path

Parameters:

path (Union[str, Path]) – destination path, must exist and be a directory

decode.utils.param_io.load_params(file)[source]#
decode.utils.param_io.load_reference()[source]#

Loads the static reference .yaml file, which contains the full parameter sets and default values.

Return type:

dict

decode.utils.param_io.save_params(file, param)[source]#

decode.utils.types module#

class decode.utils.types.RecursiveNamespace(**kwargs)[source]#

Bases: SimpleNamespace

Extension of SimpleNamespace to recursive dictionaries. Inspired by https://dev.to/taqkarim/extending-simplenamespace-for-nested-dictionaries-58e8

keys()[source]#
static map_entry(entry)[source]#
to_dict()[source]#
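The class turns nested dictionaries into nested attribute access. A minimal reimplementation sketch following the linked dev.to pattern (illustrative, not decode's exact code; the example parameter names are made up):

```python
from types import SimpleNamespace

class RecursiveNamespaceSketch(SimpleNamespace):
    """Nested dicts become nested namespaces (illustrative sketch)."""

    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # replace dict-valued attributes with nested namespaces
        for key, value in kwargs.items():
            if isinstance(value, dict):
                setattr(self, key, RecursiveNamespaceSketch(**value))

    def to_dict(self):
        # convert back to plain nested dicts
        return {
            k: v.to_dict() if isinstance(v, RecursiveNamespaceSketch) else v
            for k, v in self.__dict__.items()
        }

param = RecursiveNamespaceSketch(**{"Camera": {"em_gain": 300.0}, "device": "cpu"})
print(param.Camera.em_gain)  # 300.0
print(param.to_dict())       # round-trips back to the plain dict
```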

Module contents#