decode.neuralfitter.inference package#

Submodules#

decode.neuralfitter.inference.inference module#

class decode.neuralfitter.inference.inference.Infer(model, ch_in, frame_proc, post_proc, device, batch_size='auto', num_workers=0, pin_memory=False, forward_cat='emitter')[source]#

Bases: object

Convenience class for inference.

Parameters:
  • model – pytorch model

  • ch_in (int) – number of input channels

  • frame_proc – frame pre-processing pipeline

  • post_proc – post-processing pipeline

  • device (Union[str, device]) – device where to run inference

  • batch_size (Union[int, str]) – batch size, or ‘auto’ to determine the batch size automatically (only use in combination with CUDA)

  • num_workers (int) – number of workers

  • pin_memory (bool) – pin memory in dataloader

  • forward_cat (Union[str, Callable]) – method which concatenates the output batches. Can be a string or a Callable. Use 'em' when the post-processor outputs an EmitterSet, or 'frames' when you don't use post-processing or the post-processor outputs frames.
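
A minimal usage sketch. Assumptions: model, frame_proc and post_proc stand in for a trained DECODE network and its processing pipelines, and the frame stack shape is illustrative:

import torch
from decode.neuralfitter.inference.inference import Infer

infer = Infer(
    model=model,            # placeholder: trained DECODE network
    ch_in=3,                # e.g. a temporal window of 3 frames
    frame_proc=frame_proc,  # placeholder: frame pre-processing pipeline
    post_proc=post_proc,    # placeholder: post-processing pipeline
    device="cuda:0",
    batch_size="auto",      # determine the batch size automatically (CUDA only)
    forward_cat="emitter",  # concatenate the batch outputs into one EmitterSet
)

frames = torch.rand(100, 64, 64)  # dummy N x H x W frame stack
emitters = infer.forward(frames)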

forward(frames)[source]#

Forward frames through pre-processing, model and post-processing, and output an EmitterSet

Parameters:

frames (Tensor) –

Return type:

EmitterSet

static get_max_batch_size(model, frame_size, limit_low, limit_high)[source]#

Get maximum batch size for inference.

Parameters:
  • model (Module) – model on correct device

  • frame_size (Union[tuple, Size]) – size of frames (without batch dimension)

  • limit_low (int) – lower batch size limit

  • limit_high (int) – upper batch size limit
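
A hedged sketch of probing the maximum batch size directly; the frame size and limits are illustrative, and the model must already live on the target device:

from decode.neuralfitter.inference.inference import Infer

max_bs = Infer.get_max_batch_size(
    model=model.to("cuda:0"),  # model on the device that will run inference
    frame_size=(3, 64, 64),    # C x H x W, without the batch dimension
    limit_low=1,               # smallest batch size to consider
    limit_high=1024,           # largest batch size to consider
)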

class decode.neuralfitter.inference.inference.LiveInfer(model, ch_in, *, stream, time_wait=5, safety_buffer=20, frame_proc=None, post_proc=None, device='cpu', batch_size='auto', num_workers=0, pin_memory=False, forward_cat='emitter')[source]#

Bases: Infer

Inference from a memory-mapped tensor, where the mapped file may still be actively written to.

Parameters:
  • model – pytorch model

  • ch_in (int) – number of input channels

  • stream – output stream; will typically receive emitters (along with their starting and stopping frame index)

  • time_wait – time to wait if the length of the mapped tensor has not changed

  • safety_buffer (int) – buffer distance to the end of the tensor to avoid conflicts when the file is actively being written to

  • frame_proc – frame pre-processing pipeline

  • post_proc – post-processing pipeline

  • device (Union[str, device]) – device where to run inference

  • batch_size (Union[int, str]) – batch size, or ‘auto’ to determine the batch size automatically (only use in combination with CUDA)

  • num_workers (int) – number of workers

  • pin_memory (bool) – pin memory in dataloader

  • forward_cat (Union[str, Callable]) – method which concatenates the output batches. Can be a string or a Callable. Use 'em' when the post-processor outputs an EmitterSet, or 'frames' when you don't use post-processing or the post-processor outputs frames.
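
A sketch of live inference on a growing acquisition file. The stream callback signature and the mapped tensor are assumptions based on the parameter descriptions above:

from decode.neuralfitter.inference.inference import LiveInfer

def stream(emitters, ix_low, ix_high):
    # Hypothetical consumer: receives the emitters fitted in the frame
    # range [ix_low, ix_high) while the file is still being written.
    print(f"frames {ix_low}-{ix_high}: {len(emitters)} emitters")

live = LiveInfer(
    model=model,            # placeholder: trained DECODE network
    ch_in=3,
    stream=stream,
    time_wait=5,            # re-check the mapped tensor if its length is unchanged
    safety_buffer=20,       # keep distance to the (growing) end of the tensor
    frame_proc=frame_proc,  # placeholder pipelines, as for Infer
    post_proc=post_proc,
    device="cuda:0",
)
live.forward(mapped_frames)  # e.g. a memory-mapped tensor or TiffTensor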

forward(frames)[source]#

Forward frames through pre-processing, model and post-processing, and output an EmitterSet

Parameters:

frames (Union[Tensor, TiffTensor]) –

decode.neuralfitter.inference.pred_tif module#

class decode.neuralfitter.inference.pred_tif.PredictEval(*args, **kwargs)[source]#

Bases: ABC

evaluate()[source]#

Evaluate the whole prediction. Implement your own method if you need to modify something, e.g. the pixel size to get proper RMSE-vol values, and then call super().evaluate().

forward(output_raw=False)[source]#

Parameters:

output_raw (bool) – save and output the raw frames

Returns:

emitterset (and raw frames if specified).
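
A short usage sketch, assuming a concrete subclass instance pred (e.g. PredictEvalTif below) and assuming the raw frames are returned alongside the EmitterSet when requested:

em = pred.forward()                       # EmitterSet only
em, raw = pred.forward(output_raw=True)   # additionally keep the raw frames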

forward_raw()[source]#

Forwards the data through the model but without post-processing

Returns:

raw_frames (torch.Tensor)

class decode.neuralfitter.inference.pred_tif.PredictEvalSimulation(*args, **kwargs)[source]#

Bases: PredictEval

class decode.neuralfitter.inference.pred_tif.PredictEvalTif(tif_stack, activations, model, post_processor, frame_proc, evaluator=None, device='cuda', batch_size=32, frame_window=3)[source]#

Bases: PredictEval

init_dataset(frames=None)[source]#

Initialise the dataset, usually from preloaded frames, but you can overwrite this.

Parameters:

frames – frames of size N C(=1) H W

static load_csv(activation_file, verbose=False)[source]#

load_tif_csv()[source]#
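
A rough end-to-end sketch of the TIFF workflow under stated assumptions: the file paths are hypothetical, and load_tif_csv is assumed to load the frames and activations passed to the constructor before the dataset is initialised:

from decode.neuralfitter.inference.pred_tif import PredictEvalTif

pred = PredictEvalTif(
    tif_stack="frames.tif",         # hypothetical path to the frame stack
    activations="activations.csv",  # hypothetical ground-truth activations
    model=model,                    # placeholder: trained DECODE network
    post_processor=post_proc,
    frame_proc=frame_proc,
    evaluator=None,
    device="cuda",
    batch_size=32,
    frame_window=3,
)
pred.load_tif_csv()   # load frames and activations from the given files
pred.init_dataset()   # build the dataset from the preloaded frames
emitters = pred.forward()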

Module contents#