decode.neuralfitter.inference package#
Submodules#
decode.neuralfitter.inference.inference module#
- class decode.neuralfitter.inference.inference.Infer(model, ch_in, frame_proc, post_proc, device, batch_size='auto', num_workers=0, pin_memory=False, forward_cat='emitter')[source]#
Bases:
object
Convenience class for inference.
- Parameters:
model – PyTorch model
ch_in (int) – number of input channels
frame_proc – frame pre-processing pipeline
post_proc – post-processing pipeline
device (Union[str, device]) – device on which to run inference
batch_size (Union[int, str]) – batch size, or 'auto' if the batch size should be determined automatically (only use in combination with CUDA)
num_workers (int) – number of workers
pin_memory (bool) – pin memory in the dataloader
forward_cat (Union[str, Callable]) – method that concatenates the output batches. Can be a string or a Callable. Use 'em' when the post-processor outputs an EmitterSet, or 'frames' when you don't use post-processing or the post-processor outputs frames.
- forward(frames)[source]#
Forward frames through model, pre- and post-processing and output an EmitterSet.
- Parameters:
frames (Tensor)
- Return type:
EmitterSet
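A minimal usage sketch, assuming a trained DECODE model and matching frame_proc/post_proc pipelines are already available; the channel count and frame shape below are illustrative placeholders:

```python
import torch

from decode.neuralfitter.inference.inference import Infer

# model, frame_proc and post_proc are assumed to come from your trained DECODE setup
infer = Infer(
    model=model,
    ch_in=3,                  # e.g. a three-frame temporal window (illustrative)
    frame_proc=frame_proc,    # frame pre-processing pipeline
    post_proc=post_proc,      # post-processing pipeline
    device='cuda' if torch.cuda.is_available() else 'cpu',
    batch_size='auto',        # automatic batch sizing, only useful on CUDA
    forward_cat='emitter',    # post-processor returns an EmitterSet
)

frames = torch.rand(1000, 64, 64)   # dummy camera frames, N x H x W
emitters = infer.forward(frames)    # EmitterSet with the localisations
```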
- static get_max_batch_size(model, frame_size, limit_low, limit_high)[source]#
Get maximum batch size for inference.
- Parameters:
model (Module) – model on the correct device
frame_size (Union[tuple, Size]) – size of the frames (without batch dimension)
limit_low (int) – lower batch size limit
limit_high (int) – upper batch size limit
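A short sketch of calling the helper; the frame size and search limits are illustrative:

```python
# probe for the largest batch size that fits on the device, searching between the two limits
max_bs = Infer.get_max_batch_size(
    model=model.to('cuda'),   # model must already be on the target device
    frame_size=(3, 64, 64),   # frame size without the batch dimension
    limit_low=1,
    limit_high=1024,
)
```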
- class decode.neuralfitter.inference.inference.LiveInfer(model, ch_in, *, stream, time_wait=5, safety_buffer=20, frame_proc=None, post_proc=None, device='cpu', batch_size='auto', num_workers=0, pin_memory=False, forward_cat='emitter')[source]#
Bases:
Infer
Inference from a memory-mapped tensor, where the mapped file may still be actively written to.
- Parameters:
model – PyTorch model
ch_in (int) – number of input channels
stream – output stream; will typically receive emitters (along with their starting and stopping index)
time_wait – time to wait before polling again if the length of the mapped tensor has not changed
safety_buffer (int) – buffer distance to the end of the tensor to avoid conflicts when the file is actively being written to
frame_proc – frame pre-processing pipeline
post_proc – post-processing pipeline
device (Union[str, device]) – device on which to run inference
batch_size (Union[int, str]) – batch size, or 'auto' if the batch size should be determined automatically (only use in combination with CUDA)
num_workers (int) – number of workers
pin_memory (bool) – pin memory in the dataloader
forward_cat (Union[str, Callable]) – method that concatenates the output batches. Can be a string or a Callable. Use 'em' when the post-processor outputs an EmitterSet, or 'frames' when you don't use post-processing or the post-processor outputs frames.
- forward(frames)[source]#
Forward frames through model, pre- and post-processing and output EmitterSet
- Parameters:
frames (Union[Tensor, TiffTensor])
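A hedged sketch of live inference. The stream callback signature is an assumption based on the parameter description above (emitters plus start/stop index), and the time unit of time_wait is assumed to be seconds:

```python
from decode.neuralfitter.inference.inference import LiveInfer

def stream(em, ix_low, ix_high):
    # hypothetical consumer: persist each processed chunk as it arrives
    em.save(f'emitters_{ix_low}_{ix_high}.h5')

live = LiveInfer(
    model=model,
    ch_in=3,
    stream=stream,
    time_wait=5,          # wait before polling again if the mapped file has not grown
    safety_buffer=20,     # stay clear of the end of the file that is still being written
    frame_proc=frame_proc,
    post_proc=post_proc,
    device='cuda',
)

# frames may be a plain tensor or a memory-mapped TiffTensor that is still growing
live.forward(frames)
```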
decode.neuralfitter.inference.pred_tif module#
- class decode.neuralfitter.inference.pred_tif.PredictEval(*args, **kwargs)[source]#
Bases:
ABC
- evaluate()[source]#
Evaluate the whole prediction. Implement your own method if you need to modify something, e.g. the pixel size to get proper RMSE-vol values, then call super().evaluate().
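A sketch of the subclassing pattern the docstring describes, using PredictEvalSimulation (documented below) as the base; the prediction attribute and the pixel-size values are hypothetical:

```python
import torch

class PxSizeAwarePredictEval(PredictEvalSimulation):
    def evaluate(self):
        # hypothetical adjustment: scale coordinates to physical units (e.g. nm per px)
        # so that volume metrics such as RMSE-vol are meaningful
        self.prediction.xyz *= torch.tensor([100., 100., 1.])
        return super().evaluate()
```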
- class decode.neuralfitter.inference.pred_tif.PredictEvalSimulation(*args, **kwargs)[source]#
Bases:
PredictEval
- class decode.neuralfitter.inference.pred_tif.PredictEvalTif(tif_stack, activations, model, post_processor, frame_proc, evaluator=None, device='cuda', batch_size=32, frame_window=3)[source]#
Bases:
PredictEval
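A construction sketch based purely on the signature above; the file paths and the format expected for activations are assumptions:

```python
pred = PredictEvalTif(
    tif_stack='frames.tif',          # raw frame stack on disk (path assumed)
    activations='activations.csv',   # activation / ground-truth list (format assumed)
    model=model,
    post_processor=post_proc,
    frame_proc=frame_proc,
    evaluator=None,
    device='cuda',
    batch_size=32,
    frame_window=3,
)
```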