bitorch_engine.utils.model_helper.prepare_bie_layers

bitorch_engine.utils.model_helper.prepare_bie_layers(model: Module, layers=None) → None

Prepares binary and n-bit quantized layers within a given model for training or inference. The function iterates over the model's modules and calls prepare_params on each one that is an instance of the specified quantized layer classes. This preparation step initializes or transforms parameters specific to quantized operations.

Parameters:
  • model (torch.nn.Module) – The model containing the layers to be prepared.

  • layers (list, optional) – A list of layer classes to be prepared. If not provided, defaults to a predefined list of binary and n-bit quantized layer classes, including both convolutional and linear layers, as well as binary embedding layers.

The function imports the necessary classes from the bitorch_engine package, covering binary and n-bit implementations of convolutional, linear, and embedding layers. If no specific layers are provided, it defaults to a comprehensive list of the available quantized layer types. Each layer in the model whose type appears in the layers list has its prepare_params method called, allowing any necessary parameter initialization or adjustment before the model is used.

This is particularly useful for models that utilize quantized layers, ensuring that all such layers are correctly set up for either training or deployment.
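The mechanism described above can be sketched in plain Python. Note that the classes below (Module, QLinear, QConv2d) are hypothetical stand-ins for torch.nn.Module and bitorch_engine's quantized layer classes, used only to illustrate the iterate-and-dispatch pattern; they are not the package's actual implementation.

```python
# Illustrative sketch of the pattern prepare_bie_layers follows: walk the
# module tree and call prepare_params() on every layer of a selected type.
# All classes here are hypothetical stand-ins, not bitorch_engine's own.

class Module:
    """Minimal stand-in for torch.nn.Module's module iteration."""
    def __init__(self):
        self._children = []

    def add(self, child):
        self._children.append(child)
        return child

    def modules(self):
        # Yield this module, then all descendants (depth-first).
        yield self
        for child in self._children:
            yield from child.modules()

class QLinear(Module):
    """Stand-in for a quantized linear layer with a prepare_params hook."""
    def __init__(self):
        super().__init__()
        self.prepared = False

    def prepare_params(self):
        # Initialize or transform quantized parameters before use.
        self.prepared = True

class QConv2d(QLinear):
    """Stand-in quantized conv layer; shares the prepare_params contract."""
    pass

# Default list of layer types to prepare, mirroring the function's fallback.
DEFAULT_QUANTIZED_LAYERS = [QLinear, QConv2d]

def prepare_bie_layers(model, layers=None):
    if layers is None:
        layers = DEFAULT_QUANTIZED_LAYERS
    for module in model.modules():
        if isinstance(module, tuple(layers)):
            module.prepare_params()
```

A usage sketch: build a small model, call prepare_bie_layers once, and every matching layer is flagged as prepared. Passing an explicit layers list restricts preparation to only those types.

```python
model = Module()
lin = model.add(QLinear())
conv = model.add(QConv2d())
prepare_bie_layers(model)          # prepares both lin and conv
```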