bitorch_engine.layers.qconv.binary.cpp.layer.BinaryConv2dCPP

class bitorch_engine.layers.qconv.binary.cpp.layer.BinaryConv2dCPP(*args, **kwargs)[source]

This class implements a binary convolutional layer in PyTorch, optimized with C++ extensions. It inherits from BinaryConv2dBase to leverage common binary convolution functionality while adding optimized kernels for efficient computation.

bits_binary_word

Defines the size of the binary word used when packing binarized weights, defaulting to 8 bits.

Type:

int
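Conceptually, a binary word groups several sign bits of binarized weights into one integer. The following pure-Python sketch illustrates the idea for an 8-bit word; the function name and bit ordering are assumptions for illustration, not the layer's actual C++ packing routine.

```python
def pack_binary_word(values, bits_binary_word=8):
    """Pack one group of real-valued weights into a single integer word,
    one sign bit per weight (illustrative sketch, not the C++ kernel).
    Bit i of the word holds the sign of values[i]: 1 for >= 0, 0 for < 0."""
    assert len(values) == bits_binary_word
    word = 0
    for i, v in enumerate(values):
        bit = 1 if v >= 0 else 0  # sign(v) collapsed to a single bit
        word |= bit << i
    return word

# Eight floating-point weights collapse into one 8-bit integer word:
packed = pack_binary_word([0.3, -1.2, 0.7, -0.1, 2.0, -0.5, 0.4, -3.0])
print(packed)  # -> 85 (binary 01010101: alternating signs)
```

Packing 8 weights per word is what makes the C++ path memory-efficient: a full-precision 32-bit float per weight is replaced by a single bit.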

Methods

__init__

Initializes the BinaryConv2dCPP layer with the given arguments, which are forwarded to the base class.

forward

Defines the forward pass for the binary convolution operation using the quantized weights.

generate_quantized_weight

Generates and stores quantized weights based on the current weights of the layer, utilizing a binary quantization method.

prepare_params

Prepares and initializes the model parameters for training.

Attributes

training

__init__(*args, **kwargs)[source]

Initializes the BinaryConv2dCPP layer with the given arguments, which are forwarded to the base class. Additionally, it sets up the binary word size for quantization.

Parameters:
  • *args – Variable length argument list to be passed to the BinaryConv2dBase class.

  • **kwargs – Arbitrary keyword arguments to be passed to the BinaryConv2dBase class.

forward(x: Tensor) Tensor[source]

Defines the forward pass for the binary convolution operation using the quantized weights.

Parameters:

x (torch.Tensor) – The input tensor for the convolution operation with shape (N, C_in, H, W), where N is the batch size, C_in is the number of input channels, and H and W are the height and width of the input.

Returns:

The output tensor of the convolution operation, with shape determined by the layer’s attributes and the input dimensions.

Return type:

torch.Tensor
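The output spatial dimensions follow the standard Conv2d size arithmetic. The helper below is a generic sketch of that rule (the exact attribute names on the layer are not shown in this page and are therefore not assumed here):

```python
def conv2d_output_hw(h_in, w_in, kernel_size, stride=1, padding=0, dilation=1):
    """Standard Conv2d spatial-size arithmetic, the same rule PyTorch applies:
    out = floor((in + 2*padding - dilation*(kernel - 1) - 1) / stride) + 1."""
    h_out = (h_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
    w_out = (w_in + 2 * padding - dilation * (kernel_size - 1) - 1) // stride + 1
    return h_out, w_out

# A 3x3 kernel with stride 1 and no padding shrinks a 32x32 input to 30x30:
print(conv2d_output_hw(32, 32, kernel_size=3))  # -> (30, 30)
```

So for an input of shape (N, C_in, 32, 32) and a 3x3 kernel with stride 1 and no padding, the forward pass returns a tensor of shape (N, C_out, 30, 30).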

generate_quantized_weight(qweight_only: bool = False) None[source]

Generates and stores quantized weights based on the current weights of the layer, utilizing a binary quantization method. Quantized weights are stored as a torch.nn.Parameter but are not set to require gradients.

Parameters:

qweight_only (bool) – If True, the original weights are discarded to save memory. Defaults to False.
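The binarization itself happens inside the C++ extension; as a rough mental model (an assumption about the scheme, not the library's exact code), binary quantization commonly maps each weight to its sign:

```python
def binarize(weights):
    """Sign-based binary quantization: map each real-valued weight to -1 or +1.
    Illustrative sketch only; the layer's C++ path additionally bit-packs the
    result into binary words, which is omitted here."""
    return [1.0 if w >= 0 else -1.0 for w in weights]

qweights = binarize([0.25, -0.7, 0.0, -1.3])
print(qweights)  # -> [1.0, -1.0, 1.0, -1.0]
```

Because the quantized weights are stored with requires_grad disabled, gradients continue to flow through the original full-precision weights during training; discarding those originals via qweight_only=True is therefore only appropriate for inference.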

prepare_params() None[source]

Prepares and initializes the model parameters for training. One can use the prepare_bie_layers method from project_root.utils.model_helper to call this function.