bitorch_engine.utils.model_helper.load_checkpoint

bitorch_engine.utils.model_helper.load_checkpoint(model: Module, checkpoint_path: str, qweight_only: bool = True) → None

Loads a checkpoint into a given model. If the model uses quantized weights, weight packing is applied to the model first; the model's state dict is then loaded from the provided checkpoint path. This is particularly useful for models with quantized weights, allowing you to load only the packed quantized weights for inference, or both quantized and unpacked weights to continue training.

Parameters:
  • model – The model into which the checkpoint will be loaded. This model should use quantized layers if qweight_only is set to True.

  • checkpoint_path – The file path to the checkpoint from which the model state will be loaded.

  • qweight_only – A boolean flag indicating whether to pack and load only the quantized weights (True) or to also load the unpacked weights, which is useful for resuming training (False). Defaults to True, meaning only quantized weights are considered.
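
A minimal usage sketch, assuming the checkpoint was saved as a state dict (e.g. via torch.save(model.state_dict(), ...)) and using a hypothetical build_quantized_model() helper as a stand-in for however you construct a model with bitorch_engine quantized layers; the checkpoint path is illustrative:

from bitorch_engine.utils.model_helper import load_checkpoint

# Hypothetical helper: builds a model that uses bitorch_engine quantized layers.
model = build_quantized_model()

# Inference: pack the quantized weights and load only them from the checkpoint.
load_checkpoint(model, "checkpoint.pth", qweight_only=True)
model.eval()

# Resuming training: also load the unpacked weights so optimization can continue.
# model = build_quantized_model()
# load_checkpoint(model, "checkpoint.pth", qweight_only=False)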