Weights of Layers

Initialize and Freeze

Initialize weight

fan_in: number of input connections to the layer (e.g., in_features for nn.Linear)

fan_out: number of output connections from the layer (e.g., out_features for nn.Linear)
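
Initializers in torch.nn.init scale their distributions by fan_in and/or fan_out (Kaiming uses one of them, Xavier uses both). A minimal sketch of applying them with Module.apply(), assuming model_finetune is the model used in the freezing snippets below; init_weights is an illustrative helper, not a fixed API:

import torch.nn as nn

# illustrative helper, not part of the original notes
def init_weights(m: nn.Module) -> None:
    # Kaiming (He) init scales by fan_in (or fan_out) only
    if isinstance(m, nn.Conv2d):
        nn.init.kaiming_normal_(m.weight, mode='fan_in', nonlinearity='relu')
        if m.bias is not None:
            nn.init.zeros_(m.bias)
    # Xavier (Glorot) init scales by both fan_in and fan_out
    elif isinstance(m, nn.Linear):
        nn.init.xavier_uniform_(m.weight)
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# apply() calls init_weights recursively on every submodule
model_finetune.apply(init_weights)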

Freeze layers

# 1. Via named_children(): find the 'features' child module and freeze its parameters
for name, child in model_finetune.named_children():
    if name == 'features':
        for child_param in child.parameters():
            child_param.requires_grad = False

# 2. Via submodules <class 'torch.nn.modules.*'>: call requires_grad_() on each layer
for layer in model_finetune.features:
    layer.requires_grad_(False)

# 3. Via parameters <class 'torch.nn.parameter.Parameter'>: set requires_grad directly
for param in model_finetune.features.parameters():
    param.requires_grad = False

# 4. Freeze the whole submodule at once with Module.requires_grad_()
model_finetune.features.requires_grad_(False)

When you access a Parameter directly, set its parameter.requires_grad attribute to False.

When you access a Layer (Module), use the Layer.requires_grad_() method.
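
A quick check that the freeze worked, assuming model_finetune is a torchvision-style model with a features submodule as in the snippets above; the counting code is only illustrative:

# illustrative check, assuming model_finetune from the snippets above
trainable = sum(p.numel() for p in model_finetune.parameters() if p.requires_grad)
total = sum(p.numel() for p in model_finetune.parameters())
print(f'trainable params: {trainable} / {total}')

# every parameter under features should now have requires_grad == False
assert all(not p.requires_grad for p in model_finetune.features.parameters())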

# torch/nn/modules/module.py
def requires_grad_(self: T, requires_grad: bool = True) -> T:
    r"""Change if autograd should record operations on parameters in this
    module.

    This method sets the parameters' :attr:`requires_grad` attributes
    in-place.

    This method is helpful for freezing part of the module for finetuning
    or training parts of a model individually (e.g., GAN training).

    See :ref:`locally-disable-grad-doc` for a comparison between
    `.requires_grad_()` and several similar mechanisms that may be confused with it.

    Args:
        requires_grad (bool): whether autograd should record operations on
                              parameters in this module. Default: ``True``.

    Returns:
        Module: self
    """
    for p in self.parameters():
        p.requires_grad_(requires_grad)
    return self
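
As the docstring notes, this is the usual finetuning pattern: freeze the backbone with requires_grad_(False) and build the optimizer only from the parameters that still require gradients. A minimal sketch, assuming model_finetune from the snippets above and an arbitrary learning rate:

import torch

# freeze the feature extractor; only the remaining layers (e.g., the classifier) train
model_finetune.features.requires_grad_(False)

# pass only trainable parameters to the optimizer (lr/momentum are placeholder values)
optimizer = torch.optim.SGD(
    (p for p in model_finetune.parameters() if p.requires_grad),
    lr=1e-3,
    momentum=0.9,
)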