giagrad.nn.Dropout#

class giagrad.nn.Dropout(*args, **kwargs)[source]#

Randomly zeroes some elements of the input tensor during training, using samples from a Bernoulli distribution with probability p.

Each element is zeroed out independently on every forward call. This technique has proven effective for regularization and for preventing the co-adaptation of neurons, as described in the paper Improving neural networks by preventing co-adaptation of feature detectors.

Additionally, during training, the output is scaled by a factor of \(\frac{1}{1-p}\) so that the expected value of each activation is preserved. During evaluation, the module computes the identity function.
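The masking and rescaling described above can be sketched in plain NumPy (an illustrative reimplementation, not giagrad's actual code; `dropout_forward` is a hypothetical helper):

```python
import numpy as np

def dropout_forward(x, p=0.5, training=True, rng=None):
    # Illustrative sketch of inverted dropout, not giagrad's implementation.
    # During evaluation the layer is the identity function.
    if not training or p == 0.0:
        return x
    rng = np.random.default_rng() if rng is None else rng
    # Bernoulli mask: each element is kept with probability 1 - p.
    mask = rng.random(x.shape) >= p
    # Surviving elements are scaled by 1 / (1 - p) so the expected
    # value of each activation matches the evaluation-mode output.
    return np.where(mask, x / (1.0 - p), 0.0)

a = np.ones((2, 3))
out = dropout_forward(a, p=0.5)
# With p=0.5 on a tensor of ones, each output element is either 0.0 or 2.0.
```

Note that with this "inverted" scaling convention, no correction is needed at evaluation time.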

Variables:

p (float, default: 0.5) – Probability of an element being zeroed.

Examples

>>> a = Tensor.empty(2, 3).ones()
>>> dropout = nn.Dropout(p=0.5)
>>> dropout(a)
tensor: [[0. 2. 0.]
        [2. 0. 0.]] grad_fn: Dropout(p=0.5)