giagrad.nn.DropoutND#

class giagrad.nn.DropoutND(*args, **kwargs)[source]#

Randomly zeroes entire slices of the input tensor with probability p.

During training, each slice along the specified dimension is zeroed with probability \(p\), and the surviving elements are scaled up by a factor of \(\frac{1}{1-p}\) to preserve the expected value of the output. During inference, the dropout layer does not modify the input tensor.
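The training/inference behavior described above is standard "inverted dropout". As a minimal sketch (assumed semantics, not giagrad's actual implementation), the element-wise version looks like this:

```python
import numpy as np

def dropout_train(x, p, rng):
    # Keep each element with probability 1 - p, and rescale survivors
    # by 1 / (1 - p) so the expected value of every element is unchanged.
    mask = rng.random(x.shape) >= p
    return np.where(mask, x / (1.0 - p), 0.0)

def dropout_eval(x, p):
    # Inference: identity, no zeroing and no rescaling.
    return x

rng = np.random.default_rng(0)
x = np.ones((4, 4))
y = dropout_train(x, 0.5, rng)
# every entry of y is either 0.0 (dropped) or 2.0 (kept and rescaled)
```

With p=0.5 and all-ones input, each output entry is 0.0 or 2.0, so the mean stays near 1.0 in expectation.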

In a tensor of shape \((N, C, H, W)\), 1D, 2D or even 3D slices can be zeroed; the dimensionality of the zeroed slices is controlled by the parameter dim. If no dimension is supplied, entire channels are zeroed by default.
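One way to realize this per-slice masking, assuming dim counts the trailing axes that make up each slice (consistent with the example below, where dim=1 zeroes rows of length W), is to draw one Bernoulli sample per slice and broadcast it. This is a hedged sketch, not giagrad's code:

```python
import numpy as np

def slice_mask(shape, dim, p, rng):
    # One keep/drop draw per slice: collapse the last `dim` axes to size 1
    # so the mask broadcasts over every element of each slice.
    mask_shape = shape[:len(shape) - dim] + (1,) * dim
    return rng.random(mask_shape) >= p

rng = np.random.default_rng(0)
# For a (2, 2, 2, 3) tensor and dim=1, each length-3 row shares one decision.
m = slice_mask((2, 2, 2, 3), dim=1, p=0.5, rng=rng)
```

Broadcasting the (2, 2, 2, 1) mask against the (2, 2, 2, 3) input zeroes whole rows at once, which matches the all-or-nothing rows in the example output below.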

Inherits from: Module.

See also

Dropout

For a more efficient way of zeroing individual scalars anywhere in the tensor.

train(), eval()

Variables:
  • p (float, default: 0.5) – Probability of each slice being zeroed.

  • dim (int, optional) – Dimensionality of the slices to be zeroed during training.

Examples

>>> a = Tensor.empty(2, 2, 2, 3).ones()
>>> a
tensor: [[[[1. 1. 1.]
           [1. 1. 1.]]
...
          [[1. 1. 1.]
           [1. 1. 1.]]]
...
...
         [[[1. 1. 1.]
           [1. 1. 1.]]
...
          [[1. 1. 1.]
           [1. 1. 1.]]]]

Setting dim = 1, 1D slices (rows) are zeroed independently within each channel.

>>> dropout = nn.DropoutND(p=0.5, dim=1)
>>> dropout(a)
tensor: [[[[0. 0. 0.]
           [0. 0. 0.]]
...
          [[2. 2. 2.]
           [2. 2. 2.]]]
...
...
         [[[2. 2. 2.]
           [0. 0. 0.]]
...
          [[2. 2. 2.]
           [0. 0. 0.]]]] grad_fn: DropoutND(p=0.5, dim=1)