lrn_layer {CNN}        R Documentation
Description

ReLU neurons have unbounded activations, so LRN is used to normalize them. We want to detect high-frequency features with a large response: if we normalize over the local neighborhood of an excited neuron, it becomes even more sensitive relative to its neighbors.
At the same time, LRN dampens responses that are uniformly large in a given local neighborhood; if all the values are large, normalizing diminishes all of them. In effect, we want to encourage a form of lateral inhibition, boosting neurons whose activations are large relative to their surroundings. This is discussed in Section 3.3 of the original paper by Krizhevsky et al.
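For reference, Section 3.3 of that paper defines the normalized activation as

b^i_{x,y} = a^i_{x,y} / (k + alpha * sum_{j = max(0, i - n/2)}^{min(N-1, i + n/2)} (a^j_{x,y})^2)^beta

where the sum runs over the n channels adjacent to channel i. The sketch below implements this cross-channel normalization in plain R; the constants k = 2, alpha = 1e-4, and beta = 0.75 are the values reported in the paper and are assumptions here, not documented defaults of lrn_layer().

# Cross-channel LRN on an H x W x C activation array
# (a minimal sketch, not the package's implementation).
lrn <- function(a, n = 5, k = 2, alpha = 1e-4, beta = 0.75) {
  C <- dim(a)[3]
  b <- array(0, dim = dim(a))
  half <- floor(n / 2)
  for (i in seq_len(C)) {
    lo <- max(1, i - half)  # clamp the neighborhood at the
    hi <- min(C, i + half)  # first and last channel
    # sum of squared activations over the n nearest channels
    sq_sum <- apply(a[, , lo:hi, drop = FALSE]^2, c(1, 2), sum)
    b[, , i] <- a[, , i] / (k + alpha * sq_sum)^beta
  }
  b
}

# A uniformly large channel is damped, while its relative lead
# over quieter neighboring channels is preserved.
set.seed(1)
a <- array(runif(4 * 4 * 8), dim = c(4, 4, 8))
a[, , 3] <- 10  # a channel with a uniformly large response
b <- lrn(a, n = 5)
range(b[, , 3])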
Usage

lrn_layer(n = 5)
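The n argument presumably corresponds to the neighborhood size n in the formula above, i.e. the number of adjacent channels summed over; n = 5 is the value used by Krizhevsky et al.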