CubedReLU

Activation: max(0, x)^3.

PyTorch

API: none
Strategy: Macro 'lambda x: torch.relu(x)**3'
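A minimal sketch of the macro above, assuming standard PyTorch: negative inputs are zeroed by relu, positive inputs are cubed.

```python
import torch


def cubed_relu(x: torch.Tensor) -> torch.Tensor:
    """CubedReLU activation: max(0, x)^3."""
    return torch.relu(x) ** 3


x = torch.tensor([-2.0, 0.0, 2.0])
print(cubed_relu(x))  # tensor([0., 0., 8.])
```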

PaxML / Praxis

API: praxis.layers.CubedReLU
Strategy: Direct Mapping