SGD

Gradient descent (with momentum) optimizer.

Abstract Signature:

SGD(learning_rate: float = 0.01, momentum: float = 0.0, nesterov: bool = False, weight_decay: float = 0.0)
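
These parameters correspond to the classic SGD-with-momentum update. As a rough reference only (a minimal sketch; frameworks differ in how they combine momentum, dampening, Nesterov correction, and weight decay), one common convention is:

def sgd_step(param, grad, velocity, learning_rate=0.01, momentum=0.0,
             nesterov=False, weight_decay=0.0):
    # Coupled (L2-style) weight decay: fold the decay into the gradient.
    # Some frameworks instead apply decoupled weight decay.
    grad = grad + weight_decay * param
    # Accumulate the momentum buffer.
    velocity = momentum * velocity + grad
    if nesterov:
        # Nesterov look-ahead correction.
        update = grad + momentum * velocity
    else:
        update = velocity
    return param - learning_rate * update, velocity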

PyTorch

API: torch.optim.SGD
Strategy: Direct Mapping
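
A minimal usage sketch, assuming the abstract names map onto torch.optim.SGD's lr, momentum, nesterov, and weight_decay keywords (the model and tensor shapes below are illustrative):

import torch
import torch.nn as nn

model = nn.Linear(4, 2)
# learning_rate maps to lr; the remaining keywords match by name.
# PyTorch requires momentum > 0 (and zero dampening) when nesterov=True.
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9,
                            nesterov=True, weight_decay=1e-4)

loss = model(torch.randn(8, 4)).sum()
loss.backward()
optimizer.step()
optimizer.zero_grad()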

JAX (Core)

API: optax.sgd
Strategy: Direct Mapping
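
A minimal sketch using optax directly. learning_rate, momentum, and nesterov map onto optax.sgd by name; optax.sgd has no weight_decay argument, so decay would have to be added as a separate chained transformation:

import jax
import jax.numpy as jnp
import optax

params = {"w": jnp.ones((4, 2)), "b": jnp.zeros(2)}
tx = optax.sgd(learning_rate=0.01, momentum=0.9, nesterov=True)
opt_state = tx.init(params)

def loss_fn(p, x):
    return jnp.sum(x @ p["w"] + p["b"])

grads = jax.grad(loss_fn)(params, jnp.ones((8, 4)))
# Compute the SGD update and apply it to the parameter tree.
updates, opt_state = tx.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)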

Keras

API: keras.optimizers.SGD
Strategy: Direct Mapping
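
A minimal sketch, assuming a Keras 3-style API where weight_decay is accepted by the base optimizer class:

import keras

# The abstract keywords map onto keras.optimizers.SGD by name.
optimizer = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                                 nesterov=True, weight_decay=1e-4)

model = keras.Sequential([keras.layers.Dense(2)])
model.compile(optimizer=optimizer, loss="mse")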

TensorFlow

API: tf.keras.optimizers.SGD
Strategy: Direct Mapping
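
The same optimizer exposed through the TensorFlow namespace. A minimal sketch, assuming a recent TF release where weight_decay is supported on the Keras optimizer:

import tensorflow as tf

opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9,
                              nesterov=True, weight_decay=1e-4)

var = tf.Variable([1.0, 2.0])
with tf.GradientTape() as tape:
    loss = tf.reduce_sum(var ** 2)
# Apply one SGD step to the tracked variable.
grads = tape.gradient(loss, [var])
opt.apply_gradients(zip(grads, [var]))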

Apple MLX

API: mlx.optimizers.SGD
Strategy: Direct Mapping
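
A minimal sketch, assuming mlx.optimizers.SGD accepts the same keyword names (learning_rate, momentum, nesterov, weight_decay):

import mlx.core as mx
import mlx.nn as nn
import mlx.optimizers as optim

model = nn.Linear(4, 2)
optimizer = optim.SGD(learning_rate=0.01, momentum=0.9,
                      nesterov=True, weight_decay=1e-4)

def loss_fn(model, x):
    return model(x).sum()

# value_and_grad returns both the loss and the parameter gradients.
loss, grads = nn.value_and_grad(model, loss_fn)(model, mx.ones((8, 4)))
optimizer.update(model, grads)
mx.eval(model.parameters(), optimizer.state)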

Flax NNX

API: optax.sgd
Strategy: Direct Mapping
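
Flax NNX has no SGD class of its own; it wraps the optax transformation. A minimal sketch, assuming an NNX release where nnx.Optimizer(model, tx) and optimizer.update(grads) take this form (the wrapper API has shifted across Flax versions):

import jax.numpy as jnp
import optax
from flax import nnx

model = nnx.Linear(4, 2, rngs=nnx.Rngs(0))
optimizer = nnx.Optimizer(model, optax.sgd(learning_rate=0.01, momentum=0.9,
                                           nesterov=True))

def loss_fn(model):
    return jnp.sum(model(jnp.ones((8, 4))))

grads = nnx.grad(loss_fn)(model)
optimizer.update(grads)  # applies the optax.sgd update to the model's parameters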