AdamWΒΆ

Optimizer that implements the AdamW algorithm (Adam with decoupled weight decay; Loshchilov & Hutter, 2019).

Abstract Signature:

AdamW(learning_rate: float = 0.001, weight_decay: float = 0.004, beta_1: float = 0.9, beta_2: float = 0.999, epsilon: float = 1e-07, amsgrad: bool = False)
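For reference, these parameters configure the decoupled weight-decay update from Loshchilov & Hutter (2019), where η is learning_rate, λ is weight_decay, and g_t is the gradient at step t; with amsgrad=True, v̂_t is replaced by a running elementwise maximum of past v̂ values:

$$
\begin{aligned}
m_t &= \beta_1 m_{t-1} + (1-\beta_1)\, g_t \\
v_t &= \beta_2 v_{t-1} + (1-\beta_2)\, g_t^2 \\
\hat m_t &= m_t/(1-\beta_1^t), \qquad \hat v_t = v_t/(1-\beta_2^t) \\
\theta_t &= \theta_{t-1} - \eta \left( \frac{\hat m_t}{\sqrt{\hat v_t} + \epsilon} + \lambda\, \theta_{t-1} \right)
\end{aligned}
$$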

PyTorch

API: torch.optim.AdamW
Strategy: Direct Mapping
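A minimal sketch of the mapping, assuming a toy torch.nn.Linear model for illustration. beta_1 and beta_2 fold into PyTorch's betas tuple and epsilon maps to eps; note that torch's own defaults differ from the abstract signature (eps=1e-8, weight_decay=0.01), so the abstract defaults are passed explicitly:

```python
import torch

# Toy model for illustration (an assumption of this sketch)
model = torch.nn.Linear(10, 1)

# Mapping: learning_rate -> lr, (beta_1, beta_2) -> betas,
# epsilon -> eps; weight_decay and amsgrad map directly.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=0.001,
    betas=(0.9, 0.999),
    eps=1e-07,
    weight_decay=0.004,
    amsgrad=False,
)
```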

JAX (Core)

API: optax.adamw
Strategy: Direct Mapping
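A minimal sketch, assuming an illustrative parameter pytree. The abstract names map as beta_1 → b1, beta_2 → b2, and epsilon → eps; optax's own weight_decay default is 1e-4, and optax.adamw exposes no amsgrad flag (optax provides AMSGrad as the separate optax.amsgrad transform):

```python
import jax.numpy as jnp
import optax

# Mapping: learning_rate -> learning_rate, beta_1 -> b1,
# beta_2 -> b2, epsilon -> eps, weight_decay -> weight_decay.
optimizer = optax.adamw(
    learning_rate=0.001,
    b1=0.9,
    b2=0.999,
    eps=1e-07,
    weight_decay=0.004,
)

# Illustrative parameter tree (an assumption of this sketch)
params = {"w": jnp.ones((10, 1)), "b": jnp.zeros((1,))}
opt_state = optimizer.init(params)
```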

Keras

API: keras.optimizers.AdamW
Strategy: Direct Mapping
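The abstract signature mirrors the Keras API one-to-one, including defaults, so construction is direct (shown here with every default spelled out):

```python
import keras

# All keyword names and defaults match the abstract signature.
optimizer = keras.optimizers.AdamW(
    learning_rate=0.001,
    weight_decay=0.004,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```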

TensorFlow

API: tf.keras.optimizers.AdamW
Strategy: Direct Mapping
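Identical to the Keras entry under the tf.keras namespace; a sketch assuming TF 2.11+, where AdamW is available without an add-on package:

```python
import tensorflow as tf

# Same one-to-one mapping as keras.optimizers.AdamW.
optimizer = tf.keras.optimizers.AdamW(
    learning_rate=0.001,
    weight_decay=0.004,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```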

Apple MLX

API: mlx.optimizers.AdamW
Strategy: Direct Mapping
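A minimal sketch of the mapping: beta_1 and beta_2 fold into MLX's betas pair and epsilon maps to eps. MLX's own defaults (eps=1e-8, weight_decay=0.01) differ from the abstract signature, and no amsgrad flag is exposed:

```python
import mlx.optimizers as optim

# Mapping: learning_rate -> learning_rate, (beta_1, beta_2) -> betas,
# epsilon -> eps, weight_decay -> weight_decay; amsgrad is unavailable.
optimizer = optim.AdamW(
    learning_rate=0.001,
    betas=[0.9, 0.999],
    eps=1e-07,
    weight_decay=0.004,
)
```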

Flax NNX

API: optax.adamw
Strategy: Direct Mapping
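Flax NNX reuses optax.adamw, so the JAX (Core) mapping above applies unchanged; the transform is bound to a model through nnx.Optimizer. A minimal sketch, assuming a toy nnx.Linear model and a recent Flax release where nnx.Optimizer accepts a wrt filter:

```python
import optax
from flax import nnx

# Toy model for illustration (an assumption of this sketch)
model = nnx.Linear(10, 1, rngs=nnx.Rngs(0))

# Same parameter mapping as the JAX (Core) entry; amsgrad likewise
# has no direct flag on optax.adamw.
tx = optax.adamw(
    learning_rate=0.001,
    b1=0.9,
    b2=0.999,
    eps=1e-07,
    weight_decay=0.004,
)
optimizer = nnx.Optimizer(model, tx, wrt=nnx.Param)
```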