SGDΒΆ
Gradient descent (with momentum) optimizer.
Abstract Signature:
SGD(learning_rate: float = 0.01, momentum: float = 0.0, nesterov: bool = False, weight_decay: float | None = None)
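The sketch below shows the conventional momentum update this signature describes: a velocity accumulates a decaying sum of past gradients, and the Nesterov variant applies the momentum "look-ahead" correction. The function and argument names mirror the abstract signature above but are illustrative assumptions, not this library's actual API.

```python
import numpy as np

def sgd_update(param, grad, velocity, learning_rate=0.01,
               momentum=0.0, nesterov=False, weight_decay=None):
    """One SGD step with optional momentum, Nesterov, and weight decay.

    A minimal sketch of the standard update rule; illustrative only,
    not the library's implementation.
    """
    if weight_decay is not None:
        # Fold an L2 penalty into the gradient (assumed semantics).
        grad = grad + weight_decay * param
    # Accumulate velocity: decay the old velocity, subtract the scaled gradient.
    velocity = momentum * velocity - learning_rate * grad
    if nesterov:
        # Nesterov look-ahead: step along the updated velocity plus
        # an extra gradient correction.
        param = param + momentum * velocity - learning_rate * grad
    else:
        param = param + velocity
    return param, velocity

# Example step on a toy parameter vector:
w = np.array([1.0, -2.0])
g = np.array([0.1, 0.3])
v = np.zeros_like(w)
w, v = sgd_update(w, g, v, learning_rate=0.01, momentum=0.9)
```

With `momentum=0.0` this reduces to plain gradient descent, `param -= learning_rate * grad`.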