AdditiveAttention

Additive attention layer, a.k.a. Bahdanau-style attention.

Abstract Signature:

AdditiveAttention(use_scale: bool = True, dropout: float = 0.0)
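To make the abstract behavior concrete, here is a minimal framework-neutral sketch of Bahdanau-style scoring in NumPy. The projection matrices `w_q`, `w_k` and the score vector `v` are illustrative names, not part of the abstract signature above; the classic formulation scores each query/key pair as `v · tanh(q W_q + k W_k)`, then softmaxes over keys:

```python
import numpy as np

def additive_attention(query, key, value, w_q, w_k, v):
    """Bahdanau-style (additive) attention for one sequence pair.

    query: (Tq, d), key: (Tv, d), value: (Tv, dv)
    w_q, w_k: (d, units), v: (units,)
    """
    q = query @ w_q                                        # (Tq, units)
    k = key @ w_k                                          # (Tv, units)
    # scores[i, j] = v . tanh(q_i + k_j), via broadcasting
    scores = np.tanh(q[:, None, :] + k[None, :, :]) @ v    # (Tq, Tv)
    # Numerically stable softmax over the key axis
    e = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = e / e.sum(axis=-1, keepdims=True)            # (Tq, Tv)
    return weights @ value, weights                        # (Tq, dv), (Tq, Tv)

rng = np.random.default_rng(0)
query = rng.normal(size=(3, 4))
key = rng.normal(size=(5, 4))
value = rng.normal(size=(5, 2))
w_q = rng.normal(size=(4, 8))
w_k = rng.normal(size=(4, 8))
v = rng.normal(size=(8,))
context, weights = additive_attention(query, key, value, w_q, w_k, v)
```

The `tanh` of a sum is what distinguishes additive attention from dot-product attention, which scores pairs by `q · k` instead.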

PyTorch

API: β€”
Strategy: Custom / Partial
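Since PyTorch has no built-in additive attention layer, a custom module is needed. The sketch below is one possible implementation, not a torch API; it mirrors the abstract signature (`use_scale` adds a learned per-feature scale, `dropout` is applied to the attention weights) and assumes query/key projections happen outside the layer:

```python
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Minimal Bahdanau-style attention (sketch; not part of torch.nn)."""

    def __init__(self, dim, use_scale=True, dropout=0.0):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(dim)) if use_scale else None
        self.dropout = nn.Dropout(dropout)

    def forward(self, query, key, value):
        # query: (B, Tq, dim), key/value: (B, Tv, dim)
        # scores[b, i, j] = sum(scale * tanh(q_i + k_j)) over the feature axis
        combined = torch.tanh(query.unsqueeze(2) + key.unsqueeze(1))  # (B, Tq, Tv, dim)
        if self.scale is not None:
            combined = combined * self.scale
        scores = combined.sum(dim=-1)                                 # (B, Tq, Tv)
        weights = self.dropout(torch.softmax(scores, dim=-1))
        return torch.bmm(weights, value)                              # (B, Tq, dim)

attn = AdditiveAttention(dim=8)
out = attn(torch.randn(2, 3, 8), torch.randn(2, 5, 8), torch.randn(2, 5, 8))
```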

JAX (Core)

API: β€”
Strategy: Custom / Partial

Keras

API: keras.layers.AdditiveAttention
Strategy: Direct Mapping
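For the direct mapping, `keras.layers.AdditiveAttention` takes the layer's inputs as a list `[query, value]` (or `[query, value, key]`; the key defaults to the value). A brief usage sketch:

```python
import numpy as np
import keras

query = np.random.normal(size=(2, 3, 8)).astype("float32")  # (batch, Tq, dim)
value = np.random.normal(size=(2, 5, 8)).astype("float32")  # (batch, Tv, dim)

layer = keras.layers.AdditiveAttention(use_scale=True, dropout=0.0)
out = layer([query, value])  # key defaults to value; output is (batch, Tq, dim)
```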

TensorFlow

API: tf.keras.layers.AdditiveAttention
Strategy: Direct Mapping

Apple MLX

API: β€”
Strategy: Custom / Partial

Flax NNX

API: —
Strategy: Custom / Partial (the closest built-in, flax.linen.attention.dot_product_attention, computes dot-product rather than additive attention)

PaxML / Praxis

API: β€”
Strategy: Custom / Partial