    Attention Mechanism

    The attention mechanism lets a model focus on different parts of its input when producing each part of the output. It is the core of the transformer architecture and is what enables models to handle long sequences and make use of rich context.

    In Simple Terms

    Think of it as a spotlight: for each word it generates, the model shines a light on the most relevant parts of the input.

    Detailed Explanation

    Instead of compressing the entire input into a single fixed-length vector, attention computes a set of weights over the input positions (e.g., which words matter most for the word currently being produced). This lets the model reference relevant context anywhere in the sequence, no matter how far away. Variants include self-attention (positions within one sequence attend to each other) and cross-attention (positions in one sequence attend to another, such as a decoder attending to an encoder). Attention is why modern LLMs can use long contexts and why they can appear to "focus" on specific passages, and it is a key concept for understanding how transformers work. A small sketch of the computation follows.
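    As a rough illustration of the weighting described above, here is a minimal NumPy sketch of scaled dot-product attention. It is not code from this page: the function names, the toy input, and the choice of self-attention (queries, keys, and values all taken from the same sequence) are assumptions made for the example.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    # Q: (n_queries, d_k), K: (n_keys, d_k), V: (n_keys, d_v)
    d_k = Q.shape[-1]
    # Similarity between every query and every key, scaled by sqrt(d_k)
    scores = Q @ K.T / np.sqrt(d_k)
    # Each row becomes a probability distribution over input positions:
    # these are the attention weights ("where the spotlight points")
    weights = softmax(scores, axis=-1)
    # Output is a weighted average of the value vectors
    return weights @ V, weights

# Toy self-attention: a sequence of 4 token vectors attends to itself
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))   # 4 tokens, embedding dimension 8
output, attn = scaled_dot_product_attention(X, X, X)
print(attn.round(2))          # each row sums to 1: how much each token attends to every other token
```

    In cross-attention the only change is that Q would come from one sequence and K, V from another; the weighting itself works the same way.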
