ai-ml pattern

RAG Systems

Retrieval-Augmented Generation (RAG) grounds LLM responses in your own data instead of relying on model memory alone. Essential for enterprise AI.

Time

Approximately O(log n) per query with an approximate nearest-neighbor vector index (e.g. HNSW)

Space

O(n) for embedding storage (one vector per chunk)

🧠Mental Model

An open-book exam - the LLM can look up answers in your documents before responding.

Verbal cue: Retrieve relevant context, then generate informed answers.
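The retrieve-then-generate loop can be sketched end to end in a few lines. This is a toy illustration, not a production pipeline: the `embed` function below is a hashed bag-of-words stand-in for a real embedding model, and the final "generation" step just assembles the prompt that would be sent to an LLM.

```python
import math

STOPWORDS = {"the", "is", "a", "an", "what", "on", "are", "our", "in"}

def embed(text, dims=64):
    # Toy embedding: hashed bag-of-words. A real system would call an
    # embedding model (e.g. a sentence transformer) here instead.
    vec = [0.0] * dims
    for token in text.lower().replace("?", "").replace(".", "").split():
        if token not in STOPWORDS:
            vec[hash(token) % dims] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank every document by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def answer(query, docs):
    # "Generate" step: stuff retrieved context into the prompt.
    # A real system would send this prompt to an LLM.
    context = "\n".join(retrieve(query, docs))
    return f"Context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The office is closed on public holidays.",
    "Support tickets are answered within 24 hours.",
]
print(answer("What is the refund policy?", docs))
```

The open-book analogy maps directly: `retrieve` is the look-up, the prompt assembly is handing the relevant pages to the model before it writes its answer.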

🎯Recognition Triggers

When you see these patterns in a problem, consider this approach:

RAG · knowledge base · chat with documents · enterprise AI · grounded responses

💡Interview Tips

  1. Know trade-offs: chunk size, overlap, embedding model
  2. Mention hybrid search: vector + keyword
  3. Discuss evaluation: retrieval accuracy, answer quality
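The chunk-size/overlap trade-off is easy to demonstrate concretely. Below is a minimal sliding-window chunker; the size and overlap values are illustrative defaults, not recommendations, and real systems often split on sentence or token boundaries rather than raw characters.

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into fixed-size chunks with a sliding window.

    Each chunk shares `overlap` characters with the previous one, so
    content that straddles a chunk boundary still appears whole in at
    least one chunk - the mitigation for "too small = lost context".
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]

# 500 characters of varied text -> 4 chunks of at most 200 chars,
# each overlapping its neighbor by 50 chars.
sample = "".join(str(i % 10) for i in range(500))
chunks = chunk_text(sample)
print(len(chunks))  # prints 4
```

In an interview, the point to make is that `size` bounds how much context each retrieved chunk carries, while `overlap` buys boundary robustness at the cost of storing (and embedding) redundant text.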

⚠️Common Mistakes

  • Chunks too large (bad retrieval) or too small (lost context)
  • Not handling "no relevant context" cases
  • Ignoring metadata for filtering
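The "no relevant context" mistake has a simple structural fix: apply a similarity threshold before generation, so the caller can answer "I don't know" instead of letting the LLM hallucinate from weak matches. A sketch, assuming a pre-built index of `(text, vector)` pairs and a hand-picked threshold (real thresholds should be tuned on evaluation data):

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_with_guard(query_vec, index, threshold=0.3, k=3):
    # Score every entry, drop anything below the threshold, return top-k.
    # An empty result signals "no relevant context" to the caller.
    scored = [(cosine(query_vec, vec), text) for text, vec in index]
    hits = sorted((s, t) for s, t in scored if s >= threshold)
    hits.reverse()
    return [t for _, t in hits[:k]]

# Illustrative 3-dimensional "embeddings" for two documents.
index = [
    ("refund policy doc", [1.0, 0.0, 0.0]),
    ("holiday schedule doc", [0.0, 1.0, 0.0]),
]
print(retrieve_with_guard([0.9, 0.1, 0.0], index))  # prints ['refund policy doc']
print(retrieve_with_guard([0.0, 0.0, 1.0], index))  # prints []
```

The same pattern extends to the metadata mistake: filter `index` by metadata (source, date, tenant) before scoring, so irrelevant partitions never reach the similarity stage.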