Just Five Mins!

Episode 139 - RAG is Expensive but is it really

Well, in unicorn dollars, it is REALLY expensive :)

🧠 What RAG Actually Does

RAG enhances LLMs by retrieving relevant external information (e.g. from documents or databases) at query time, then feeding that into the prompt. This allows the LLM to answer with up-to-date or domain-specific knowledge without retraining.
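To make that flow concrete, here is a minimal sketch of the retrieve-then-prompt step. The `build_rag_prompt` helper and the example chunks are hypothetical placeholders, and the retriever and the actual LLM call are left out:

```python
# Minimal sketch of the "augment the prompt" step in RAG.
# The chunks below stand in for whatever a retriever returns at query time.

def build_rag_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Concatenate retrieved context with the user's question."""
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# Example: two chunks pulled from a vector store (see the next section).
chunks = [
    "Invoices are processed within 5 business days.",
    "Refund requests must include the original order number.",
]
prompt = build_rag_prompt("How long does invoice processing take?", chunks)
# `prompt` is then sent to the LLM of your choice; no retraining is involved.
print(prompt)
```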

💸 Is RAG Expensive?

Yes, it can be — especially if:

  • You repeatedly reprocess large documents for every query.

  • You use high token counts to include raw content in prompts.

  • You rely on real-time parsing of files (e.g. PDFs or Excel) without preprocessing.

This is where vector storage and embedding optimization come in.

📦 Role of Vector Storage

Instead of reloading and reprocessing documents every time:

  1. Documents are chunked into smaller segments.

  2. Each chunk is converted into a vector embedding.

  3. These embeddings are stored in a vector database (e.g. FAISS, Pinecone, Weaviate).

  4. At query time, the user’s question is embedded and matched against stored vectors to retrieve relevant chunks.

This avoids reprocessing the original files and drastically reduces cost and latency.
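A minimal sketch of that pipeline, assuming the sentence-transformers and faiss-cpu packages and an illustrative MiniLM embedding model (the episode doesn't prescribe a specific stack):

```python
# Sketch: chunk once, embed once, store once, then reuse the index per query.
# Assumes `pip install faiss-cpu sentence-transformers`; model choice is illustrative.
import faiss
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

# 1. Chunk documents (here, trivially, one chunk per string).
chunks = [
    "Invoices are processed within 5 business days.",
    "Refund requests must include the original order number.",
    "Support is available Monday to Friday, 9am-5pm.",
]

# 2. Embed each chunk once, up front.
embeddings = model.encode(chunks, normalize_embeddings=True).astype("float32")

# 3. Store embeddings in a vector index (inner product on normalized vectors).
index = faiss.IndexFlatIP(embeddings.shape[1])
index.add(embeddings)

# 4. At query time, embed only the question and retrieve the nearest chunks.
query = model.encode(["How long does invoice processing take?"],
                     normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, k=2)
for score, i in zip(scores[0], ids[0]):
    print(f"{score:.3f}  {chunks[i]}")
```

Because the index persists, only the question is embedded per query, which is where the cost and latency savings come from.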

⚙️ Efficiency Strategies

Here’s how to make RAG more efficient:

| Strategy | Description | Benefit |
| --- | --- | --- |
| Vector Storage | Store precomputed embeddings | Avoids repeated parsing and embedding |
| ANN Indexing | Use Approximate Nearest Neighbor search | Fast retrieval from large datasets |
| Quantization | Compress embeddings (e.g. float8, int8) | Reduces memory footprint with minimal accuracy loss |
| Dimensionality Reduction | Use PCA or UMAP to reduce vector size | Speeds up search and lowers storage cost |
| Contextual Compression | Filter retrieved chunks before sending to LLM | Reduces token usage and cost |
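As one illustration of the ANN Indexing and Quantization rows, the sketch below builds a FAISS IVF-PQ index, which clusters vectors for approximate search and compresses each one to a few dozen bytes. The dimensions and parameters are placeholders, not recommendations:

```python
# Sketch: approximate search over product-quantized vectors with FAISS.
# Parameters (d, nlist, m, nbits) are illustrative only.
import faiss
import numpy as np

d = 384                                            # embedding dimension
xb = np.random.rand(10000, d).astype("float32")    # stand-in for real chunk embeddings
xq = np.random.rand(1, d).astype("float32")        # stand-in for a query embedding

nlist = 100          # number of coarse clusters (ANN indexing)
m, nbits = 48, 8     # 48 sub-vectors x 8 bits each ~ 48 bytes/vector vs 1536 raw (quantization)

quantizer = faiss.IndexFlatL2(d)
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)
index.train(xb)      # learn the clustering and the PQ codebooks
index.add(xb)

index.nprobe = 8     # clusters scanned per query: a speed/recall trade-off
distances, ids = index.search(xq, k=5)
print(ids[0])        # indices of the approximate nearest chunks
```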
