This repository contains Qdrant-based RAG evaluation reference material, notebooks, and related content.
This repository contains various implementations of Retrieval Augmented Generation (RAG) using Qdrant, an open-source vector search engine. The project showcases different approaches to building RAG with Qdrant for efficient and effective information retrieval and generation. Additionally, the repository includes comprehensive evaluation tools to assess the performance of the implemented RAG applications.
- workshop-rag-eval-qdrant-arize: RAG implementation showcasing Naive RAG (dense vectors) vs. Hybrid RAG (sparse and dense vectors) built with Qdrant and LlamaIndex and evaluated using Arize Phoenix.
  YouTube: https://www.youtube.com/watch?v=m_J0nFmnrPI
- workshop-rag-eval-qdrant-quotient: RAG implementation showcasing Naive RAG built with Qdrant and LangChain, incrementally evaluated and improved through rapid experimentation with chunk size, embedding model, and LLM using Quotient AI.
  YouTube: https://www.youtube.com/watch?v=3MEMPZR1aZA
  Article: https://qdrant.tech/articles/rapid-rag-optimization-with-qdrant-and-quotient/
- workshop-rag-eval-qdrant-quotient-advance-hybrid-with-rerankers: RAG implementation showcasing Naive RAG and Hybrid RAG built with Qdrant and LangChain, plus a Hybrid RAG built with LlamaIndex, incrementally evaluated and improved through rapid experimentation with rerankers from MixedBread, Jina ColBERT, and Cohere using Quotient AI (see the reranking sketch below the list).
  YouTube: https://www.youtube.com/watch?v=DId2KP8Ykz4
  Presented at the AI Engineer World's Fair: https://www.ai.engineer/worldsfair/2024/schedule/navigating-rag-optimization-with-an-evaluation-driven-compass
- workshop-rag-eval-qdrant-ragas: RAG implementation showcasing Naive RAG built with Qdrant and LangChain, with the effects of retrieval window size evaluated through RAGAS.
  Article: https://superlinked.com/vectorhub/articles/retrieval-augmented-generation-eval-qdrant-ragas
- workshop-rag-eval-qdrant-ragas-haystack: RAG implementation showcasing Naive RAG built with Qdrant and Haystack, improved through the MixedBread AI embedding and reranker models plus retrieval window size, and evaluated through RAGAS.
  YouTube: https://www.youtube.com/watch?v=6NTZqpc4V-k
- workshop-rag-eval-qdrant-ragas-DSPy: RAG implementation showcasing Naive RAG built with Qdrant, LangChain, and DSPy, with the effects of retrieval window size evaluated through RAGAS.
- agentic_rag_with_unify/notebook: RAG implementation showcasing Naive RAG built with Qdrant and improved through agent routing with Unify.
- workshop-rag-eval-qdrant-deepeval/notebook: RAG implementation showcasing Naive RAG built with Qdrant and evaluated through DeepEval.
- tracing rag with Langtrace: RAG implementation showcasing Naive RAG built with Qdrant as the vector database and Langtrace for tracing operations.
- Advance RAG workshop x Oxford: covers the topics below:
  - Vector types from the angle of similarity scores
  - Naive RAG with LangChain and RAGAS - building and evaluating a Naive RAG using Qdrant and RAGAS.
  - Self-Query RAG - building a self-query RAG on a Winemag dataset.
  - RAG with DSPy and RAGAS - RAG with DSPy using both Chain-of-Thought and ReAct methods.
  - Advanced Hybrid Search and RAG notebook with evaluation - covers Advanced Hybrid Search using Dense + BM25, Dense + SPLADE, and Dense + BM25 + ColBERT with RRF, and evaluates the approaches (a minimal dense + sparse + RRF sketch follows this list).
- synthetic_qna/notebook: shows synthetic evaluation question generation; alternatively, check out https://www.fiddlecube.ai/
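Several of the notebooks above contrast plain dense retrieval with hybrid retrieval, where dense and sparse candidates are fused with Reciprocal Rank Fusion (RRF). The sketch below is a minimal, illustrative version of that pattern using the qdrant-client Query API (recent versions, v1.10+) with toy vectors; it is not taken from any specific notebook, and in the workshops the dense and sparse vectors come from real embedding models (such as a sentence-transformer plus BM25 or SPLADE).

```python
# Minimal sketch: dense-only vs. hybrid (dense + sparse, RRF) retrieval with
# qdrant-client. Toy vectors stand in for real embedding / BM25 / SPLADE output.
from qdrant_client import QdrantClient, models

client = QdrantClient(":memory:")  # in-memory mode, no Qdrant server required

client.create_collection(
    collection_name="docs",
    vectors_config={"dense": models.VectorParams(size=4, distance=models.Distance.COSINE)},
    sparse_vectors_config={"sparse": models.SparseVectorParams()},
)

client.upsert(
    collection_name="docs",
    points=[
        models.PointStruct(
            id=1,
            vector={
                "dense": [0.1, 0.9, 0.1, 0.0],  # would come from an embedding model
                "sparse": models.SparseVector(indices=[3, 17], values=[0.8, 0.4]),
            },
            payload={"text": "Qdrant supports hybrid search with dense and sparse vectors."},
        ),
        models.PointStruct(
            id=2,
            vector={
                "dense": [0.9, 0.1, 0.0, 0.1],
                "sparse": models.SparseVector(indices=[5], values=[0.9]),
            },
            payload={"text": "RAG pipelines retrieve context before generation."},
        ),
    ],
)

dense_query = [0.1, 0.8, 0.2, 0.0]  # toy query embedding

# Naive retrieval: dense vectors only.
dense_hits = client.query_points(
    collection_name="docs", query=dense_query, using="dense", limit=2, with_payload=True
)

# Hybrid retrieval: prefetch dense and sparse candidates, then fuse with RRF.
hybrid_hits = client.query_points(
    collection_name="docs",
    prefetch=[
        models.Prefetch(query=dense_query, using="dense", limit=10),
        models.Prefetch(
            query=models.SparseVector(indices=[3, 17], values=[0.7, 0.2]),
            using="sparse",
            limit=10,
        ),
    ],
    query=models.FusionQuery(fusion=models.Fusion.RRF),
    limit=2,
    with_payload=True,
)

print([p.payload["text"] for p in dense_hits.points])
print([p.payload["text"] for p in hybrid_hits.points])
```

Against a real deployment, only the client construction changes (point it at your Qdrant URL); the collection setup and query calls stay the same.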
Each example is integrated with Qdrant to leverage its powerful vector search capabilities. Detailed instructions and code examples for each integration are provided in the respective directories.
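The reranker-focused workshop above adds a second-stage reranker (MixedBread, Jina ColBERT, or Cohere) between retrieval and generation. As a notebook-independent illustration of that stage, the sketch below re-scores retrieved passages with an open cross-encoder from sentence-transformers; the model name is only an example and is not the reranker used in the workshops.

```python
# Illustrative second-stage reranking: re-score retrieved passages with a
# cross-encoder and keep the best ones. The workshops use MixedBread, Jina
# ColBERT, and Cohere rerankers through their own integrations instead.
from sentence_transformers import CrossEncoder


def rerank(query: str, passages: list[str], top_k: int = 3) -> list[str]:
    # A cross-encoder scores each (query, passage) pair jointly, which is
    # slower than vector search but usually more precise.
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    scores = model.predict([(query, passage) for passage in passages])
    ranked = sorted(zip(passages, scores), key=lambda pair: pair[1], reverse=True)
    return [passage for passage, _ in ranked[:top_k]]


retrieved = [
    "Qdrant is an open-source vector search engine.",
    "Hybrid search combines dense and sparse vectors.",
    "Payload filters narrow results by metadata.",
]
print(rerank("What is Qdrant?", retrieved, top_k=2))
```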
We provide a suite of RAG evaluation tools to assess the performance of the implemented RAG pipelines. These tools measure aspects ranging from retrieval quality to the faithfulness and relevance of generated answers, ensuring a thorough and robust evaluation process.
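As a rough illustration of how the RAGAS-based notebooks score a pipeline, the sketch below evaluates a single question/answer/contexts record with a few standard RAGAS metrics. It assumes a RAGAS 0.1-style API and a judge LLM configured through OPENAI_API_KEY; the exact dataset construction and metric choices live in the individual notebooks.

```python
# Sketch of a RAGAS evaluation run (RAGAS 0.1-style API assumed).
# Requires a judge LLM; by default RAGAS uses OpenAI via OPENAI_API_KEY.
from datasets import Dataset
from ragas import evaluate
from ragas.metrics import answer_relevancy, context_precision, faithfulness

records = {
    "question": ["What is Qdrant?"],
    "answer": ["Qdrant is an open-source vector search engine."],
    "contexts": [["Qdrant is an open-source vector database and search engine."]],
    "ground_truth": ["Qdrant is an open-source vector search engine."],
}

result = evaluate(
    Dataset.from_dict(records),
    metrics=[faithfulness, answer_relevancy, context_precision],
)
print(result)  # per-metric scores keyed by metric name
```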
The RAG pipelines are built on a source dataset containing Qdrant's documentation and evaluated against an evaluation dataset.
Follow the instructions in the respective directories to run the RAG implementations and perform evaluations using the provided tools.
We welcome contributions from the community! If you have improvements or a new RAG evaluation tool cookbook to add, please submit a pull request or open an issue.
We would like to thank the contributors and the open-source community for their support and collaboration.