Intelligent Search

[DELIVERABLES]
PLATFORM
[IMPACT]
Knowledge Fusion
[ROLE]
Solution Architect
[SERVICES]
RAG • OBSERVABILITY • AI
[THE STORY]

This project started as a practical need: make scattered documents (manuals, LLDs, reports, and more) searchable and usable in day-to-day work.

The solution is intelliSearch: a hybrid retrieval flow (BM25 + embeddings) with clean citations, and a structure that scales from personal docs to team knowledge bases.

The result is a fast, explainable search experience that feels like an internal assistant—grounded, auditable, and easy to extend.

[SOLUTION]

A hybrid RAG platform combining structured chunking, fast retrieval, and citation-ready answers—optimized for reliability and scale.

GITHUB
[ARCHITECTURE]
Python
Docker
TypeScript
LocalAI
Vector store
Microservices
[DEVELOPMENT SPECIFICATIONS]
CHUNK SIZE
500 tokens
OVERLAP
120 tokens
CHUNK STRATEGY
Fixed-size
VECTOR STORE
Chroma (HNSW)
KEYWORD SEARCH
BM25 (OpenSearch)
RERANKING
Hybrid scoring (BM25 + cosine)
RETRIEVAL
Neighbour-aware
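
The chunking specs above (fixed-size, 500 tokens, 120-token overlap) can be sketched roughly like this. A minimal sketch: token counts are approximated by whitespace splitting; a real pipeline would use the embedding model's tokenizer.

```python
def chunk_fixed_size(text: str, size: int = 500, overlap: int = 120) -> list[str]:
    """Split text into fixed-size chunks, with `overlap` tokens shared
    between consecutive chunks so context is not lost at boundaries."""
    tokens = text.split()  # crude stand-in for a real tokenizer
    step = size - overlap
    chunks = []
    for start in range(0, len(tokens), step):
        chunks.append(" ".join(tokens[start:start + size]))
        if start + size >= len(tokens):
            break
    return chunks
```

The overlap means a sentence cut at a chunk boundary still appears whole in the neighbouring chunk.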
[USER FLOW]

User submits question → Hybrid retrieval (BM25 + vector search) → Context building → LLM generates answer → Answer returned with citations
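
The hybrid retrieval step above (BM25 + cosine, as in the reranking spec) can be sketched as a score fusion: normalise each score set per query, then blend with a weight. The `alpha` weight here is a hypothetical example value; in practice it is tuned per corpus.

```python
def normalise(scores: dict[str, float]) -> dict[str, float]:
    """Min-max normalise one retriever's scores to [0, 1]."""
    if not scores:
        return {}
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0
    return {doc: (s - lo) / span for doc, s in scores.items()}

def hybrid_score(bm25: dict[str, float],
                 cosine: dict[str, float],
                 alpha: float = 0.7) -> list[tuple[str, float]]:
    """Blend keyword and vector scores; a doc missing from one
    retriever simply scores 0 on that side."""
    b, c = normalise(bm25), normalise(cosine)
    docs = set(b) | set(c)
    fused = {d: alpha * b.get(d, 0.0) + (1 - alpha) * c.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)
```

Normalising before blending matters because raw BM25 scores are unbounded while cosine similarity sits in a fixed range.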

[DETAILS]
  • Upload / ingest documents
  • Chunking + embeddings + indexing
  • Hybrid retrieval (BM25 + vector search)
  • Rerank + build context + generate answer
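
The neighbour-aware retrieval named in the specs can be sketched as a context-building step: after reranking, each hit is expanded with its adjacent chunks so the LLM sees the surrounding text. This assumes chunk ids are positions within one document's ordered chunk list.

```python
def expand_with_neighbours(hit_ids: list[int], n_chunks: int,
                           window: int = 1) -> list[int]:
    """Return deduplicated, document-ordered chunk ids including
    `window` neighbours on each side of every retrieved hit."""
    keep = set()
    for i in hit_ids:
        for j in range(i - window, i + window + 1):
            if 0 <= j < n_chunks:  # stay inside the document
                keep.add(j)
    return sorted(keep)
```

Returning ids in document order keeps the stitched context readable and makes citations map cleanly back to source positions.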
[WORKING DEMO]

Short walkthrough showing ingestion → search → cited answers in real time.