Organizations have vast amounts of knowledge locked in documents, wikis, and databases. Traditional search fails because it can't understand context or intent.
LLMs hallucinate when they don't have the right information. Vector-only search misses exact technical terms. Keyword search doesn't understand semantics.
InfoLens solves this with Hybrid Search + Agentic RAG + Multi-LLM flexibility.
The best of both worlds: keyword precision meets semantic understanding
- Keyword search: finds exact matches (PostgreSQL Full-Text, 50-100 ms)
- Semantic search: understands meaning (pgvector similarity, 150-250 ms)
- Reranking: refines ordering (cross-encoder, 100-200 ms)

Choose the right balance of speed, cost, and quality for each query.
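The keyword and semantic rankings have to be merged into one list. A minimal sketch of one common way to do this, Reciprocal Rank Fusion; the source does not name its fusion method, so RRF and all document ids here are illustrative assumptions:

```python
# Hypothetical sketch: fuse keyword and vector rankings with
# Reciprocal Rank Fusion (RRF). The fusion method and doc ids are
# assumptions for illustration, not the product's actual algorithm.

def rrf_fuse(keyword_ranked, vector_ranked, k=60):
    """Combine two ranked lists of doc ids into one hybrid ranking."""
    scores = {}
    for ranked in (keyword_ranked, vector_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Keyword search finds the exact term; vector search finds a paraphrase.
keyword_hits = ["doc_jwt_spec", "doc_api_ref"]
vector_hits = ["doc_auth_guide", "doc_jwt_spec"]

hybrid = rrf_fuse(keyword_hits, vector_hits)
# doc_jwt_spec appears in both lists, so it ranks first.
```

A document that both engines agree on outranks a document only one of them found, which is exactly the "best of both worlds" behavior described above.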
The user selects a collection; the system searches once, retrieves context, and sends it to the LLM. Predictable, fast, low token usage.
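The single-pass flow can be sketched in a few lines; `search` and `llm` below are hypothetical stand-ins for the real retrieval and model calls:

```python
# Minimal sketch of the single-pass "Simple RAG" flow described above.
# `search` and `llm` are hypothetical stand-ins, not the product's API.

def simple_rag(question, collection, search, llm, top_k=5):
    """One search, one LLM call: predictable latency and token usage."""
    chunks = search(collection, question, top_k)        # retrieve once
    context = "\n\n".join(chunks)
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm(prompt)                                  # single generation

answer = simple_rag(
    "How do we sign tokens?",
    "backend-docs",
    search=lambda c, q, k: ["JWT tokens are signed with RS256."],
    llm=lambda p: "Tokens are signed with RS256." if "RS256" in p else "Unknown.",
)
```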
LLM has access to tools (list collections, search). It autonomously decides which collections to search, can search multiple sources, and synthesizes a comprehensive answer.
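The fan-out behavior can be sketched with two in-memory tools; the tool names mirror `list_collections` and `search_documents` used later in this document, but their bodies and the sample data are assumptions:

```python
# Hypothetical sketch of the tool-driven mode: the model lists
# collections, then searches each one it finds. The tool bodies and
# sample documents are in-memory stand-ins for illustration only.

KB = {
    "project-a": ["Project A uses JWT auth."],
    "project-b": ["Project B relies on OAuth sessions."],
}

def list_collections():
    return list(KB)

def search_documents(collection, query):
    return [d for d in KB[collection] if query.lower() in d.lower()]

# The agent autonomously fans out over every collection it discovers,
# then synthesizes the per-collection results into one answer.
results = {
    name: search_documents(name, "auth") for name in list_collections()
}
```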
LangGraph workflow. The agent searches, grades results for relevance, and if they score poorly, rewrites the query and searches again. It iterates up to 3 times until the quality threshold is met.
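The control flow of that loop, stripped of the LangGraph machinery, looks like this; `search`, `grade`, and `rewrite` are hypothetical stand-ins for the workflow's nodes, and only the iterate-up-to-3-times logic comes from the text:

```python
# Sketch of the grade-then-rewrite loop (up to 3 iterations, per the
# text). search/grade/rewrite are hypothetical stand-ins for the
# LangGraph nodes; only the control flow is taken from the document.

def agentic_rag(question, search, grade, rewrite, max_iters=3):
    query = question
    for attempt in range(max_iters):
        results = search(query)
        if grade(results):           # relevance check passed
            return results, attempt + 1
        query = rewrite(query)       # refine the query and try again
    return results, max_iters        # give up after max_iters searches

results, tries = agentic_rag(
    "how does auth work",
    search=lambda q: ["JWT docs"] if "JWT" in q else ["vague blog post"],
    grade=lambda r: "JWT docs" in r,
    rewrite=lambda q: "JWT authentication implementation",
)
# First pass is vague; the rewritten query succeeds on attempt 2.
```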
Question: "How does our authentication work across projects?"
Simple RAG: Requires you to select "Backend Docs" collection. Fast but limited to one source.
MCP Tools: Automatically finds "Project A", "Project B", "Project C" collections, searches all three, compares results.
Agentic RAG: Same as MCP Tools, but if the results are vague, it rewrites the query to "JWT authentication implementation comparison" and searches again for better results.
Use any LLM provider. Switch instantly. No vendor lock-in.
Provider configurations stored in PostgreSQL. Admins can add, test, and activate providers through the UI.
Each provider specifies its chat model and embedding model. The system automatically uses the active provider.
llm_providers table → get_active_provider() → chat_model + embeddings
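A sketch of that lookup, using SQLite as a stand-in for PostgreSQL. The table name and `get_active_provider()` come from the text; the column names beyond chat/embedding model, and the sample rows, are assumptions:

```python
# Illustrative sketch of the active-provider lookup. SQLite stands in
# for PostgreSQL; the is_active column and sample rows are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE llm_providers (
        name TEXT, chat_model TEXT, embedding_model TEXT,
        is_active INTEGER DEFAULT 0
    )
""")
conn.executemany(
    "INSERT INTO llm_providers VALUES (?, ?, ?, ?)",
    [
        ("openai", "gpt-4o", "text-embedding-3-small", 0),
        ("anthropic", "claude-sonnet", "voyage-3", 1),  # the active one
    ],
)

def get_active_provider(conn):
    """Return the single provider row flagged as active."""
    row = conn.execute(
        "SELECT name, chat_model, embedding_model "
        "FROM llm_providers WHERE is_active = 1"
    ).fetchone()
    return dict(zip(("name", "chat_model", "embedding_model"), row))

provider = get_active_provider(conn)
```

Because the lookup happens at query time, flipping the active flag in the UI switches every subsequent chat and embedding call to the new provider, with no redeploy.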
Your knowledge base becomes a universal tool for any AI
Anthropic's open protocol for connecting AI models to external tools and data. InfoLens exposes 10 tools via Server-Sent Events (SSE).
Claude Desktop, Cursor IDE, or any MCP-compatible client can search your knowledge base, create collections, and manage documents.
Scenario: Using Claude Desktop
You: "Search my company docs for authentication best practices"
Claude: *calls list_collections()* → sees "Backend Docs", "Security Policies"
Claude: *calls search_documents("backend-docs", "authentication")* → gets 5 results
Claude: "Based on your backend documentation, you use JWT with RS256 signing..."
Your knowledge base is now accessible to any MCP-compatible AI tool.
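The scenario above boils down to a name-to-handler mapping on the server side. A minimal sketch, assuming an in-memory registry; the real MCP server speaks JSON-RPC over SSE, and the tool bodies here are stand-ins:

```python
# Hypothetical sketch of MCP-style tool registration and dispatch.
# The real protocol runs over SSE/JSON-RPC; this in-memory registry
# only illustrates the name -> handler mapping. Tool bodies are fakes.

TOOLS = {}

def tool(fn):
    """Register a function as a callable tool under its own name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def list_collections():
    return ["backend-docs", "security-policies"]

@tool
def search_documents(collection: str, query: str):
    return [f"result for {query!r} in {collection}"]

def dispatch(name, **kwargs):
    """Route a client's tool call to the matching handler."""
    return TOOLS[name](**kwargs)

collections = dispatch("list_collections")
hits = dispatch("search_documents",
                collection="backend-docs", query="authentication")
```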
Open-source stack, production-ready
Your data stays on your infrastructure. Period.
Deploy on your own servers. On-premise, private cloud, or VPS. You control the infrastructure.
No proprietary vector database. Standard PostgreSQL with pgvector extension. Export anytime.
JWT tokens, bcrypt password hashing, role-based access control (user/admin).
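The user/admin split can be sketched as a role check in front of privileged operations. This is a hypothetical illustration; the decorator name and the guarded function are assumptions, not the product's code:

```python
# Minimal sketch of the role-based access model (user/admin) named
# above. require_role and add_provider are hypothetical examples.
from functools import wraps

def require_role(role):
    """Reject callers whose role doesn't match the requirement."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(current_user, *args, **kwargs):
            if current_user.get("role") != role:
                raise PermissionError(f"{role} role required")
            return fn(current_user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def add_provider(current_user, name):
    return f"provider {name} added"

result = add_provider({"role": "admin"}, "anthropic")   # allowed
try:
    add_provider({"role": "user"}, "anthropic")         # rejected
    denied = False
except PermissionError:
    denied = True
```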