Stateful AI Agent for Knowledge Extraction
Updated Dec 21, 2025 · Python
CrawlLama 🦙 is a local AI agent that answers questions via Ollama and integrates web- and RAG-based research.
Advanced multi-agent Medical AI Assistant powered by LangGraph that delivers empathetic, doctor-like responses using a hybrid pipeline of LLM reasoning, RAG from medical PDFs, and intelligent fallback tools. Features long-term memory with SQLite, dynamic tool routing, and state-based reasoning for reliable, context-aware consultations.
Repository to host example code for the ARK Agent.
RecallNest: MCP-native, local-first memory workbench for AI conversations
PromptWeaver: RAG Edition helps design effective prompts for Traditional, Hybrid, and Agentic RAG systems. It offers templates, system prompts, and best practices to improve accuracy, context use, and LLM reasoning.
A local Retrieval-Augmented Generation (RAG) system for answering questions about TouchDesigner using wiki pages, tutorials, and other structured or semi-structured content. Powered by FAISS and local LLMs via Ollama.
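The entry above centers on the retrieval step of RAG: rank indexed documents by similarity to a query and pass the top hits to the LLM. A minimal, self-contained sketch of that step follows; the actual project uses FAISS vector embeddings and Ollama, so the bag-of-words cosine similarity here is a stand-in, and all document strings are illustrative.

```python
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents most similar to the query."""
    qv = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(qv, Counter(d.lower().split())), reverse=True)
    return ranked[:k]

docs = [
    "TouchDesigner wiki page about the Movie File In TOP",
    "Tutorial on audio-reactive visuals in TouchDesigner",
    "Unrelated note about grocery shopping",
]
print(retrieve("TouchDesigner TOP wiki", docs, k=1))
```

In a real system the term-frequency vectors are replaced by dense embeddings and the linear scan by an approximate nearest-neighbor index such as FAISS, but the ranking contract is the same.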
Local-first, evidence-backed memory sidecar for AI agents.
Code that gives any AI persistent memory with unlimited context. The example lets an AI read the Uniform Commercial Code of Michigan, a document of 220,000 tokens.
Sub-linear knowledge retrieval via quantum-inspired hyperdimensional folded space (0.88ms @ 100% accuracy)
Notebook examples for using OpenAI's Assistants API with the file search (knowledge retrieval) functionality.
OllamaMulti-RAG 🚀 is a multimodal AI chat app combining Whisper AI for audio, LLaVA for images, and Chroma DB for PDFs, enhanced with Ollama and OpenAI API. 📄 Built for AI enthusiasts, it welcomes contributions—features, bug fixes, or optimizations—to advance practical multimodal AI research and development collaboratively.
⚡️ Local RAG API using FastAPI + LangChain + Ollama | Upload PDFs, DOCX, CSVs, XLSX and ask questions using your own documents — fully offline!
AI-powered support copilot for ticket classification and query resolution. RAG, Chroma DB, Streamlit. Atlan AI Engineer Internship.
Local Retrieval-Augmented Generation (RAG) system built with FastAPI, integrating vector search, Elasticsearch, and optional web search to power LLM-based intelligent question answering using models like Mistral or GPT-4.
Self-healing AI research engine with grounded RAG, FinOps cost tracking, and resilient API fallback powered by Gemini.
Scientific Agent: A Retrieval-Augmented Generation (RAG) System for Domain-Aware Literature Review Automation
Production RAG pipeline — grounded retrieval, source-cited answers, Precision@k + MRR eval. CLI + Flask REST API. Gemini · ChromaDB · Python 3.11+
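The pipeline above evaluates retrieval with Precision@k and MRR. Both metrics are short enough to sketch directly; this version assumes results arrive as ranked lists of document ids with a set of relevant ids per query, which is an illustrative interface rather than that project's actual API.

```python
def precision_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def mrr(all_ranked: list[list[str]], all_relevant: list[set[str]]) -> float:
    """Mean reciprocal rank of the first relevant hit across queries."""
    total = 0.0
    for ranked, relevant in zip(all_ranked, all_relevant):
        for i, d in enumerate(ranked, start=1):
            if d in relevant:
                total += 1.0 / i
                break
    return total / len(all_ranked)

print(precision_at_k(["a", "b", "c"], {"a", "c"}, k=2))  # 0.5
print(mrr([["x", "a"], ["b"]], [{"a"}, {"b"}]))          # (0.5 + 1.0) / 2 = 0.75
```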
🚀 Revolutionize your data interaction with a cutting-edge chatbot built on Retrieval-Augmented Generation (RAG) and OpenAI’s GPT-4. Upload documents, create custom knowledge bases, and get precise, contextual answers. Ideal for research, business operations, customer support, and more!
An end-to-end multi-source knowledge retrieval system using LangChain, FAISS, and OpenAI embeddings. This Retrieval-Augmented Generation (RAG) pipeline intelligently searches across Wikipedia, arXiv, and custom websites, optimizing source selection and delivering precise, real-time results based on query relevance.
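The multi-source pipeline above selects among Wikipedia, arXiv, and custom websites per query. A toy keyword-based router sketches the idea; the real project uses LangChain's tool selection over embeddings, so these rules and source names are illustrative assumptions, not its actual logic.

```python
def route_query(query: str) -> str:
    """Pick a retrieval source for a query via simple keyword heuristics."""
    q = query.lower()
    if any(w in q for w in ("paper", "preprint", "arxiv")):
        return "arxiv"          # scholarly literature queries
    if any(w in q for w in ("site:", "docs", "website")):
        return "custom_web"     # queries aimed at specific sites
    return "wikipedia"          # general-knowledge fallback

print(route_query("latest arxiv preprint on RAG"))  # arxiv
print(route_query("Who was Ada Lovelace?"))         # wikipedia
```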