LLM-Based Internal Automation Chatbot

Client: AI | Published: 30.03.2026
Budget: $11

I need an experienced AI/ML consultant to architect and build a Retrieval-Augmented Generation (RAG) chatbot that streamlines internal workflows. The main goal is task automation: the bot should pull information from our proprietary data sources, reason over it with a large language model, and execute or trigger the relevant internal processes without human hand-holding.

Core requirements
• Retrieval + LLM pipeline: select, fine-tune, or prompt-engineer the model, set up vector storage, and design the retrieval logic so answers always reflect our latest internal knowledge.
• Secure integration with our internal systems (REST and a small GraphQL surface). Authentication must respect our existing SSO.
• Clear separation between the data ingestion, model inference, and orchestration layers so we can maintain and scale each piece independently.
• Deployment scripts (Docker/Kubernetes preferred) and concise hand-off documentation.

Acceptance criteria
– The bot reliably completes the agreed task scenarios end-to-end in our staging environment.
– Latency under two seconds for typical queries.
– No hallucinations on our benchmark set; fallback escalation when confidence drops.
– All code, configs, and setup instructions run on a fresh machine.

If you have recent hands-on experience with RAG, vector DBs such as Pinecone/FAISS, and enterprise integrations, I’m ready to review your approach and timeline.
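To make the expected shape of the retrieval pipeline and the confidence-based fallback concrete, here is a minimal sketch. Everything in it is a hypothetical stand-in, not the required stack: the bag-of-words `embed` substitutes for a real embedding model, the in-memory `VectorStore` for a vector DB such as Pinecone or FAISS, and `CONFIDENCE_THRESHOLD` for a threshold tuned on the benchmark set.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real pipeline would call an
    # embedding model here and store dense vectors instead.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse word-count vectors.
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

class VectorStore:
    # Minimal in-memory stand-in for a vector DB (Pinecone/FAISS).
    def __init__(self) -> None:
        self.docs: list[tuple[str, Counter]] = []

    def add(self, text: str) -> None:
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list[tuple[str, float]]:
        # Rank stored documents by similarity to the query.
        qv = embed(query)
        scored = [(text, cosine(qv, vec)) for text, vec in self.docs]
        return sorted(scored, key=lambda p: p[1], reverse=True)[:k]

CONFIDENCE_THRESHOLD = 0.2  # hypothetical cut-off for fallback escalation

def answer(store: VectorStore, query: str) -> str:
    # Retrieve the best match; escalate when the match is weak rather
    # than letting the LLM answer without grounding.
    text, score = store.retrieve(query, k=1)[0]
    if score < CONFIDENCE_THRESHOLD:
        return "ESCALATE: low retrieval confidence, routing to a human."
    # In the real bot this grounded prompt would be sent to the LLM.
    return f"Answer using only this context:\n{text}\nQuestion: {query}"

store = VectorStore()
store.add("Expense reports are approved by the finance team within three days.")
store.add("VPN access requests go through the IT service desk.")
print(answer(store, "How do I get VPN access?"))    # grounded prompt
print(answer(store, "quarterly revenue forecast"))  # escalates, no match
```

The separation mirrors the layering asked for above: ingestion (`add`), inference (`answer`), and retrieval (`retrieve`) can each be swapped out independently.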