I am building an AI-driven system that continuously gathers data, cleans it on the fly, runs live machine-learning models, and surfaces clear visual insights and alerts through an intuitive interface. The goal is to combine real-time monitoring with intelligent prediction so that end users can spot issues or opportunities the moment they arise and understand why they matter.

What I need from you spans the full development cycle: solid, well-commented code for data ingestion, preprocessing, model training, evaluation, and automated deployment; guidance on architecture and design choices as we iterate; and a polished, presentation-ready knowledge base that explains every component for future hand-off or scaling.

A responsive web dashboard (built, for instance, with React or another modern framework) should graph live metrics, highlight anomalies, and push configurable alerts. On the back end, I expect a robust Python stack—think Pandas, scikit-learn, TensorFlow or PyTorch, plus Flask/FastAPI—to keep models retrainable and the API endpoints fast. Containerisation with Docker and optional orchestration with Kubernetes will help us stay cloud-agnostic and ready for load.

Deliverables
• Fully functional source code with README and inline documentation
• Deployed demo (cloud or local Docker Compose) showing real-time data flow, predictions, and alerting
• User-friendly dashboard with charts, logs, and alert controls
• Technical report and slide deck covering data pipeline, model choice, metrics, and deployment strategy

I'm aiming for an accurate, efficient, and easily scalable solution, so thoughtful design decisions and clean code matter as much as raw model performance. Let's discuss timeline, data specifics, and any additional tool preferences you recommend before we kick off.
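To make the anomaly-highlighting requirement concrete, here is a minimal sketch of how the scikit-learn piece of the stack could flag outliers in a metric stream. The model choice (`IsolationForest`), the synthetic data, and all parameter values are illustrative assumptions, not decisions the brief has made.

```python
# Sketch only: IsolationForest on a synthetic metric stream.
# Model choice, contamination rate, and data are placeholder assumptions.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(loc=0.0, scale=1.0, size=(200, 1))  # baseline readings
spikes = np.array([[8.0], [-9.0]])                      # injected anomalies
stream = np.vstack([normal, spikes])                    # simulated live feed

# contamination is the expected anomaly fraction; in production this would
# be tuned against labelled incidents or alert feedback.
model = IsolationForest(contamination=0.01, random_state=0)
labels = model.fit_predict(stream)  # -1 = anomaly, 1 = normal

anomaly_idx = np.where(labels == -1)[0]
print(f"flagged {len(anomaly_idx)} points at indices {anomaly_idx.tolist()}")
```

In the real system, the same `fit_predict` (or a fitted `predict` on incoming batches) would run behind a Flask/FastAPI endpoint, with flagged indices pushed to the dashboard's alert channel.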