AI Chat Automation Dashboard

Client: AI | Published: 30.09.2025
Budget: $150

I’m looking for a developer who can deliver an end-to-end admin control dashboard that sits on our Linode Ubuntu box and takes over the heavy lifting of chat management during live streams.

Core objective

The first milestone is clear: automate chat responses. Incoming messages from the stream (Twitch, YouTube, Facebook Live, etc.) should be ingested, run through an AI layer that handles language translation, and pushed back as timely, context-aware replies. The solution must support multiple underlying models so we can switch or ensemble them without touching the front end.

Real-time overlays via OBS

When certain keywords or AI-detected events appear, the system should fire an overlay through the OBS WebSocket API. No vMix work is required for now; the architecture just needs to stay modular so we can add it later if we choose.

WebSocket automation & portal

All chat traffic, bot replies, and overlay triggers should flow through a WebSocket channel so the front end stays live without constant refreshes. A secure admin portal (any modern stack is fine: React, Vue, Svelte, etc.) must allow me to:
• view current chats and bot replies in real time
• switch models on the fly
• enable/disable translation per channel
• inspect basic analytics (message volume, response time, overlays triggered)

Server environment

• Ubuntu 22.04 LTS on Linode
• HTTPS enforced (Let’s Encrypt is fine)
• Source managed in a private Git repo
• Docker Compose preferred for easy spin-up, but I’m open to alternatives if you make a solid case

Deliverables

1. Production-ready codebase with install script or Docker Compose file
2. Front end and back end connected through secure WebSockets
3. AI module that performs language translation and returns the chosen response
4. OBS overlay trigger module with at least one sample overlay event
5. README covering setup, environment variables, and future extension points

Acceptance criteria

• I can log in, watch chat flow in, see translated bot replies, and confirm overlays trigger inside OBS.
• CPU/RAM usage stays within reasonable limits on a 4-GB Linode instance during a 30-minute test stream.
• All tests pass in CI and the server deploys cleanly from the repo.

If you’re confident in WebSocket design, AI integration, and OBS automation, I’m ready to get this moving quickly.
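For the preferred Docker Compose spin-up, a possible two-service layout might look like the fragment below. Every service name, build path, port, and environment variable here is a placeholder to be replaced by the actual project structure.

```yaml
# Illustrative layout only: names, paths, and ports are placeholders.
services:
  backend:
    build: ./backend            # WebSocket server + AI translation module
    environment:
      - OBS_WS_URL=ws://obs-host:4455   # address of the OBS WebSocket server
    ports:
      - "8000:8000"
  frontend:
    build: ./frontend           # admin portal (React/Vue/Svelte, any is fine)
    ports:
      - "443:443"               # HTTPS, e.g. Let's Encrypt certs mounted below
    volumes:
      - ./certs:/etc/ssl/certs:ro
```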
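To make the "switch models without touching the front end" requirement concrete, one minimal sketch is a provider registry: the portal only ever sends a model name, and the backend resolves it to whatever client is registered under that name. All names here (ModelRegistry, ModelFn, the stub backends) are illustrative, not a required API.

```python
from typing import Callable, Dict, Optional

# Each "model" is just a callable: (text, target_lang) -> reply.
# In production these would wrap real provider clients; here they are stubs.
ModelFn = Callable[[str, str], str]

class ModelRegistry:
    """Holds named model backends and one active selection.

    Switching models is a dictionary lookup, so the admin portal can
    flip the active model at runtime without a redeploy.
    """

    def __init__(self) -> None:
        self._models: Dict[str, ModelFn] = {}
        self._active: Optional[str] = None

    def register(self, name: str, fn: ModelFn) -> None:
        self._models[name] = fn
        if self._active is None:
            self._active = name  # first registered model becomes the default

    def switch(self, name: str) -> None:
        if name not in self._models:
            raise KeyError(f"unknown model: {name}")
        self._active = name

    def reply(self, text: str, target_lang: str) -> str:
        if self._active is None:
            raise RuntimeError("no models registered")
        return self._models[self._active](text, target_lang)

# Stub backends standing in for real translation/chat providers.
registry = ModelRegistry()
registry.register("stub-a", lambda text, lang: f"[a/{lang}] {text}")
registry.register("stub-b", lambda text, lang: f"[b/{lang}] {text}")

registry.switch("stub-b")
print(registry.reply("hello chat", "es"))  # [b/es] hello chat
```

Ensembling fits the same shape: register a callable that internally fans out to several backends and votes on a reply.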
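For the keyword-to-overlay path, a sketch under the obs-websocket v5 protocol: detect a trigger word, then build a raw Request frame (opcode 6) that flips a scene item on, which is one simple way to show an overlay. The keyword list and scene details are placeholders, and the connect/authenticate handshake and actual send are out of scope here.

```python
import json
import uuid
from typing import Optional

# Keywords that should fire an overlay; purely illustrative.
OVERLAY_KEYWORDS = {"giveaway", "raid", "donation"}

def detect_overlay_event(message: str) -> Optional[str]:
    """Return the first overlay keyword found in a chat message, if any."""
    words = {w.strip(".,!?").lower() for w in message.split()}
    hits = OVERLAY_KEYWORDS & words
    return sorted(hits)[0] if hits else None

def build_obs_request(scene: str, scene_item_id: int, enabled: bool) -> str:
    """Build a raw obs-websocket v5 request frame that toggles a scene
    item's visibility via the SetSceneItemEnabled request type."""
    frame = {
        "op": 6,  # "Request" opcode in the obs-websocket v5 protocol
        "d": {
            "requestType": "SetSceneItemEnabled",
            "requestId": str(uuid.uuid4()),
            "requestData": {
                "sceneName": scene,
                "sceneItemId": scene_item_id,
                "sceneItemEnabled": enabled,
            },
        },
    }
    return json.dumps(frame)

event = detect_overlay_event("Big RAID incoming!")
if event is not None:
    print(build_obs_request("Live", 7, True))
```

Keeping detection and frame-building as pure functions also keeps the module modular, so a vMix sender could be added later behind the same trigger logic.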
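Since chat traffic, bot replies, and overlay triggers all share one WebSocket channel, a single JSON envelope with a type field lets the front end dispatch without refreshes. This is a suggested shape only; the field names are not a fixed spec.

```python
import json
import time

# Event kinds the single WebSocket channel is expected to carry.
ALLOWED_KINDS = {"chat", "bot_reply", "overlay"}

def make_envelope(kind: str, channel: str, payload: dict) -> str:
    """Wrap any event (chat message, bot reply, overlay trigger) in one
    JSON envelope so the front end can switch on the "type" field."""
    if kind not in ALLOWED_KINDS:
        raise ValueError(f"unknown event type: {kind}")
    return json.dumps({
        "type": kind,
        "channel": channel,
        "ts": time.time(),  # server timestamp, useful for response-time analytics
        "payload": payload,
    })

print(make_envelope("bot_reply", "twitch:main",
                    {"text": "hola a todos", "model": "stub-a", "lang": "es"}))
```

The same timestamp field doubles as raw material for the requested analytics (message volume, response time), since every event passing through the channel carries one.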