I have already shipped the SMS, chat, and voice infrastructure for Inboxx; what's missing is the intelligence layer. I need an AI engineer who can bolt a brain onto the existing REST and WebSocket endpoints without touching the wider stack.

Here's what the role looks like:

• Natural language processing for message understanding, voice-to-text transcription, and text-to-speech synthesis (think Whisper or similar for the audio path, transformers for text).
• Automated response generation paired with a suggestion mode, so agents can either let the system reply on their behalf or accept/edit a draft.
• Training starts with our own archive of customer conversations (texts and calls). You'll handle cleaning, labeling where needed, fine-tuning, and continuous-improvement loops.

Key milestones and acceptance criteria:

1. Proof-of-concept model that classifies intent and generates draft replies from a held-out slice of our data at >90% accuracy.
2. Voice pipeline that transcribes live calls with under one second of latency and reads generated replies back in a natural voice.
3. API endpoints or a lightweight microservice exposing /suggest and /auto-reply for chat, plus /transcribe and /synthesize for voice, returning JSON we can wire straight into the existing backend.
4. Handoff doc covering model architecture, the retraining workflow, and how to swap providers (OpenAI, Cohere, local models, etc.) so we're not locked in.

No front-end, no DevOps: just the intelligence layer. If you live and breathe Python and PyTorch/TensorFlow and have built conversational AI that actually ships, I'd love to see it in action.
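To make the integration contract concrete, here is a rough sketch of the JSON shapes I'd expect /suggest and /auto-reply to trade. All field names here are illustrative placeholders, not a fixed schema, and the model call is stubbed out:

```python
import json

def handle_suggest(request_body: str) -> str:
    """POST /suggest: return ranked draft replies for an agent to accept or edit."""
    # Expected input shape (illustrative):
    # {"conversation_id": "...", "messages": [{"text": "..."}, ...]}
    req = json.loads(request_body)
    last_message = req["messages"][-1]["text"]
    # A real implementation would call the fine-tuned model here;
    # this stub just echoes a canned draft so the contract is visible.
    return json.dumps({
        "conversation_id": req["conversation_id"],
        "intent": "unknown",  # classifier output, e.g. "billing_question"
        "drafts": [
            {"text": f"Thanks for reaching out about: {last_message}",
             "confidence": 0.0},
        ],
    })

def handle_auto_reply(request_body: str) -> str:
    """POST /auto-reply: like /suggest, but commits the top draft automatically."""
    suggestion = json.loads(handle_suggest(request_body))
    return json.dumps({
        "conversation_id": suggestion["conversation_id"],
        "sent": True,
        "reply": suggestion["drafts"][0]["text"],
    })
```

The key design point is that both endpoints share one suggestion path, so auto-reply is just suggest plus a send step; that keeps the agent-assist and fully automated modes from drifting apart.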