We are looking for a Senior AI/ML Engineer (LLM Training and Deployment).

We are building Health Model, an on-prem "Robust Expert" LLM that generates healthcare interoperability code from natural language. The model will generate Mirth Connect (NextGen Connect) channel XML and JavaScript transformer code, and will support HL7 v2 and FHIR patterns. No PHI will be used: training data is limited to public documentation, open-source samples, and synthetic messages.

We need a strong developer to implement the end-to-end pipeline: dataset ingestion, synthetic instruction generation, LoRA fine-tuning, a validation harness, and local deployment behind an OpenAI-compatible API. Target compute is on-prem GPU hardware (DGX Spark class); optional experiments can run on Google Vertex AI credits.

[Required Skills]
- LLM fine-tuning: LoRA/QLoRA, quantization, PyTorch, Transformers (Unsloth preferred)
- Python data pipelines: JSONL, filtering, reproducibility
- Mirth Connect (NextGen Connect): channel XML and Rhino JavaScript transformers
- Healthcare interoperability: HL7 v2 (PID, PV1, OBR, OBX segments), FHIR basics
- Docker, Linux, GPU environment setup
- Bonus: vLLM, OpenAI-compatible serving, Google Vertex AI

[Start Plan]
Phase 1 is a vertical slice, and we begin by building the dataset.
- Step 1 (first week): Gather and prepare the initial dataset. Work starts immediately, with a small MVP dataset ready within one week.
- Step 2 (after the MVP dataset): Train a first model that converts natural-language requests into Mirth transformer JavaScript for HL7 parsing, with automated validation running in headless NextGen Connect.
- Step 3: If Phase 1 succeeds, expand to FHIR conversion and broader integration tasks.

[Engagement]
Monthly or milestone-based fixed price

[Start ASAP]
Long-term work if Phase 1 succeeds
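
To make the dataset and training target concrete, here is a minimal sketch of what one synthetic Step 1 training record might look like as JSONL. The "instruction"/"output" field names, the file name, and the helper function are illustrative assumptions, not a committed schema; the Rhino JavaScript in the output uses Mirth's standard E4X `msg` access and `channelMap` API.

```python
import json

# Hypothetical single training record: a natural-language request paired with
# synthetic Mirth (Rhino JS) transformer code. Field names are assumptions.
record = {
    "instruction": (
        "Write a Mirth transformer step that reads the patient's last name "
        "from PID-5.1 of an inbound HL7 v2 message and stores it in a "
        "channel map variable called 'lastName'."
    ),
    # Rhino JavaScript as it might appear in a Mirth transformer step;
    # msg is Mirth's E4X representation of the parsed HL7 v2 message.
    "output": (
        "var lastName = msg['PID']['PID.5']['PID.5.1'].toString();\n"
        "channelMap.put('lastName', lastName);"
    ),
}

def write_jsonl(records, path):
    """Write records to a JSONL file, one JSON object per line."""
    with open(path, "w", encoding="utf-8") as f:
        for r in records:
            f.write(json.dumps(r, ensure_ascii=False) + "\n")

write_jsonl([record], "mvp_dataset.jsonl")
```

Records in this shape can be filtered and deduplicated in plain Python and fed directly to common LoRA fine-tuning tooling that consumes instruction/output pairs.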