I’m setting up a fresh backend that will house our analytics data, and I want it built around MySQL.

The work starts with solid database architecture: clean schema design, sensible indexing strategies, partitioning where it makes sense, and clearly documented relationships, so the structure stays understandable and queries stay fast as volume grows. Once the structure is in place, I need automated pipelines to move data from our various sources into the new store. Think end-to-end ETL: extract from APIs or flat files, transform for consistency and quality, then load into the MySQL tables on a defined schedule.

I’m open to whatever tools you’re comfortable with (Python scripts, Airflow, Fivetran, or a lightweight custom build), as long as the result is reliable, monitorable, and easy to extend. Security, backup routines, and a concise hand-off document are part of the brief. If you’ve architected MySQL environments for analytics workloads before and can demonstrate tight, well-tested data pipelines, I’d like to see how you’d approach this.

Deliverables (acceptance criteria):
• Optimised MySQL schema tailored for analytics data
• Automated, version-controlled ETL/ELT pipelines with retry and logging
• Basic dashboard or CLI checks showing pipeline health (see the sketches below)
• Backup and restore procedure documented
• Short technical guide so my team can maintain and extend the setup
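
To make expectations concrete, here is a rough sketch of the shape I have in mind for the pipeline deliverable: extract from an HTTP API, transform for consistency, load into MySQL, with simple retry and logging. It is an illustration, not a required design; the endpoint, credentials, and the events table with its event_id / occurred_at / payload columns are all invented placeholders.

```python
"""Sketch of a minimal ETL run: extract -> transform -> load, with retry and logging.
All names (API URL, credentials, table, columns) are placeholders, not a spec."""
import json
import logging
import time

import mysql.connector
import requests

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(levelname)s %(message)s")
log = logging.getLogger("etl")

API_URL = "https://example.com/api/events"  # placeholder source
DB_CONFIG = {"host": "localhost", "user": "etl", "password": "...", "database": "analytics"}

# Placeholder target table, assumed created separately, e.g.:
#   CREATE TABLE events (
#     event_id    BIGINT   NOT NULL,
#     occurred_at DATETIME NOT NULL,
#     payload     JSON,
#     PRIMARY KEY (event_id, occurred_at)
#   ) PARTITION BY RANGE COLUMNS (occurred_at) (...);
# (MySQL requires the partitioning column in every unique key,
#  hence the composite primary key.)

def with_retries(fn, attempts=3, backoff_seconds=5):
    """Run fn(), retrying on any failure with a fixed pause between attempts."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            log.exception("attempt %d/%d failed", attempt, attempts)
            if attempt == attempts:
                raise
            time.sleep(backoff_seconds)

def extract():
    resp = requests.get(API_URL, timeout=30)
    resp.raise_for_status()
    return resp.json()

def transform(rows):
    # Keep only well-formed rows and normalise them into load-ready tuples.
    return [
        (r["id"], r["timestamp"], json.dumps(r.get("attributes", {})))
        for r in rows
        if "id" in r and "timestamp" in r
    ]

def load(rows):
    conn = mysql.connector.connect(**DB_CONFIG)
    try:
        cur = conn.cursor()
        # Upsert so a re-run of the same batch stays idempotent.
        cur.executemany(
            "INSERT INTO events (event_id, occurred_at, payload) "
            "VALUES (%s, %s, %s) "
            "ON DUPLICATE KEY UPDATE payload = VALUES(payload)",
            rows,
        )
        conn.commit()
        log.info("loaded %d rows", cur.rowcount)
    finally:
        conn.close()

if __name__ == "__main__":
    load(transform(with_retries(extract)))
```

Scheduling (cron, an Airflow DAG, or whatever you propose) is deliberately left out of the sketch; what matters to me is that each step logs, retries, and can be re-run safely.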
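
And a correspondingly small sketch of the “CLI checks showing pipeline health” item: exit non-zero when the newest row is older than a threshold, so cron or Airflow can alert on it. Same placeholder table and credentials as above; timestamps are assumed to be stored in UTC.

```python
"""Sketch of a CLI freshness check against the placeholder events table.
Exits 0 when data is fresh, 1 when it is stale or missing."""
import sys
from datetime import datetime, timedelta, timezone

import mysql.connector

DB_CONFIG = {"host": "localhost", "user": "etl", "password": "...", "database": "analytics"}
MAX_AGE = timedelta(hours=6)  # alert threshold; tune to the load schedule

def latest_row_age():
    conn = mysql.connector.connect(**DB_CONFIG)
    try:
        cur = conn.cursor()
        cur.execute("SELECT MAX(occurred_at) FROM events")
        (latest,) = cur.fetchone()
        if latest is None:
            return None
        # Connector returns a naive datetime; we assume UTC storage.
        return datetime.now(timezone.utc) - latest.replace(tzinfo=timezone.utc)
    finally:
        conn.close()

if __name__ == "__main__":
    age = latest_row_age()
    if age is None:
        print("STALE: events table is empty")
        sys.exit(1)
    if age > MAX_AGE:
        print(f"STALE: newest row is {age} old (threshold {MAX_AGE})")
        sys.exit(1)
    print(f"OK: newest row is {age} old")
```

In the real build I’d expect checks like this to read from a dedicated run-audit table rather than the data itself, but the exit-code contract is the part I care about.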