I need a production-ready ETL workflow that pulls data from several operational databases and a pair of REST APIs, cleans and standardises it, then delivers it to an analytics-ready target. The warehousing layer is already in place; what's missing is the automated extraction, transformation and load logic that can run unattended and be trusted in a live environment.

The core tasks are:

• Secure connectivity to PostgreSQL, MySQL and the specified APIs, with credential management handled through environment variables or a secrets manager (a sketch of what I mean follows below).
• Repeatable, version-controlled transformations written in the language or framework you know best: Python with Pandas, SQL-heavy procedures, or Spark are all acceptable as long as the code is clean and well documented.
• Daily incremental loads plus on-demand backfills, orchestrated through a scheduler such as Airflow, Prefect or an equivalent you propose (see the watermark sketch below).
• Robust logging, error handling and basic data-lineage metadata so every row can be traced from source to target (see the lineage sketch below).

Acceptance criteria

1. End-to-end ETL job runnable via Docker (or a similar container setup) with a single command.
2. Source-controlled repo and README explaining setup, configuration and run steps.
3. Short hand-over call or screencast walking me through design decisions and confirming the pipeline runs cleanly on my side.

When you reply, please concentrate on your direct experience building database- and API-driven ETL processes; that evidence will weigh heaviest in my decision.
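To make the connectivity and credential-management expectation concrete, here is a minimal sketch of the pattern I have in mind, assuming Python with SQLAlchemy and requests. The environment-variable names (PG_DSN, MYSQL_DSN, SOURCE_API_TOKEN) and the helper function are placeholders of mine; the actual stack and naming are entirely your choice.

```python
import os

import requests
from sqlalchemy import create_engine

# Credentials come from the environment (or are injected by a secrets manager);
# nothing sensitive is committed to the repository.
PG_URL = os.environ["PG_DSN"]        # e.g. postgresql+psycopg2://user:pass@host:5432/db
MYSQL_URL = os.environ["MYSQL_DSN"]  # e.g. mysql+pymysql://user:pass@host:3306/db
API_TOKEN = os.environ["SOURCE_API_TOKEN"]

# pool_pre_ping guards against stale connections during long-running jobs.
pg_engine = create_engine(PG_URL, pool_pre_ping=True)
mysql_engine = create_engine(MYSQL_URL, pool_pre_ping=True)


def fetch_api_page(base_url: str, endpoint: str, params: dict) -> dict:
    """Fetch one page from a source REST API, failing loudly on HTTP errors."""
    resp = requests.get(
        f"{base_url}/{endpoint}",
        headers={"Authorization": f"Bearer {API_TOKEN}"},
        params=params,
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()
```

A secrets manager (AWS Secrets Manager, Vault or similar) injecting those variables at runtime is equally acceptable.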
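Likewise, "daily incremental loads plus on-demand backfills" means watermark-driven extraction along these lines. This is a sketch, not a prescribed design: the updated_at column, the function signatures and the pandas-based approach are assumptions for illustration, and whether it runs under Airflow, Prefect or something else is up to you.

```python
from datetime import datetime, timezone
from typing import Optional

import pandas as pd
from sqlalchemy import text
from sqlalchemy.engine import Engine


def extract_incremental(engine: Engine, table: str, since: datetime) -> pd.DataFrame:
    """Pull only rows changed since the given watermark timestamp."""
    # The table name comes from our own pipeline config, never from user input.
    query = text(f"SELECT * FROM {table} WHERE updated_at > :since")
    return pd.read_sql(query, engine, params={"since": since})


def run_daily(
    engine: Engine,
    table: str,
    last_watermark: datetime,
    backfill_from: Optional[datetime] = None,
) -> tuple[pd.DataFrame, datetime]:
    """One scheduled incremental run, or a backfill when an explicit start date is given."""
    since = backfill_from or last_watermark
    frame = extract_incremental(engine, table, since)
    # ...transform and load into the warehouse here...
    new_watermark = datetime.now(timezone.utc)
    return frame, new_watermark
```

Where the watermark itself is persisted (a metadata table, the orchestrator's state, or something else) is exactly the kind of design decision I'd like you to walk me through on the hand-over call.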
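Finally, "basic data-lineage metadata" means no more than tagging each batch so every loaded row can be traced back to its source; the column and logger names below are illustrative only.

```python
import logging
import uuid
from datetime import datetime, timezone

import pandas as pd

logger = logging.getLogger("etl")


def tag_lineage(df: pd.DataFrame, source_system: str, source_table: str) -> pd.DataFrame:
    """Attach minimal lineage metadata so each row is traceable to its source batch."""
    batch_id = str(uuid.uuid4())
    df = df.copy()
    df["_batch_id"] = batch_id
    df["_source_system"] = source_system
    df["_source_table"] = source_table
    df["_extracted_at"] = datetime.now(timezone.utc)
    logger.info(
        "Tagged %d rows from %s.%s with batch %s",
        len(df), source_system, source_table, batch_id,
    )
    return df
```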