I need a compact, production-ready data pipeline that pulls three distinct data sets from Amazon and stages them inside my AWS environment.

Scope of data
• Amazon SP-API – inventory data only
• Amazon Ads API – campaign data and keyword reports

Pipeline flow
Python code (your preferred framework is fine) should authenticate with each API, extract the data incrementally, land the raw files in Amazon S3, then load curated tables into Amazon Redshift. Please include sensible logging, error handling, and token-refresh logic so the process can run unattended. I've appended a few rough sketches at the end of this brief to show the shape I have in mind.

Preferred AWS stack
S3 will act as the data lake and Redshift as the warehouse. If you want to add Lambda, Glue, or Step Functions to orchestrate the jobs, I'm open to it as long as the setup stays lightweight for this MVP.

Deliverables
- Well-documented Python source code for each connector and loader
- S3 folder structure with sample output from a successful run
- Redshift DDL plus COPY statements (or equivalent) to create and populate the tables
- README covering setup, scheduling, and how to extend the pipeline

Acceptance will be based on a full refresh completing end-to-end in my AWS account, with row counts matching Amazon's own reports and no hard-coded credentials. Let me know if you have any questions about tokens or account limits and we'll get you unblocked quickly.
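Appendix – rough sketches (illustrative, not a spec)
To make the level of polish concrete, here are four short sketches. Treat every endpoint parameter, name, ARN, and path in them as a placeholder assumption to adapt.

First, token refresh. Both SP-API and the Ads API authenticate via Login with Amazon (LWA) refresh tokens, so a small cached-token helper is roughly what I expect; credentials would come from Secrets Manager or environment variables, never the source:

```python
# Rough sketch of a cached LWA token helper (assumption: the standard Login
# with Amazon refresh-token grant, which both APIs use). Credentials are
# passed in, not hard-coded.
import time
import requests

LWA_TOKEN_URL = "https://api.amazon.com/auth/o2/token"

class LwaToken:
    """Caches an LWA access token and refreshes it shortly before expiry."""

    def __init__(self, client_id: str, client_secret: str, refresh_token: str):
        self._form = {
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
            "client_secret": client_secret,
        }
        self._access_token = None
        self._expires_at = 0.0

    def get(self) -> str:
        # Refresh 60 s early so a long extract never hits a 401 mid-run.
        if self._access_token is None or time.time() > self._expires_at - 60:
            resp = requests.post(LWA_TOKEN_URL, data=self._form, timeout=30)
            resp.raise_for_status()
            body = resp.json()
            self._access_token = body["access_token"]
            self._expires_at = time.time() + body["expires_in"]
        return self._access_token
```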
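Next, one incremental extract landing raw JSON in S3 under a dated prefix (this also illustrates the folder layout I'd like). The SP-API inventory endpoint and its parameters are my best guess, and nextToken pagination plus retry/backoff are omitted for brevity:

```python
# Sketch of one incremental extract: inventory changes since the last run,
# landed as raw JSON under a dated S3 prefix. The endpoint, SSM watermark
# parameter, and bucket name are placeholder assumptions.
import datetime as dt
import boto3
import requests

SP_API_HOST = "https://sellingpartnerapi-na.amazon.com"  # NA region endpoint
BUCKET = "my-amazon-data-lake"                           # placeholder
WATERMARK = "/pipeline/inventory/watermark"              # placeholder

ssm = boto3.client("ssm")
s3 = boto3.client("s3")

def extract_inventory(token: "LwaToken", marketplace_id: str) -> str:
    """Pull inventory summaries changed since the watermark; return the S3 key."""
    since = ssm.get_parameter(Name=WATERMARK)["Parameter"]["Value"]
    resp = requests.get(
        f"{SP_API_HOST}/fba/inventory/v1/summaries",
        headers={"x-amz-access-token": token.get()},  # LwaToken from the sketch above
        params={
            "granularityType": "Marketplace",
            "granularityId": marketplace_id,
            "marketplaceIds": marketplace_id,
            "startDateTime": since,  # incremental: only changes since last run
        },
        timeout=60,
    )
    resp.raise_for_status()

    run_ts = dt.datetime.now(dt.timezone.utc)
    key = f"raw/sp_api/inventory/dt={run_ts:%Y-%m-%d}/inventory_{run_ts:%H%M%S}.json"
    s3.put_object(Bucket=BUCKET, Key=key, Body=resp.content)
    ssm.put_parameter(Name=WATERMARK, Value=run_ts.isoformat(),
                      Type="String", Overwrite=True)
    return key
```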
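Then the Redshift load, wrapped in Python so it can run from the same job. Note that COPY with FORMAT AS JSON 'auto' matches top-level JSON keys to column names, so nested API payloads will likely need a jsonpaths file or a flattening step first; columns, the IAM role ARN, and connection details here are illustrative only:

```python
# Sketch of the Redshift side: DDL for one curated table plus the COPY that
# populates it from the raw S3 prefix. All identifiers are placeholders.
import psycopg2

DDL = """
CREATE TABLE IF NOT EXISTS curated.inventory (
    sku              VARCHAR(64),
    asin             VARCHAR(20),
    fulfillable_qty  INTEGER,
    snapshot_date    DATE
);
"""

COPY_SQL = """
COPY curated.inventory
FROM 's3://my-amazon-data-lake/raw/sp_api/inventory/dt={run_date}/'
IAM_ROLE 'arn:aws:iam::123456789012:role/redshift-copy-role'
FORMAT AS JSON 'auto'
TIMEFORMAT 'auto';
"""

def load_inventory(run_date: str) -> None:
    # Connection details belong in Secrets Manager; shown inline for clarity.
    conn = psycopg2.connect(
        host="my-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439, dbname="analytics", user="loader", password="***",
    )
    try:
        with conn, conn.cursor() as cur:  # commits on success, rolls back on error
            cur.execute(DDL)
            cur.execute(COPY_SQL.format(run_date=run_date))
    finally:
        conn.close()
```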
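Finally, orchestration: a single Lambda per dataset on an EventBridge schedule would keep the MVP lightweight. This handler just strings the earlier sketches together:

```python
# Sketch of lightweight orchestration: one Lambda handler per dataset, invoked
# daily by an EventBridge schedule such as cron(0 6 * * ? *). LwaToken,
# extract_inventory, and load_inventory are from the sketches above; the secret
# name is a placeholder and ATVPDKIKX0DER is the US marketplace ID.
import json
import boto3

secrets = boto3.client("secretsmanager")

def handler(event, context):
    creds = json.loads(
        secrets.get_secret_value(SecretId="pipeline/lwa")["SecretString"]
    )
    token = LwaToken(creds["client_id"], creds["client_secret"],
                     creds["refresh_token"])
    raw_key = extract_inventory(token, marketplace_id="ATVPDKIKX0DER")
    run_date = raw_key.split("dt=")[1].split("/")[0]
    load_inventory(run_date)
    return {"status": "ok", "raw_key": raw_key}
```

The Ads connectors would follow the same pattern, with the extra wrinkle that Ads reports are asynchronous (request a report, poll for completion, then download), which is worth factoring into the retry and error-handling logic.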