I need an automated script that visits Walmart.ca under a specific store location (set by postal code) and captures every product available across all categories. The scraper has to run once every 24 hours, overwrite data in a structured file (CSV or JSON, your call), and handle the usual roadblocks on large sites such as store-selection prompts, pagination, lazy-loaded content, captchas, and IP throttling.

What I expect from you
• A clean, well-commented Python solution. Scrapy, Selenium, Playwright, or a comparable stack is fine as long as it runs headless and is low-maintenance.
• A simple config file or .env so I can change the target postal code or output path without touching the code.
• A scheduler (cron job, Windows Task Scheduler, or a lightweight cloud function) that calls the scraper daily.
• One sample dataset generated from a test run against the live site so I can confirm that the fields, category coverage, and location scoping work correctly.
• Brief hand-off notes covering installation, dependencies, and how to add fields later.

Acceptance criteria
1. Running the script with the provided postal code produces a complete dump of all Walmart.ca categories for that store.
2. The job finishes without manual captcha intervention and respects the site's rate limits.
3. The daily schedule fires automatically and writes/updates the output file.
4. The code passes a quick review for readability and dependency hygiene.

That's it. If you've built resilient e-commerce scrapers before, this should be straightforward.
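To make the config requirement concrete, here is a minimal sketch of the kind of .env handling the brief asks for. It uses only the standard library (no python-dotenv dependency), and the variable names POSTAL_CODE and OUTPUT_PATH are illustrative placeholders, not a fixed contract:

```python
import os

def load_env(path=".env"):
    """Parse KEY=VALUE lines from a .env-style file into a dict.

    Blank lines, comments, and malformed lines are skipped, so the file
    stays forgiving to hand-edits.
    """
    config = {}
    if os.path.exists(path):
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line or line.startswith("#") or "=" not in line:
                    continue
                key, _, value = line.partition("=")
                config[key.strip()] = value.strip()
    return config

def get_setting(config, key, default=None):
    """Resolution order: real environment variable, then .env file, then default."""
    return os.environ.get(key, config.get(key, default))

# Example .env contents (hypothetical):
#   POSTAL_CODE=M5V 3L9
#   OUTPUT_PATH=walmart_products.csv
```

For the daily schedule, a single crontab entry along the lines of `0 3 * * * /usr/bin/python3 /opt/scraper/run.py` (path hypothetical) would satisfy the "fires automatically" criterion on Linux; Windows Task Scheduler or a cloud function timer trigger are equivalent options.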