I need a reliable scraping solution that pulls a specific set of data—different from the usual contact details, product listings, or news articles—from 1 to 5 target websites. The scrape has to run automatically every day and drop the results into an easily reusable format (CSV or a lightweight database is fine).

You are free to choose the stack you are most comfortable with; Python plus BeautifulSoup, Scrapy, or Selenium is perfectly acceptable as long as it handles the sites’ structure and any client-side rendering they may use. A small scheduling layer (cron job, Windows Task Scheduler, or a serverless trigger such as AWS Lambda) will be required so I don’t have to start the job manually.

Deliverables:
• Clean, well-commented source code for each site
• A daily schedule set up and proven to run without manual intervention
• Sample output that demonstrates all requested data fields are captured
• Simple README-style instructions so I can adjust credentials, target URLs, or run the scraper locally if needed

I will supply the exact URLs and field mapping once we begin. Please ensure the solution is robust to minor layout changes and complies with each site’s robots.txt and terms of service.
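To illustrate the expected shape of the pipeline, here is a minimal stdlib-only sketch of the extract-and-write step: parse HTML, pull out one field, and emit CSV. The tag name, the `price` class, and the field mapping are all placeholders; the real selectors and fields will come from the URL list and mapping supplied at kickoff, and a production version would use BeautifulSoup or Scrapy selectors instead of a hand-rolled parser.

```python
import csv
import io
from html.parser import HTMLParser

class FieldExtractor(HTMLParser):
    """Collects the text of every <span> whose class matches target_class.
    Hypothetical example only -- the real tag/class comes from the field mapping."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self._capture = False
        self.values = []

    def handle_starttag(self, tag, attrs):
        if tag == "span" and dict(attrs).get("class") == self.target_class:
            self._capture = True

    def handle_data(self, data):
        if self._capture:
            self.values.append(data.strip())
            self._capture = False

def scrape_to_csv(html, field_name, target_class):
    """Extract one field from an HTML document and return it as CSV text."""
    parser = FieldExtractor(target_class)
    parser.feed(html)
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow([field_name])  # header row named after the field
    for value in parser.values:
        writer.writerow([value])
    return buf.getvalue()

sample = '<div><span class="price">19.99</span><span class="price">4.50</span></div>'
print(scrape_to_csv(sample, "price", "price"))
```

For the daily run, a crontab entry such as `0 6 * * * /usr/bin/python3 /path/to/scraper.py` would trigger the script every morning at 06:00; the Lambda equivalent is an EventBridge schedule rule invoking the function.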