I need a Python-based automation that will crawl selected U.S. government sites carrying construction bids, RFPs, and RFQs; extract the bid title, submission deadline, and published contact details; and drop those fields into a clean CSV or Excel file.

Once you hand it over I want to be able to run it myself on a weekly schedule, so please keep external dependencies light, document any required libraries (requests/BeautifulSoup, Selenium, or Scrapy—whatever you choose), and include clear instructions for adding or removing target URLs in the future. Error handling for time-outs or layout changes is important; a simple log file will do.

Deliverables:
• Well-commented Python script(s) producing CSV/XLSX output
• Sample run with at least one government source showing the three requested fields correctly populated
• README that explains setup, weekly scheduling, and how to extend the crawler

If you can also suggest a cron or Task Scheduler snippet for the weekly trigger, that would be a plus.
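To make the "light dependencies" requirement concrete, here is a minimal stdlib-only sketch of the shape such a crawler could take. The URL, the table layout, and the cell classes (`title`, `deadline`, `contact`) are placeholder assumptions for illustration, not any real government site's markup; each real target would need its own parsing rules added to the URL list:

```python
# Sketch: fetch one bid-listing page, extract title / deadline / contact,
# write them to CSV, and log fetch or parse failures to a simple log file.
import csv
import logging
import urllib.request
from html.parser import HTMLParser

logging.basicConfig(filename="crawler.log", level=logging.INFO,
                    format="%(asctime)s %(levelname)s %(message)s")

# Placeholder target list -- add or remove URLs here.
# Assumed (hypothetical) markup per bid:
# <tr><td class="title">..</td><td class="deadline">..</td><td class="contact">..</td></tr>
TARGET_URLS = ["https://example.gov/bids"]

class BidTableParser(HTMLParser):
    """Collects {title, deadline, contact} dicts from matching <td> cells."""
    def __init__(self):
        super().__init__()
        self.rows, self._row, self._field = [], {}, None

    def handle_starttag(self, tag, attrs):
        if tag == "td":
            cls = dict(attrs).get("class", "")
            if cls in ("title", "deadline", "contact"):
                self._field = cls

    def handle_data(self, data):
        if self._field:
            self._row[self._field] = data.strip()

    def handle_endtag(self, tag):
        if tag == "td":
            self._field = None
        elif tag == "tr" and self._row:
            self.rows.append(self._row)
            self._row = {}

def parse_bids(html):
    parser = BidTableParser()
    parser.feed(html)
    return parser.rows

def crawl(urls, out_path="bids.csv"):
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["title", "deadline", "contact"])
        writer.writeheader()
        for url in urls:
            try:
                with urllib.request.urlopen(url, timeout=30) as resp:
                    rows = parse_bids(resp.read().decode("utf-8", "replace"))
            except Exception as exc:  # time-outs, HTTP errors, DNS failures
                logging.error("failed to fetch %s: %s", url, exc)
                continue
            if not rows:
                # Zero parsed rows usually means the site's layout changed.
                logging.warning("no bids parsed from %s (layout change?)", url)
            writer.writerows(rows)

if __name__ == "__main__":
    crawl(TARGET_URLS)
```

For the weekly trigger, a crontab entry along the lines of `0 7 * * 1 /usr/bin/python3 /path/to/crawler.py` would run it every Monday at 07:00; on Windows, Task Scheduler's "weekly" trigger pointing at the same script is the equivalent.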