I need a reliable scraping script that can pull structured data, primarily HTML tables, from a target website and load the results straight into a database. The final dataset must be complete, tidy, and ready for querying. While the focus is on table-based information, the site also contains a few supporting images that I want captured in JPEG format and referenced correctly in the stored data.

I'm flexible on language and tooling. Python with BeautifulSoup or Scrapy, Node with Cheerio, or similar frameworks are all welcome, so long as the code is clean, well-commented, and can be rerun without manual tweaks when the site updates.

Deliverables:
• A working scraper with source code
• A populated database (MySQL, PostgreSQL, or SQLite — use what best fits; include the schema)
• Stored JPEG images in a logical folder structure, with paths reflected in the database
• Brief setup/run instructions and any required dependencies

I'll test by running the script on my side and checking that all table rows, columns, and image links import correctly. Looking forward to your approach and timeline.
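
To make the expected shape of the deliverable concrete, here is a minimal sketch of the table-to-database flow using BeautifulSoup and SQLite. Everything specific in it is assumed for illustration: the HTML snippet, the `products` table id, the column names, and the `images/` folder convention are all hypothetical stand-ins for whatever the real site and schema turn out to be, and the real scraper would fetch live pages (e.g. with `requests`) and actually download the JPEGs instead of parsing a literal string.

```python
import sqlite3
from bs4 import BeautifulSoup

# Hypothetical HTML standing in for the target page; the real scraper
# would fetch this with an HTTP client such as requests.
HTML = """
<table id="products">
  <tr><th>Name</th><th>Price</th><th>Image</th></tr>
  <tr><td>Widget</td><td>9.99</td><td><img src="/img/widget.jpg"></td></tr>
  <tr><td>Gadget</td><td>4.50</td><td><img src="/img/gadget.jpg"></td></tr>
</table>
"""

def scrape_table(html):
    """Parse each data row into a (name, price, local_image_path) tuple."""
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for tr in soup.select("table#products tr")[1:]:  # skip the header row
        name, price, img_cell = tr.find_all("td")
        src = img_cell.find("img")["src"]
        # The real script would download src and save it as a JPEG under
        # images/; the database stores only the local relative path.
        local_path = "images/" + src.rsplit("/", 1)[-1]
        rows.append((name.get_text(strip=True),
                     float(price.get_text(strip=True)),
                     local_path))
    return rows

def load(rows, db_path=":memory:"):
    """Create the schema if needed and bulk-insert the scraped rows."""
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS products (
        id INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        price REAL,
        image_path TEXT)""")
    conn.executemany(
        "INSERT INTO products (name, price, image_path) VALUES (?, ?, ?)",
        rows)
    conn.commit()
    return conn

if __name__ == "__main__":
    conn = load(scrape_table(HTML))
    for row in conn.execute("SELECT name, price, image_path FROM products"):
        print(row)
```

Keeping the parse step (`scrape_table`) separate from the load step (`load`) is what makes the script re-runnable when the site changes: only the selectors need updating, and the schema plus insert logic stay untouched.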