I need a clean, well-commented Python script that crawls a specific publisher's website, collects the full product description text, and grabs every related image and PDF link it finds. Once gathered, the script should compile everything into a single CSV file: one row per product, with columns for the textual description, direct image URLs, and direct PDF URLs.

Core expectations

• Language & libs: Python 3.x using requests/BeautifulSoup is ideal; if the site's layout demands Selenium or Scrapy, that's perfectly fine. Just keep external dependencies minimal and note them clearly.
• Data targets: product description text (not generic articles or blog posts), plus all associated images and PDFs. The files themselves don't have to be downloaded; live links in the CSV are enough.
• Output structure: a UTF-8 CSV with headers I can tweak easily. The script should overwrite or append gracefully when rerun.
• Reusability: the website URL and any login or pagination parameters need to sit at the top of the file as simple variables so I can repoint or extend the crawl later (see the sketch at the end of this brief for the rough shape I have in mind).
• Runtime: command-line execution on macOS or Linux, with progress logs printed to the console.

Hand-off items

1. The .py script, fully commented.
2. A short README showing setup (pip install…), usage, and expected runtime.
3. A sample CSV produced from a small test run so I can verify column order and data integrity.

I'm happy to answer any structural questions about the target site before you start.
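
For reference, here is a rough sketch of the shape I'm imagining, so there's no ambiguity about the config-at-the-top and CSV-output requirements. Every URL, CSS selector, and filename below (BASE_URL, a.product-link, div.product-description, products.csv) is an illustrative placeholder, since the real site's markup isn't specified here:

```python
#!/usr/bin/env python3
"""Crawl a publisher site for product descriptions plus image/PDF links,
writing one row per product to a UTF-8 CSV. Minimal sketch only; all
URLs and selectors are placeholders to be swapped for the real site."""

import csv
import sys
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup

# --- Configuration: repoint these to retarget or extend the crawl ----------
BASE_URL = "https://example-publisher.com"        # placeholder target site
LISTING_PATH = "/products?page={page}"            # placeholder pagination
MAX_PAGES = 3                                     # small test run
OUTPUT_CSV = "products.csv"
CSV_HEADERS = ["description", "image_urls", "pdf_urls"]  # tweak freely

session = requests.Session()
session.headers["User-Agent"] = "product-crawler/0.1"


def product_links(listing_url):
    """Yield absolute product-page URLs found on one listing page."""
    resp = session.get(listing_url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    for a in soup.select("a.product-link"):  # placeholder selector
        if a.get("href"):
            yield urljoin(listing_url, a["href"])


def scrape_product(url):
    """Return (description, image_urls, pdf_urls) for one product page."""
    resp = session.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    desc = soup.select_one("div.product-description")  # placeholder selector
    description = desc.get_text(" ", strip=True) if desc else ""
    images = [urljoin(url, img["src"]) for img in soup.select("img[src]")]
    pdfs = [urljoin(url, a["href"]) for a in soup.select("a[href]")
            if a["href"].lower().endswith(".pdf")]
    return description, images, pdfs


def main():
    # "w" mode means reruns overwrite cleanly; switch to "a" plus a
    # header-exists check if append behavior is preferred instead.
    with open(OUTPUT_CSV, "w", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        writer.writerow(CSV_HEADERS)
        for page in range(1, MAX_PAGES + 1):
            listing_url = urljoin(BASE_URL, LISTING_PATH.format(page=page))
            print(f"[page {page}] {listing_url}", file=sys.stderr)
            for product_url in product_links(listing_url):
                print(f"  scraping {product_url}", file=sys.stderr)
                description, images, pdfs = scrape_product(product_url)
                # Join multi-valued fields so each product stays on one row.
                writer.writerow([description,
                                 " | ".join(images),
                                 " | ".join(pdfs)])
    print(f"Done. Output written to {OUTPUT_CSV}", file=sys.stderr)


if __name__ == "__main__":
    main()
```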
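
Setup and a small test run would then look roughly like this (the script name is again a placeholder):

```
pip install requests beautifulsoup4
python crawl_products.py
```

The resulting CSV should open with the header row description,image_urls,pdf_urls, with the multi-valued columns joined by a separator such as " | " so each product stays on a single row. I'm happy to adjust the headers, separator, or column order once we agree on the real field list.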