I need a clean, well-documented script that automatically gathers publicly available biographies of football players directly from their official websites and exports everything into a single CSV file.

The focus is strictly on biographical details (no match statistics or transfer news for now), so the crawler should identify, parse, and normalise information such as:

• Full name
• Date of birth
• Nationality
• Position
• Current club
• Height/weight (where listed)
• Any notable career highlights that appear on the player's own site

Because these pages vary in structure, the code should be resilient: graceful error handling, user-agent rotation, and clear selectors or XPath rules that are easy for me to extend later. I'm comfortable running Python, so libraries like Requests, BeautifulSoup, Selenium, or Scrapy are welcome; please choose the stack that gives the best balance of speed and maintainability.

Deliverables
• A runnable script (with a brief README)
• The resulting CSV generated from a short test run (5–10 players is fine for proof)
• Comments in the code explaining each major step

Acceptance criteria
• The script executes from the command line without additional setup beyond the documented requirements
• All target fields are populated where the source site provides them, with missing values left blank rather than throwing errors
• The output CSV is UTF-8 encoded and uses standard comma separation with no stray delimiters

If you have prior experience scraping sports or similarly dynamic sites, let me know; otherwise, clear evidence of robust scraping practices will do.
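To make the resilience requirement concrete, here is a minimal sketch of a fetch helper with round-robin user-agent rotation and graceful failure. It uses only the standard library (a Requests-based version would look nearly identical); the user-agent strings and the `fetch`/`next_user_agent` names are illustrative placeholders, not part of the brief.

```python
import itertools
import urllib.error
import urllib.request

# Placeholder user-agent pool; replace with real, current browser strings.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) ExampleBot/1.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 13_0) ExampleBot/1.0",
    "Mozilla/5.0 (X11; Linux x86_64) ExampleBot/1.0",
]
_ua_cycle = itertools.cycle(USER_AGENTS)


def next_user_agent() -> str:
    """Return the next user-agent string in round-robin order."""
    return next(_ua_cycle)


def fetch(url: str, timeout: float = 10.0):
    """Fetch a page with a rotated user-agent.

    Returns the decoded body, or None on any network/HTTP failure so the
    caller can record a blank row instead of crashing the whole run.
    """
    req = urllib.request.Request(url, headers={"User-Agent": next_user_agent()})
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.read().decode("utf-8", errors="replace")
    except (urllib.error.URLError, TimeoutError):
        return None
```

Returning `None` on failure (rather than raising) matches the acceptance criterion that missing data be left blank instead of aborting the crawl.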
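For the "easy to extend later" requirement, one workable pattern is a per-site table of CSS selector rules, so adding coverage for a new player site means adding one dict rather than new code. This is a sketch using BeautifulSoup; the site key and selectors are hypothetical examples, not real sites.

```python
from bs4 import BeautifulSoup

# Hypothetical per-site selector rules: one dict per site, mapping each
# target field to a CSS selector. Extending coverage = adding a dict entry.
SITE_RULES = {
    "example-player-site": {
        "full_name": "h1.player-name",
        "date_of_birth": ".bio .dob",
        "nationality": ".bio .nationality",
    },
}


def parse_bio(html: str, site: str) -> dict:
    """Apply a site's selector rules; unmatched selectors yield blank values."""
    soup = BeautifulSoup(html, "html.parser")
    row = {}
    for field, selector in SITE_RULES[site].items():
        node = soup.select_one(selector)
        row[field] = node.get_text(strip=True) if node else ""
    return row
```

Because unmatched selectors map to empty strings, a page that omits a field still produces a valid row, which is what the acceptance criteria ask for.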
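The CSV acceptance criteria (UTF-8, standard commas, blanks for missing values) can be met directly with the standard library's `csv.DictWriter`, as in this sketch. The exact field list is an assumption drawn from the brief and would be adjusted as sites are added.

```python
import csv

# Assumed column set matching the fields requested in the brief.
FIELDS = [
    "full_name", "date_of_birth", "nationality", "position",
    "current_club", "height", "weight", "career_highlights",
]


def write_players_csv(players: list, path: str) -> None:
    """Write player dicts to a UTF-8 CSV; missing keys become empty cells."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(
            f,
            fieldnames=FIELDS,
            restval="",             # blank cell for any missing field
            extrasaction="ignore",  # drop keys outside the agreed columns
        )
        writer.writeheader()
        writer.writerows(players)
```

`DictWriter` also handles quoting automatically, so commas inside a highlights string do not produce stray delimiters in the output.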