I’m looking for a clean, well-documented Python solution that automatically crawls https://allegro.pl/magazyn-allegro, pulls every company name together with its tax identification number (NIP), and lets me run the process whenever I want from a Windows machine. The emphasis is on delivering the actual scraper program rather than a one-off data file; I need to own the code so future runs are effortless. A simple CSV export is fine for the saved results, but the core deliverable is the script (or a small packaged .exe) plus concise setup instructions.

To keep expectations clear, the total project value is already agreed at €200. I’m happy to break that into sensible milestones: the first on successful data capture from a representative set of pages, the final on full site coverage and hand-over of commented source code.

Deliverables:
• Ready-to-run Python script (Windows-friendly)
• README covering dependencies and usage
• Sample CSV demonstrating company name + NIP correctly captured

The code should respect polite scraping practices (reasonable delays, error handling) and cope with pagination or dynamic content if present. Once I can run it locally, generate the CSV, and cross-check a handful of entries, the job’s done.
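To make the "NIP correctly captured" acceptance check concrete, here is a minimal sketch of the kind of extraction/validation helper the script could include. The regex and function names are my own illustration, not part of any agreed spec; the only hard fact assumed is the public Polish NIP format: 10 digits, with the last digit a checksum of the first nine using weights (6, 5, 7, 2, 3, 4, 5, 6, 7) modulo 11.

```python
import re

# Polish NIP: 10 digits; the 10th is a checksum over the first nine
# using these weights, taken modulo 11.
NIP_WEIGHTS = (6, 5, 7, 2, 3, 4, 5, 6, 7)

# Matches either the common grouped form 123-456-32-18 (dashes or spaces)
# or a bare 10-digit run. Illustrative only; real pages may differ.
NIP_RE = re.compile(r"\b\d{3}[- ]?\d{3}[- ]?\d{2}[- ]?\d{2}\b|\b\d{10}\b")


def normalize_nip(raw: str) -> str:
    """Strip separators so '123-456-32-18' and '1234563218' compare equal."""
    return re.sub(r"[^0-9]", "", raw)


def is_valid_nip(raw: str) -> bool:
    """Check length and the mod-11 checksum of a candidate NIP."""
    digits = normalize_nip(raw)
    if len(digits) != 10:
        return False
    checksum = sum(w * int(d) for w, d in zip(NIP_WEIGHTS, digits)) % 11
    return checksum == int(digits[9])


def extract_nips(text: str) -> list[str]:
    """Return all checksum-valid, normalized NIPs found in a text blob."""
    return [normalize_nip(m) for m in NIP_RE.findall(text) if is_valid_nip(m)]
```

The actual crawling side (requests or Selenium depending on whether the pages are static or JavaScript-rendered, plus delays and pagination handling) is left to the contractor, since the page structure isn't specified here; a checksum filter like the above simply keeps phone numbers and other digit runs out of the CSV.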