I need to pull every business listing from one specific website into an initial database for our new directory. The target pages are publicly accessible but spread across multiple category and pagination levels, so the script has to crawl all of them, follow each profile URL, and extract the complete details.

Fields required for every listing:
• Business name
• Full contact information (address and phone)
• A concise description of services
• Any social-media links that appear
• The main web URL
• An email address, when one is shown

Deliver the final dataset as a single, tidy Excel file, cleaned of duplicates and with consistent field headers so we can import it straight into our back end. Build the scraper in whichever stack you know will stay reliable (Python + BeautifulSoup/Scrapy, Node + Puppeteer, etc.); just keep the code readable so we can rerun it later. Use polite request throttling or rotating headers to avoid being blocked, and document any required environment variables or libraries.

Acceptance will be based on:
• Complete coverage of every listing page on the site
• All specified fields present in the sheet, blank only when truly unavailable
• No duplicate rows and accurate alignment of data to the correct columns
• Delivery of both the Excel file and the runnable script with brief usage notes
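For reference, the kind of skeleton we have in mind could look like the sketch below. It assumes Python with requests + BeautifulSoup (pandas + openpyxl for the Excel export); the `BASE_URL`, User-Agent string, delay value, and every CSS selector are placeholders, since they depend entirely on the target site's actual markup and are not specified here.

```python
"""Minimal scraper sketch: throttled fetch -> per-profile parse -> dedupe -> Excel.
All selectors and URLs below are hypothetical placeholders."""
import time

import requests
from bs4 import BeautifulSoup

BASE_URL = "https://example.com"  # placeholder target site
HEADERS = {"User-Agent": "directory-import-bot/0.1 (contact: ops@ourco.example)"}
DELAY_SECONDS = 2  # polite throttling between requests

FIELDS = ["name", "address", "phone", "description",
          "social_links", "website", "email"]
SOCIAL_HOSTS = ("facebook.", "twitter.", "linkedin.", "instagram.")


def fetch(url):
    """GET one page with a fixed delay; returns a parsed soup."""
    time.sleep(DELAY_SECONDS)
    resp = requests.get(url, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return BeautifulSoup(resp.text, "html.parser")


def parse_profile(soup):
    """Extract one listing's fields; CSS selectors are placeholders."""
    def text(sel):
        node = soup.select_one(sel)
        return node.get_text(strip=True) if node else ""

    return {
        "name": text(".business-name"),
        "address": text(".address"),
        "phone": text(".phone"),
        "description": text(".services"),
        "social_links": "; ".join(
            a["href"] for a in soup.select("a[href]")
            if any(h in a["href"] for h in SOCIAL_HOSTS)
        ),
        "website": text(".website"),
        "email": text(".email"),
    }


def dedupe(rows):
    """Drop duplicate listings, keyed on (lowercased name, phone)."""
    seen, out = set(), []
    for row in rows:
        key = (row["name"].lower(), row["phone"])
        if key not in seen:
            seen.add(key)
            out.append(row)
    return out


def main():
    # Real crawl logic (category pages -> pagination -> profile URLs)
    # goes here; the discovery selectors depend on the site's markup.
    profile_urls = []  # placeholder
    rows = [parse_profile(fetch(url)) for url in profile_urls]

    import pandas as pd  # requires openpyxl for .xlsx output
    pd.DataFrame(dedupe(rows), columns=FIELDS).to_excel(
        "listings.xlsx", index=False)


if __name__ == "__main__":
    main()
```

The parse and dedupe steps are pure functions, so they can be unit-tested against saved HTML fixtures before the crawler ever touches the live site.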