I want to automate the collection of articles that analyse why startup businesses collapse. The first phase covers ten sites, mainly well-known business news outlets and respected entrepreneur blogs, and should gather up to 1,000 relevant pieces in total. To keep the workflow clean, the scraper must create a standalone configuration and output file for each site; once the process proves reliable I will quickly expand the list.

Content focus

Articles must revolve around startup failure themes, specifically: financial issues, market competition, management mistakes, operational issues, legal issues, wrong timing, and tech issues. Feel free to refine keyword variations, but stay within that conceptual scope so the data matches the structure laid out in the document I attached.

Data requirements

• Output for every run goes into a simple text-based format (raw JSON or plain text) mirroring the schema in the attachment.
• One file per site; no mixing sources.
• Code should be easy to re-point at new domains when I add them later.

When you reply, let me know:

1. The stack or libraries you intend to use (Python + BeautifulSoup, Scrapy, Puppeteer, etc.).
2. How many calendar days you need per site and for the full batch.
3. A clear price, broken down per site and for all ten together.
4. Any limits or anti-bot measures you anticipate and how you will handle them.

Delivery is the running scraper code, the first set of 1,000 articles organised as described, and concise setup notes so I can extend it.
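To make the per-site layout concrete, here is a minimal Python sketch of what I have in mind: one standalone config entry per site, one JSON output file per site, and a keyword filter grouped by the failure themes above. The site names, URLs, and keyword lists are illustrative assumptions only, not a fixed specification; the bidder is free to refine them.

```python
# Sketch of the requested structure. Site entries and keyword lists
# below are hypothetical placeholders, not the final configuration.
import json

# One standalone config per site; adding a new domain later should
# only mean appending another entry like these.
SITE_CONFIGS = {
    "example-business-news": {
        "start_url": "https://news.example.com/startups",
        "output_file": "example-business-news.json",
    },
    "example-founder-blog": {
        "start_url": "https://blog.example.org/postmortems",
        "output_file": "example-founder-blog.json",
    },
}

# Keyword variations grouped by the themes named in the brief
# (financial, market, management, operational, legal, timing, tech).
FAILURE_THEMES = {
    "financial": ["ran out of money", "cash flow", "funding dried up"],
    "market": ["competition", "no market need", "market fit"],
    "management": ["founder conflict", "mismanagement", "poor leadership"],
    "operational": ["scaling problems", "operational issues", "supply chain"],
    "legal": ["lawsuit", "regulation", "compliance"],
    "timing": ["too early", "too late", "wrong timing"],
    "tech": ["technical debt", "outage", "platform shift"],
}

def matched_themes(text: str) -> list[str]:
    """Return the failure themes whose keywords appear in the text."""
    lowered = text.lower()
    return [
        theme
        for theme, keywords in FAILURE_THEMES.items()
        if any(kw in lowered for kw in keywords)
    ]

def save_articles(site: str, articles: list[dict]) -> None:
    """Write one JSON file per site, mirroring the one-file-per-site rule."""
    with open(SITE_CONFIGS[site]["output_file"], "w", encoding="utf-8") as f:
        json.dump(articles, f, ensure_ascii=False, indent=2)
```

An article would be kept only if `matched_themes` returns at least one theme, and each kept article goes into the output file of the site it came from.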