I have a working Python + Flask CLI application that ingests GIS data (most often large SHP files) and runs a series of automated fibre-planning analyses. Right now every command executes one after another, so the tool spends far too much time waiting on itself, especially during the heaviest step: web-scraping supplementary data.

Your brief:

• Refactor the processing pipeline so tasks run in true batch mode and in parallel, replacing the current sequential flow.
• Streamline the scraping logic to cut total run-time; I'm open to multiprocessing, asyncio, queue workers or any approach that makes measurable sense.

Because the business relies on near-real-time results, I need improvements delivered ASAP. I'll consider the work complete when I can run the CLI, point it at a directory of SHP files, and see parallel workers kick in with clear logging, plus before-and-after benchmarks that prove the speed-up.

You'll be working directly in my existing Git repo. The codebase is clean and uses standard Python libraries along with Flask-CLI, Fiona, Shapely and Requests. Please bring experience with concurrency patterns, I/O-bound optimisation and, ideally, GIS data handling. If this is in your wheelhouse, let's get started right away.
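To make the expected shape of the refactor concrete, here is a minimal stdlib-only sketch of what I mean by "parallel workers with clear logging". Everything here is illustrative: `process_shapefile` is a hypothetical stand-in for the real Fiona/Shapely analysis and scraping step, and the worker count is a placeholder. A thread pool is shown because the bottleneck is I/O-bound; the actual solution may well use multiprocessing or asyncio instead.

```python
import concurrent.futures
import logging
import time

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(threadName)s %(message)s",
)
log = logging.getLogger("pipeline")


def process_shapefile(path):
    """Hypothetical stand-in for one pipeline task.

    In the real repo this would open the SHP with Fiona, run the
    Shapely analyses, and fetch supplementary data with Requests.
    Here we just simulate I/O-bound latency.
    """
    time.sleep(0.01)
    return str(path), "ok"


def run_batch(paths, max_workers=8):
    """Fan tasks out to a thread pool and log each one as it finishes."""
    results = {}
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(process_shapefile, p): p for p in paths}
        for fut in concurrent.futures.as_completed(futures):
            path, status = fut.result()
            log.info("finished %s -> %s", path, status)
            results[path] = status
    return results


if __name__ == "__main__":
    # With 20 simulated files and 8 workers, wall time should be a
    # fraction of the sequential 20 x 10 ms.
    start = time.perf_counter()
    out = run_batch([f"parcel_{i}.shp" for i in range(20)])
    log.info("%d files in %.3fs", len(out), time.perf_counter() - start)
```

The before-and-after benchmark I'm asking for can be as simple as the `time.perf_counter` timing above, run once against the current sequential flow and once against the parallel version.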