Python Parallel SHP Processor

Client: AI | Published: 14.01.2026

I already have a working Python + Flask CLI tool that processes GIS data (mostly large SHP files) and runs fibre-planning analysis on them. The problem is that everything runs one step after another, so the total execution time is much higher than it should be, especially during the web-scraping part, which is the slowest stage.

What I want is to speed things up by running tasks in parallel instead of sequentially. Ideally, I should be able to point the CLI at a folder full of SHP files and see multiple workers processing them at the same time, with clear logs showing what is running where. I'm open to whatever makes the most sense here (multiprocessing, asyncio, worker queues, or a hybrid approach) as long as it reduces total runtime in a measurable way. The scraping logic in particular needs optimizing, since it is mostly I/O-bound and currently wastes a lot of time waiting.

The end goals are:

- Optimize web-scraping time so jobs run faster
- See parallel execution in action
- Get clear logging per worker
- Have before/after timing numbers that prove the speed-ups
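For the CPU-bound per-file analysis, a process pool is the usual starting point. A minimal sketch of what this could look like, with all names hypothetical: `process_shp` stands in for the real fibre-planning analysis (here a `time.sleep` simulates the work), and the log format includes the worker process name so you can see which worker handled which file:

```python
import logging
import time
from concurrent.futures import ProcessPoolExecutor, as_completed
from pathlib import Path

# Per-worker logging: %(processName)s identifies which pool worker ran the job.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s [%(processName)s] %(message)s",
)
log = logging.getLogger(__name__)


def process_shp(path: str) -> tuple[str, float]:
    """Hypothetical stand-in for the real per-file analysis."""
    start = time.perf_counter()
    # ... load the shapefile and run the fibre-planning analysis here ...
    time.sleep(0.2)  # simulate CPU-bound work
    return path, time.perf_counter() - start


def run_parallel(folder: str, workers: int = 4) -> None:
    """Process every .shp file in `folder` across `workers` processes."""
    files = sorted(Path(folder).glob("*.shp"))
    t0 = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(process_shp, str(f)): f for f in files}
        for fut in as_completed(futures):
            path, elapsed = fut.result()
            log.info("done %s in %.2fs", path, elapsed)
    log.info("total wall time: %.2fs for %d files",
             time.perf_counter() - t0, len(files))
```

The `as_completed` loop logs each file as it finishes rather than in submission order, which makes the parallelism visible in the log stream.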
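For the I/O-bound scraping, asyncio lets a single worker overlap many network waits. A self-contained sketch under stated assumptions: the `fetch` coroutine is a hypothetical stand-in for a real HTTP request, and `asyncio.sleep` simulates the network latency so the before/after timing comparison can be reproduced offline:

```python
import asyncio
import time


async def fetch(url: str, latency: float = 0.1) -> str:
    """Hypothetical scraping request; asyncio.sleep simulates network wait."""
    await asyncio.sleep(latency)
    return f"payload from {url}"


async def scrape_sequential(urls: list[str]) -> tuple[list[str], float]:
    """Baseline: await each request one after another."""
    t0 = time.perf_counter()
    results = [await fetch(u) for u in urls]
    return results, time.perf_counter() - t0


async def scrape_concurrent(urls: list[str]) -> tuple[list[str], float]:
    """Optimized: launch all requests at once and gather the results."""
    t0 = time.perf_counter()
    results = await asyncio.gather(*(fetch(u) for u in urls))
    return results, time.perf_counter() - t0


urls = [f"https://example.com/page/{i}" for i in range(10)]
seq, t_seq = asyncio.run(scrape_sequential(urls))
con, t_con = asyncio.run(scrape_concurrent(urls))
print(f"sequential: {t_seq:.2f}s  concurrent: {t_con:.2f}s")
```

With ten simulated 0.1 s requests, the sequential run takes roughly the sum of the latencies while the concurrent run takes roughly the longest single latency, which is exactly the kind of before/after number the posting asks for.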