Python Library Task Orchestration

Client: AI | Published: 04.03.2026
Budget: $1,500

I need help steering an LLM through a realistic engineering workflow on an open-source Python library.

Your first step is to choose a mature, well-documented GitHub repo that ships as a library or package and can be cloned and built with nothing more exotic than standard pip dependencies. Once the repo is agreed upon, frame a single, well-bounded engineering assignment for the model: a small feature, a non-trivial refactor, or a robust test suite, something neither superficial nor sprawling.

From there, run an iterative review loop: prompt the model, inspect its merge request (MR), request changes, and continue until the contribution is truly production-ready. That means the patch must follow project style, cover edge cases, include unit and functional tests, update docs, and pass CI. Treat every round as a real code review, with meaningful commit messages and clear rationale.

When the code is merged (or would be merge-worthy), wrap up with a comparative critique of two distinct model answers, judging them on correctness, style, test coverage, performance, and review responsiveness. Finish by recording the conversation UUID and task summary in the provided spreadsheet.

Deliverables:
• GitHub repo link and a brief justification for its selection
• The final PR (or equivalent patch) demonstrating the completed task
• All dialogue with the model showing the review cycles
• A comparative evaluation of the two model attempts, with reasoned scoring across the agreed axes
• Spreadsheet entry containing the conversation UUID and task description

Proficiency with Python tooling (pytest, flake8, black, tox or poetry) will make the job smoother, and a solid eye for code quality is essential. A few illustrative sketches of the steps above follow.
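
A quick way to vet a candidate repo is to confirm it installs and its test suite runs in a fresh virtual environment. Below is a minimal sketch of such a smoke check; the local path `./candidate-repo` and the editable-install step are assumptions for illustration, not part of the brief:

```python
import subprocess
import sys

# Hypothetical local path to the cloned candidate repo (an assumption).
REPO_DIR = "./candidate-repo"

def smoke_check(repo_dir: str) -> bool:
    """Install the package in editable mode, then run its test suite."""
    steps = [
        # Editable install pulls in the declared pip dependencies.
        [sys.executable, "-m", "pip", "install", "-e", "."],
        # Quiet pytest run; a non-zero exit code means failures or errors.
        [sys.executable, "-m", "pytest", "-q"],
    ]
    for cmd in steps:
        if subprocess.run(cmd, cwd=repo_dir).returncode != 0:
            return False
    return True

if __name__ == "__main__":
    ok = smoke_check(REPO_DIR)
    print("repo builds and tests cleanly" if ok else "repo failed the smoke check")
```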
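
"Cover edge cases" in practice means tests that probe boundaries, not just the happy path. Here is a pytest sketch of that standard; `normalize_name` is a hypothetical function standing in for whatever the agreed assignment actually touches:

```python
import pytest

def normalize_name(raw: str) -> str:
    """Hypothetical function under test: trims and lowercases a name."""
    if not isinstance(raw, str):
        raise TypeError("raw must be a string")
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("name must not be empty or whitespace-only")
    return cleaned

@pytest.mark.parametrize(
    "raw, expected",
    [
        ("Alice", "alice"),      # happy path
        ("  Bob  ", "bob"),      # surrounding whitespace
        ("ÉLODIE", "élodie"),    # non-ASCII input
    ],
)
def test_normalize_name_valid(raw, expected):
    assert normalize_name(raw) == expected

def test_normalize_name_rejects_whitespace_only():
    with pytest.raises(ValueError):
        normalize_name("   ")

def test_normalize_name_rejects_non_string():
    with pytest.raises(TypeError):
        normalize_name(None)
```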
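
Because every review round should end with the patch passing style checks and CI, it helps to gate each iteration locally before replying to the model. A sketch using the tools the posting names; the flags shown are common defaults, not project mandates:

```python
import subprocess
import sys

# Checks mirroring the posting's toolchain: formatting, lint, tests.
CHECKS = [
    ["black", "--check", "."],  # fails if any file would be reformatted
    ["flake8", "."],            # style and simple static checks
    ["pytest", "-q"],           # full test suite, quiet output
]

def run_gate() -> int:
    for cmd in CHECKS:
        print(f"$ {' '.join(cmd)}")
        if subprocess.run(cmd).returncode != 0:
            print(f"FAILED: {' '.join(cmd)}")
            return 1
    print("all checks passed; this round is ready for review")
    return 0

if __name__ == "__main__":
    sys.exit(run_gate())
```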
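
For the comparative critique, a simple rubric keeps scoring honest across the five agreed axes. A sketch follows; the weights and the 0-10 scale are placeholder assumptions to be agreed with the client, not part of the brief:

```python
from dataclasses import dataclass

# The five axes named in the posting; the weights are illustrative assumptions.
WEIGHTS = {
    "correctness": 0.30,
    "style": 0.15,
    "test_coverage": 0.25,
    "performance": 0.15,
    "review_responsiveness": 0.15,
}

@dataclass
class Attempt:
    name: str
    scores: dict  # axis -> score on an assumed 0-10 scale

    def weighted_total(self) -> float:
        return sum(WEIGHTS[axis] * self.scores[axis] for axis in WEIGHTS)

# Example: two model attempts scored after the review cycles finish.
a = Attempt("model_a", {"correctness": 9, "style": 7, "test_coverage": 8,
                        "performance": 6, "review_responsiveness": 9})
b = Attempt("model_b", {"correctness": 8, "style": 9, "test_coverage": 6,
                        "performance": 8, "review_responsiveness": 7})

for attempt in sorted([a, b], key=Attempt.weighted_total, reverse=True):
    print(f"{attempt.name}: {attempt.weighted_total():.2f}")
```

Keeping per-axis scores alongside the weighted total makes the reasoned scoring in the deliverable easy to defend in writing.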
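
Finally, the spreadsheet entry pairs the conversation UUID with the task summary. The real sheet is provided by the client; this is only a local CSV sketch for keeping a copy of the same record, and the column names and placeholder UUID are assumptions:

```python
import csv
import datetime
from pathlib import Path

LOG = Path("task_log.csv")  # local copy; the actual spreadsheet is provided

def record_task(conversation_uuid: str, summary: str) -> None:
    """Append one (date, uuid, summary) row, writing a header on first use."""
    new_file = not LOG.exists()
    with LOG.open("a", newline="", encoding="utf-8") as fh:
        writer = csv.writer(fh)
        if new_file:
            writer.writerow(["date", "conversation_uuid", "task_summary"])
        writer.writerow(
            [datetime.date.today().isoformat(), conversation_uuid, summary]
        )

# The UUID comes from the model conversation itself; this value is a placeholder.
record_task("00000000-0000-0000-0000-000000000000",
            "Add edge-case handling and tests to the chosen library")
```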