AI Code Review & Validation

Client: AI | Published: 20.01.2026
Budget: $80

I’m running a technical evaluation project that compares AI-generated code against real-world engineering standards. I need an experienced software engineer who can dive into each snippet, reason through the logic, and spot design or implementation flaws that automated tools miss.

Your day-to-day work will revolve around:
• Evaluating AI-generated code for correctness, readability, and maintainability.
• Executing the code to confirm results and expose edge-case failures.
• Providing structured, written feedback explaining what works, what breaks, and why, highlighting system-design concerns, engineering trade-offs, and best-practice deviations.

Deliverables I’m expecting:
- A concise review for each code sample, detailing issues, recommended fixes, and any alternative designs you would propose.
- A test log or output showing you ran the code, including edge-case results (and performance notes when relevant).
- A short summary tying observations back to broader software-engineering principles to improve future outputs.

Acceptance criteria:
• Every review must include at least one concrete improvement (or explicitly state that none was found) with a clear justification.
• Execution notes must be reproducible with clear steps on a typical local dev setup.
• Feedback should remain professional, specific, and actionable.

This is fully remote and task-based. I’ll share batches of code and you return reviews on an agreed schedule. If you enjoy nuanced problem-solving across code reasoning, system design, and software engineering best practices, this should be a great fit.
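To illustrate the kind of reproducible execution note described above, here is a minimal sketch. The snippet under review (`average`) and the check script are entirely hypothetical, not part of any actual batch; the point is that each edge-case run is self-contained and repeatable with a single command on a typical local setup:

```python
# Hypothetical AI-generated snippet under review.
def average(xs):
    return sum(xs) / len(xs)


def run_checks():
    """Reproducible execution notes: run `python review_check.py` locally."""
    results = []

    # Happy path: confirms the stated behavior.
    results.append(("typical input", average([2, 4, 6]) == 4.0))

    # Edge case: an empty list divides by zero -- a flaw worth flagging
    # in the written review, with a recommended fix (e.g. raise ValueError).
    try:
        average([])
        results.append(("empty input", False))  # no error would be unexpected
    except ZeroDivisionError:
        results.append(("empty input", True))   # documents the edge-case failure

    return results


if __name__ == "__main__":
    for name, ok in run_checks():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
```

A log of this script’s output, pasted into the review alongside the reasoning, would satisfy the “test log or output” deliverable.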