I have a collection of 2D images and I want to turn them into fully-formed 3D assets through an end-to-end machine-learning pipeline. The core of the job is implementing a 3D model-generation algorithm that learns directly from those images, trains efficiently, and outputs clean meshes that I can open and edit inside Blender. I already use Blender extensively, so the solution should finish with a .blend-ready result and, ideally, expose a small Python script that lets me batch-process new image sets straight from the command line. Whether you build on PyTorch, TensorFlow, or another deep-learning framework is up to you, as long as the training code is reproducible and the final meshes import without issues.

Acceptable completion criteria
• Training script with clear parameters and a README.
• Inference script that turns an unseen folder of 2D images into a watertight mesh (OBJ or FBX); a rough sketch of the interface I have in mind follows below.
• One sample model generated from my test images and opened in Blender without manual fixes.
• Short note on hardware requirements and any third-party libraries used.

If you have previous work in single-view or multi-view 3D reconstruction, point-cloud generation, or NeRF-style approaches, that will help us move faster.
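
To make the command-line interface concrete, here is a rough sketch of how I picture the batch inference script, not a spec. The model call (reconstruct_mesh) is a hypothetical placeholder for whatever you implement, and the folder layout (one sub-folder of images per object) is just my assumption; the mesh handling here uses the trimesh library.

```python
# Rough sketch of the batch inference CLI I have in mind (not a spec).
# `reconstruct_mesh` is a hypothetical placeholder for the contractor's model;
# a unit cube stands in so the script at least runs end to end.
import argparse
from pathlib import Path

import trimesh


def reconstruct_mesh(image_paths: list[Path]) -> trimesh.Trimesh:
    """Placeholder: run the trained model on one object's views and return a mesh."""
    return trimesh.creation.box(extents=(1.0, 1.0, 1.0))


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Turn each sub-folder of 2D images into a watertight OBJ mesh."
    )
    parser.add_argument("input_dir", type=Path,
                        help="folder containing one sub-folder of images per object")
    parser.add_argument("output_dir", type=Path,
                        help="where to write the generated .obj files")
    args = parser.parse_args()
    args.output_dir.mkdir(parents=True, exist_ok=True)

    for obj_dir in sorted(p for p in args.input_dir.iterdir() if p.is_dir()):
        images = sorted(obj_dir.glob("*.png")) + sorted(obj_dir.glob("*.jpg"))
        if not images:
            continue
        mesh = reconstruct_mesh(images)
        if not mesh.is_watertight:
            print(f"warning: mesh for {obj_dir.name} is not watertight")
        out_path = args.output_dir / f"{obj_dir.name}.obj"
        mesh.export(str(out_path))
        print(f"wrote {out_path} ({len(mesh.faces)} faces)")


if __name__ == "__main__":
    main()
```

Something like `python batch_infer.py ./test_images ./meshes` is how I would expect to run it; script and folder names are only examples.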
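
Likewise, for the "opens in Blender without manual fixes" criterion, I would probably sanity-check deliveries headlessly with something along these lines. It assumes Blender 3.2 or newer, where the built-in OBJ importer is exposed as bpy.ops.wm.obj_import; the script name and invocation are hypothetical.

```python
# Hypothetical headless check, run as:
#   blender --background --python check_import.py -- path/to/mesh.obj
# Assumes Blender 3.2+ (OBJ importer exposed as bpy.ops.wm.obj_import).
import sys
import bpy

# Everything after the "--" separator belongs to this script, not to Blender.
obj_path = sys.argv[sys.argv.index("--") + 1]

# Start from an empty scene so the default cube does not pollute the check.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.wm.obj_import(filepath=obj_path)

meshes = [o for o in bpy.context.scene.objects if o.type == "MESH"]
if not meshes:
    raise SystemExit(f"no mesh objects imported from {obj_path}")
for obj in meshes:
    print(f"{obj.name}: {len(obj.data.vertices)} vertices, {len(obj.data.polygons)} faces")

# Save a .blend next to the OBJ so the result is directly editable.
bpy.ops.wm.save_as_mainfile(filepath=obj_path.rsplit(".", 1)[0] + ".blend")
```

If the import succeeds and the vertex/face counts look sane, the mesh counts as Blender-ready for my purposes.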