My Blackbox.ai workspace at https://www.blackbox.ai/share/b031f327-7c5e-4ee4-a228-050d1e0e7044 already hosts the early experiments for a machine-learning pipeline that classifies image data. The prototype currently loads a modest training set, applies minimal augmentation, and feeds it into a basic CNN. Accuracy plateaus quickly, so the next steps are to tighten the data-prep workflow, tune the architecture, and ship a repeatable training routine that reaches production-ready performance.

What needs to happen

• Curate and augment the image dataset, ensuring balanced classes and clear train/validation/test splits.
• Redesign or refine the network: think transfer learning with EfficientNet or a custom ResNet variant implemented in PyTorch or TensorFlow/Keras.
• Integrate early stopping, learning-rate scheduling, and experiment tracking (e.g., TensorBoard or Weights & Biases).
• Export a lightweight, versioned model file plus a clean inference script that takes a folder of images and returns class labels and confidence scores.
• Document the environment setup, dependencies, and training commands in a concise README.

Acceptance criteria

1. Top-1 validation accuracy meets or exceeds 92% without overfitting (verified via the held-out test split).
2. Inference on a GPU-less machine processes 100 images in under 30 seconds.
3. All code runs from a single requirements.txt or environment.yml, mirrors results in the shared Blackbox.ai space, and follows PEP 8 style guidelines.

With this in place I'll have a polished, transferable image-classification model ready for deployment or further fine-tuning.
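For the dataset-curation bullet, the class-balanced split could look something like the sketch below. This is stdlib-only and framework-agnostic; `stratified_split` and the 70/15/15 fractions are illustrative choices, not anything already in the workspace:

```python
import random
from collections import defaultdict


def stratified_split(items, labels, fractions=(0.70, 0.15, 0.15), seed=42):
    """Split (item, label) pairs into train/val/test while preserving
    each class's proportion in every split."""
    assert abs(sum(fractions) - 1.0) < 1e-9, "fractions must sum to 1"
    # Group file paths (or any identifiers) by class label.
    by_class = defaultdict(list)
    for item, label in zip(items, labels):
        by_class[label].append(item)

    rng = random.Random(seed)  # fixed seed keeps splits reproducible
    train, val, test = [], [], []
    for label, members in by_class.items():
        rng.shuffle(members)
        n_train = int(len(members) * fractions[0])
        n_val = int(len(members) * fractions[1])
        train.extend((m, label) for m in members[:n_train])
        val.extend((m, label) for m in members[n_train:n_train + n_val])
        test.extend((m, label) for m in members[n_train + n_val:])
    return train, val, test
```

Because the split is done per class, a dataset that is 50/50 cats and dogs stays 50/50 in every split, which is what the "balanced classes" criterion asks for.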
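The early-stopping bullet can be satisfied with a small framework-agnostic tracker like the sketch below (PyTorch users may prefer pairing it with `torch.optim.lr_scheduler.ReduceLROnPlateau` for the scheduling half). `EarlyStopping` is an illustrative name, not an existing class in the workspace:

```python
class EarlyStopping:
    """Signal a stop when validation loss fails to improve for
    `patience` consecutive epochs."""

    def __init__(self, patience: int = 5, min_delta: float = 0.0):
        self.patience = patience
        self.min_delta = min_delta  # minimum drop that counts as improvement
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss: float) -> bool:
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best - self.min_delta:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In the training loop this reduces to `if stopper.step(val_loss): break`, which also guards the "without overfitting" acceptance criterion by halting before the validation curve diverges from training.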
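The core of the inference-script bullet is turning logits into (label, confidence) pairs; a PyTorch sketch is below. `classify_batch` is a hypothetical helper: the full script would additionally glob the input folder with `pathlib`, decode images with Pillow, and apply the same preprocessing transforms used in training:

```python
import torch
from torch import nn


@torch.no_grad()  # no gradients needed at inference; keeps CPU runs fast
def classify_batch(model: nn.Module, batch: torch.Tensor, class_names):
    """Map a preprocessed image batch to (label, confidence) pairs."""
    model.eval()  # disable dropout / use running batch-norm stats
    probs = torch.softmax(model(batch), dim=1)
    conf, idx = probs.max(dim=1)
    return [(class_names[i], float(c))
            for i, c in zip(idx.tolist(), conf.tolist())]
```

Batching the folder (e.g., 32 images at a time) rather than looping image-by-image is usually what makes the 100-images-in-30-seconds CPU target comfortable.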