Training MAGIC AI

Customer: AI | Published: 01.12.2025
Budget: $750

Competition: Train the Best MAGIC Base Model — $200 Prize

I am launching a competition to create the best-performing custom MAGIC base model. The prize is $200 to the top entry.

MAGIC (Memory Augmented Generally Intelligent Cognition) is a modular AI architecture that combines retrieval, memory, transformers, difformers, MoE routing, and multimodal adapters. Competitors will receive the MAGIC architecture codebase, the training and chat scripts, and the core training data that must be included in the final model (full list below).

Your job is to train and submit a high-quality MAGIC base that demonstrates strong reasoning, conversational ability, and benchmark performance.

---

Prize

$200 USD to the best submission. (I may also offer hiring or bonus follow-up work to strong contenders.)

---

What You Will Receive

After accepting the project, you will get:

* `magic_trainer.py` (you may modify freely)
* `magic_chat.py` (you may modify freely)
* Core training data that must be included in the final model
* The MAGIC architecture code
* Instructions for running MAGIC and its training modules

---

Requirements

Your trained MAGIC base must:

1. Include the provided training dataset. It must be incorporated into your final training run.
2. Maintain strong conversational ability. The model must be able to:
   * carry intelligent dialogue
   * reason coherently
   * stay consistent during multi-turn conversations
   * answer analytical, technical, or general knowledge questions
3. Provide benchmark results. You must include:
   * comparisons against at least one or two reference models (any open-source LLM of your choice)
   * a clear summary of performance metrics (perplexity, accuracy, or other relevant metrics)
   * optional: qualitative evaluations (conversation transcripts, reasoning tests, etc.)
4. Supply all training modifications. You may:
   * adjust `magic_trainer.py`
   * choose the datasets
   * apply your own fine-tuning methods
   * modify architecture parameters if needed

   You must provide:
   * your training script(s)
   * any additional dataset sources
   * instructions to reproduce the training run

---

Final Deliverables

Each competitor must submit:

1. The trained MAGIC base model weights
2. Benchmarks and an evaluation summary
3. Your modified `magic_trainer.py` (if changed)
4. Clear reproduction instructions
5. Any additional training data used (or links to it)

---

Judging Criteria

Submissions will be evaluated on:

1. Benchmark performance
2. Conversational intelligence
3. Consistency and reasoning quality
4. How well MAGIC's architecture is utilized
5. My personal preference during testing

---

Communication

Contestants may message me at any time with:

* questions about the architecture
* clarifications on dataset usage
* details about allowed modifications
* troubleshooting issues

---

Summary

This competition is ideal for:

* machine learning researchers
* fine-tuning experts
* large language model hobbyists
* AI developers who want to showcase their talent
* anyone confident they can train a unique, powerful model

The winning model will become the official MAGIC base.
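As a starting point for the benchmark requirement: perplexity is simply the exponential of the average negative log-likelihood your model assigns to held-out tokens. A minimal sketch, assuming you can extract per-token log-probabilities from MAGIC or your reference model (the `perplexity` helper and the example inputs below are illustrative, not part of the MAGIC codebase):

```python
import math

def perplexity(token_logprobs):
    """Perplexity from per-token natural-log probabilities:
    exp(-mean log-likelihood). Lower is better."""
    if not token_logprobs:
        raise ValueError("need at least one token")
    avg_nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(avg_nll)

# Sanity check: a uniform model over a 4-token vocabulary assigns
# each token log(1/4), so its perplexity is exactly 4.
uniform = [math.log(0.25)] * 10
print(round(perplexity(uniform), 6))  # → 4.0
```

Report the same held-out text and tokenizer for MAGIC and each reference model, since perplexities computed over different tokenizations are not directly comparable.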