Me Text:
I need an app that works on iOS and Android. When you take a picture of your food, the AI detects what you ate, along with calories, protein, carbohydrates, etc., and adds it to your daily intake. You can also see your weekly/monthly intake.

AI Text:
I'm building a cross-platform food diary that makes logging effortless: the user simply snaps a picture of a meal, the app identifies each item with high accuracy, returns a full macro and micronutrient breakdown, and immediately adds the data to that day's intake. A timeline inside the app aggregates everything so the user can review daily, weekly, and monthly consumption at a glance.

What I need from you is the complete iOS and Android implementation, including the computer-vision pipeline (TensorFlow Lite, Core ML, or a comparable on-device solution), a lightweight back end to sync and store entries, and a clean Flutter or React Native interface where the running totals update in real time. The detection engine must reliably distinguish multiple foods on one plate and report calories, protein, carbs, fat, and key vitamins with real precision; rough estimates won't meet the brief.

On delivery I expect:
• Production-ready source code and build files for both stores
• A trained model, or a documented training pipeline I can rerun with additional datasets
• Basic UX for historical views (day, week, month) and manual correction of misidentified foods
• A short test report showing accuracy on at least 50 mixed meals

If you have prior experience combining mobile development with food-recognition models, let's talk about your proposed stack and timeline so we can get an MVP into users' hands quickly.
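The core data flow the brief describes, per-meal nutrient entries rolled up into daily, weekly, and monthly totals, could be modelled roughly as below. This is a minimal TypeScript sketch; the type and function names (`FoodEntry`, `totalsInRange`) are illustrative assumptions, not anything specified in the brief:

```typescript
// Hypothetical data model for one logged meal item.
interface MacroBreakdown {
  calories: number;
  proteinG: number;
  carbsG: number;
  fatG: number;
}

interface FoodEntry extends MacroBreakdown {
  name: string;    // label returned by the vision model
  loggedAt: Date;  // when the photo was taken
}

// Sum macros over all entries whose timestamp falls in [start, end).
// The same function serves the day, week, and month views: only the
// range passed in changes.
function totalsInRange(
  entries: FoodEntry[],
  start: Date,
  end: Date
): MacroBreakdown {
  return entries
    .filter((e) => e.loggedAt >= start && e.loggedAt < end)
    .reduce(
      (acc, e) => ({
        calories: acc.calories + e.calories,
        proteinG: acc.proteinG + e.proteinG,
        carbsG: acc.carbsG + e.carbsG,
        fatG: acc.fatG + e.fatG,
      }),
      { calories: 0, proteinG: 0, carbsG: 0, fatG: 0 }
    );
}
```

Keeping aggregation as a pure function over raw entries also makes the "manual edit of misidentified foods" requirement cheap: correcting one entry and re-running the rollup keeps every historical view consistent.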
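The step from a vision-model detection to the nutrient numbers the brief demands usually involves a nutrient-database lookup scaled by an estimated portion size. A hedged sketch of that step, where `NUTRIENT_DB`, the per-100g figures, and the portion-estimate field are all placeholder assumptions (a real app would query something like USDA FoodData Central):

```typescript
// Per-100g reference facts for one food class.
interface NutrientFacts100g {
  calories: number;
  proteinG: number;
  carbsG: number;
  fatG: number;
}

// One item found on the plate by the detection model.
interface Detection {
  label: string;           // class name predicted by the model
  estimatedGrams: number;  // portion estimate (e.g. from plate-area heuristics)
}

// Tiny stand-in for a real nutrient database; values are illustrative.
const NUTRIENT_DB: Record<string, NutrientFacts100g> = {
  rice:    { calories: 130, proteinG: 2.7, carbsG: 28, fatG: 0.3 },
  chicken: { calories: 165, proteinG: 31,  carbsG: 0,  fatG: 3.6 },
};

// Scale per-100g facts by the estimated portion; return null for unknown
// labels so the UI can fall back to manual entry instead of guessing.
function estimateNutrients(d: Detection): NutrientFacts100g | null {
  const facts = NUTRIENT_DB[d.label];
  if (!facts) return null;
  const scale = d.estimatedGrams / 100;
  return {
    calories: facts.calories * scale,
    proteinG: facts.proteinG * scale,
    carbsG: facts.carbsG * scale,
    fatG: facts.fatG * scale,
  };
}
```

Returning `null` for unrecognized labels, rather than a rough default, matches the brief's insistence that estimates be precise and that misidentifications be user-correctable.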