I need a developer who can take this idea from concept to a working, browser-based product. The core objective is to enrol users with a unique "voice fingerprint" and then let the system verify or identify them whenever they speak again.

Scope
• Capture voice directly in the browser (WebRTC or similar) and accept uploaded WAV files.
• Run real-time audio preprocessing (noise reduction, silence trimming, level normalisation) before any feature extraction.
• Extract robust speaker features (e.g., MFCC, x-vectors, or your preferred state-of-the-art approach) and store them as a compact voiceprint tied to each account.
• Provide two modes:
  – Verification: a one-to-one match that returns a confidence score.
  – Identification: a one-to-many search that ranks candidates by similarity score.
• Deliver a simple, clean web interface: a microphone record button, an enrolment dashboard, and a results panel showing scores and pass/fail flags.
• Expose the core logic through a REST or GraphQL API so it can later be integrated into other services.
• Follow security best practices: HTTPS, encrypted storage of voiceprint templates, and user consent prompts before recording.

Acceptance criteria
• Enrolment and verification must run end-to-end in under five seconds on a typical broadband connection.
• Equal Error Rate (EER) ≤ 5% on a public speaker dataset or a comparable internal test set.
• Clear documentation (setup, model-training pipeline, API endpoints) plus a Dockerised deployment script.

The tech stack is open: Python (TensorFlow/PyTorch), Node.js, or Rust are all fine as long as you can justify the choice and meet the performance targets. Please outline your proposed approach, the libraries you favour (Kaldi, SpeechBrain, etc.), and any similar work you have shipped.
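To make the preprocessing requirement concrete, here is a minimal sketch of silence trimming and level normalisation on raw PCM samples. It is illustrative only: the function names, frame size, and energy threshold are my own assumptions, and a production pipeline would use a proper DSP library rather than pure Python.

```python
def normalise(samples, target_peak=0.9):
    """Scale samples so the maximum absolute value equals target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # all-silent input: nothing to scale
    return [s * target_peak / peak for s in samples]

def trim_silence(samples, frame_size=160, threshold=0.02):
    """Drop leading/trailing frames whose mean absolute amplitude falls
    below threshold -- a crude energy-based voice-activity gate."""
    frames = [samples[i:i + frame_size]
              for i in range(0, len(samples), frame_size)]
    energies = [sum(abs(s) for s in f) / len(f) for f in frames]
    voiced = [i for i, e in enumerate(energies) if e >= threshold]
    if not voiced:
        return []  # no speech detected at all
    start, end = voiced[0], voiced[-1] + 1
    return [s for f in frames[start:end] for s in f]
```

In practice you would run normalisation after trimming so that leading silence cannot distort the peak estimate.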
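The two matching modes can be sketched as scoring against stored embeddings. A hedged illustration, assuming voiceprints are fixed-length vectors compared by cosine similarity (the threshold of 0.75 and the function names are placeholders, not a spec):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def verify(probe, enrolled, threshold=0.75):
    """One-to-one verification: return (confidence score, pass/fail)."""
    score = cosine_similarity(probe, enrolled)
    return score, score >= threshold

def identify(probe, gallery):
    """One-to-many identification: rank enrolled users by similarity.

    gallery maps user_id -> stored embedding."""
    return sorted(
        ((user_id, cosine_similarity(probe, emb))
         for user_id, emb in gallery.items()),
        key=lambda item: item[1], reverse=True)
```

The ranked output of `identify` maps directly onto the results panel (scores plus pass/fail flags) described above.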
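Since the acceptance criteria hinge on EER, here is one plain-Python way to estimate it from genuine and impostor trial scores: sweep every observed score as a threshold and take the point where false-accept and false-reject rates are closest. This is a simple sketch; evaluation toolkits typically interpolate for a more precise figure.

```python
def eer(genuine_scores, impostor_scores):
    """Estimate Equal Error Rate from lists of match scores.

    genuine_scores: scores from same-speaker trials (should be high).
    impostor_scores: scores from different-speaker trials (should be low).
    """
    best_gap, best_eer = 2.0, None
    for t in sorted(set(genuine_scores + impostor_scores)):
        far = sum(s >= t for s in impostor_scores) / len(impostor_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2
    return best_eer
```

Meeting the ≤ 5% target means `eer(...)` on the held-out trial list must come out at 0.05 or below.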