I'm developing an ontology-driven large language model whose core task is robust knowledge representation in support of accurate information retrieval and data integration. The heart of the job is to craft a formal ontology that the model can ingest and reason over, then fine-tune or build an LLM so it can surface precise answers and merge heterogeneous data sources reliably.

The work spans three intertwined areas (rough sketches of each follow at the end of this brief). First comes the ontology itself: classes, properties, and axioms captured in OWL/RDF (Protégé or an equivalent editor is fine) so the structure is machine-readable and logically sound. Next, that ontology needs to be embedded in, or tightly coupled with, an LLM, ideally via a knowledge-enhanced training pipeline using Python, Hugging Face Transformers, and whichever deep-learning stack (PyTorch or TensorFlow) you prefer. Finally, I'll need an interface layer, RESTful or GraphQL, that lets external systems query the model for both ad-hoc information retrieval and automated data-integration workflows, with SPARQL or similar endpoints exposed where useful.

Deliverables
• Clean, validated ontology file (OWL/RDF)
• Fine-tuned or custom-built LLM weights plus training scripts
• API or microservice that accepts natural-language or structured queries and returns ontology-aligned answers/merged data
• Short technical document explaining setup, dependencies, and example queries

Acceptance criteria
• Ontology passes consistency checks in Protégé with zero unsatisfiable classes.
• Retrieval accuracy on a provided test set ≥ 90%.
• Data-integration demo shows two disparate sources mapped correctly into the shared schema.
• All code reproducible from a single requirements.txt or environment.yml.

If this aligns with your expertise in semantic AI, let's move forward; clarity of reasoning and clean code matter more than raw parameter count.
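To make the first area concrete, here is a minimal sketch of how a few classes and one object property could be declared programmatically with rdflib and serialized for Protégé. The namespace, class names, and property are hypothetical placeholders; the real ontology would be authored against your actual domain, either in code or directly in Protégé.

```python
from rdflib import Graph, Namespace, RDF, RDFS
from rdflib.namespace import OWL

EX = Namespace("http://example.org/onto#")   # hypothetical namespace

g = Graph()
g.bind("ex", EX)
g.bind("owl", OWL)

# Two top-level classes plus a subclass axiom
g.add((EX.Person, RDF.type, OWL.Class))
g.add((EX.Organization, RDF.type, OWL.Class))
g.add((EX.Employee, RDF.type, OWL.Class))
g.add((EX.Employee, RDFS.subClassOf, EX.Person))

# An object property with domain and range restrictions
g.add((EX.worksFor, RDF.type, OWL.ObjectProperty))
g.add((EX.worksFor, RDFS.domain, EX.Person))
g.add((EX.worksFor, RDFS.range, EX.Organization))

# RDF/XML output that Protégé can open and run a reasoner against
g.serialize(destination="ontology.owl", format="xml")
```

Opening the serialized file in Protégé and running HermiT (or Pellet) is how the zero-unsatisfiable-classes criterion would be checked.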
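For the second area, one common and lightweight knowledge-injection approach is to verbalize the ontology's triples into natural-language statements and use them as fine-tuning examples or retrieval context for the LLM. The sketch below covers only the data-preparation step; the function names and prompt wording are assumptions, not a prescribed pipeline.

```python
from rdflib import Graph

def local_name(term) -> str:
    # Use the fragment after '#', or the last path segment, as a readable label
    text = str(term)
    return text.split("#")[-1].rstrip("/").split("/")[-1]

def verbalize(graph: Graph):
    # Turn each (subject, predicate, object) triple into a short sentence
    for s, p, o in graph:
        yield f"{local_name(s)} {local_name(p)} {local_name(o)}."

g = Graph().parse("ontology.owl", format="xml")   # deliverable from the first area
examples = [
    {"prompt": "State one fact from the ontology.", "completion": sentence}
    for sentence in verbalize(g)
]
# `examples` can then be tokenized and fed to a standard Hugging Face
# fine-tuning run, or injected as context at inference time; either route
# counts as "knowledge-enhanced" coupling for this project.
```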
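For the interface layer, a REST sketch could look like the following FastAPI service: structured queries are answered directly with SPARQL over the graph, while natural-language queries would be routed to the fine-tuned model (stubbed out here). The route name, payload fields, and module layout are illustrative assumptions.

```python
from typing import Optional

from fastapi import FastAPI
from pydantic import BaseModel
from rdflib import Graph

app = FastAPI(title="Ontology QA service")           # hypothetical service name
graph = Graph().parse("ontology.owl", format="xml")  # load the shared ontology once

class Query(BaseModel):
    question: Optional[str] = None    # natural-language query
    sparql: Optional[str] = None      # structured SPARQL query

@app.post("/query")
def answer(q: Query):
    if q.sparql:
        # Structured path: evaluate SPARQL directly against the ontology
        rows = graph.query(q.sparql)
        return {"results": [[str(term) for term in row] for row in rows]}
    # Natural-language path: the real service would call the fine-tuned LLM
    # and align its output to ontology terms; stubbed here as a placeholder
    return {"answer": f"(LLM answer for: {q.question})"}
```

Run with `uvicorn app:app` (assuming the file is named app.py) and POST JSON such as {"sparql": "PREFIX owl: <http://www.w3.org/2002/07/owl#> SELECT ?c WHERE { ?c a owl:Class }"}. A GraphQL variant would expose the same two paths through a schema instead of a route.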