Domain-Specific vs. General-Purpose Large Language Models in Orthodontics: A Blinded Comparison of AlimGPT, GPT-4o, Gemini, and Llama


Creative Commons License

Aksakalli S., Giray B., Temel C.

DENTISTRY JOURNAL, vol.14, no.4, pp.219, 2026 (ESCI, Scopus)

  • Publication Type: Article / Full Article
  • Volume: 14 Issue: 4
  • Publication Date: 2026
  • DOI: 10.3390/dj14040219
  • Journal Name: DENTISTRY JOURNAL
  • Journal Indexes: Scopus, Emerging Sources Citation Index (ESCI), Directory of Open Access Journals
  • Page Numbers: pp.219
  • Open Archive Collection: AVESİS Open Access Collection
  • Affiliated with İstanbul Gelişim University: Yes

Abstract

Objective: The application of artificial intelligence (AI) in orthodontics has evolved rapidly in recent years, encompassing diagnosis, treatment planning, and patient management; AlimGPT is an AI-based tool that provides treatment options based on data and algorithms. This study aimed to compare AlimGPT with GPT-4o, Gemini, and Llama using standardized instruments to evaluate the quality of the information provided: a Likert scale, the modified DISCERN (mDISCERN), and the modified Global Quality Score (mGQS). Methods: Fourteen different orthodontic questions were posed to each model, and the answers were analyzed. Results: Significant differences were detected for reliability (χ2 = 15.267, p = 0.0016) and usefulness (χ2 = 20.557, p = 0.0001). Post hoc tests showed AlimGPT > Gemini and Llama for reliability and AlimGPT > GPT-4o, Gemini, and Llama for usefulness. mDISCERN was significant overall (χ2 = 11.047, p = 0.0115), but no pairwise contrast met adjusted significance; mGQS showed no significant differences (χ2 = 7.071, p = 0.0697). Inter-rater agreement was moderate-to-good for reliability (ICC = 0.710, 95% CI 0.60–0.80) and usefulness (ICC = 0.729, 95% CI 0.63–0.82), moderate for mGQS (ICC = 0.596, 95% CI 0.47–0.71), and poor-to-moderate for mDISCERN (ICC = 0.435, 95% CI 0.30–0.58). Conclusions: In this blinded, within-subjects experiment, the domain-specific model (AlimGPT) received higher clinician ratings for usefulness and, for reliability, exceeded two of the general-purpose baselines. No differences in mGQS were detected. Expanding the number of raters, increasing item diversity, or integrating updated baselines would be beneficial.
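The inter-rater agreement figures above are intraclass correlation coefficients (ICCs). The abstract does not state which ICC form was used, so the sketch below assumes a common choice, the two-way random-effects, single-measures form ICC(2,1), and uses a toy rating matrix for illustration rather than the study's data.

```python
# Minimal sketch of ICC(2,1): two-way random effects, single measures.
# Assumptions: the ICC form and the example ratings are illustrative only;
# the study's actual ICC variant and data are not given in the abstract.

def icc_2_1(ratings):
    """ICC(2,1) for an n-subjects x k-raters matrix of scores."""
    n = len(ratings)        # number of rated items (e.g. model answers)
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(row[j] for row in ratings) / n for j in range(k)]

    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)

    msr = ss_rows / (n - 1)                      # between-subjects mean square
    msc = ss_cols / (k - 1)                      # between-raters mean square
    mse = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))  # residual

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Two hypothetical raters in perfect agreement on three items yield ICC = 1.0:
print(icc_2_1([[1, 1], [2, 2], [3, 3]]))  # 1.0
```

With noisier ratings the coefficient drops below 1, which is how values such as the 0.435 reported for mDISCERN arise in practice.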