LLM-as-a-Judge: automated evaluation of search query parsing using large language models
Introduction: The adoption of Large Language Models (LLMs) in search systems necessitates new evaluation methodologies beyond traditional rule-based or manual approaches. Methods: We propose a general framework for evaluating structured outputs using LLMs, focusing on search query parsing within an onlin...
Main Authors: Mehmet Selman Baysan, Serkan Uysal, İrem İşlek, Çağla Çığ Karaman, Tunga Güngör
Format: Article
Language: English
Published: Frontiers Media S.A., 2025-07-01
Series: Frontiers in Big Data
Online Access: https://www.frontiersin.org/articles/10.3389/fdata.2025.1611389/full
Similar Items
- Prompt Engineering for evaluators: optimizing LLMs to judge linguistic proficiency
  by: Lorenzo Gregori
  Published: (2025-07-01)
- A Drop-in Replacement for LR(1) Table-Driven Parsing
  by: Michael Oudshoorn
  Published: (2021-12-01)
- Predicting the outflow of household deposits based on the intensity of search queries
  by: I. N. Gurov, et al.
  Published: (2023-07-01)
- Natural language parsing: psychological, computational, and theoretical perspectives
  Published: (1984)
- Optimizing encrypted search in the cloud using autoencoder-based query approximation
  by: Mahmoud Mohamed, et al.
  Published: (2024-12-01)