Breaking Down EvalGen: Who Validates the Validators?

Update: 2024-05-13

Description

Because human evaluation is cumbersome and code-based evaluation is limited, Large Language Models (LLMs) are increasingly used to assist humans in evaluating LLM outputs. Yet LLM-generated evaluators often inherit the problems of the LLMs they evaluate, requiring further human validation.

This week’s paper explores EvalGen, a mixed-initiative approach to aligning LLM-generated evaluation functions with human preferences. EvalGen assists users both in developing criteria for acceptable LLM outputs and in building functions that check those criteria, ensuring evaluations reflect the users’ own grading standards.
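The alignment step is easy to picture in code. Below is a minimal, hypothetical Python sketch: candidate assertion functions for one criterion are scored against a handful of human thumbs-up/thumbs-down grades, and the best-aligned candidate is kept. The criterion, candidate functions, and graded examples are all invented for illustration; EvalGen itself generates candidates with an LLM and reports richer alignment metrics (such as coverage and false-failure rate) rather than the simple agreement fraction used here.

from typing import Callable

# Human grades on sample LLM outputs: (output, human_says_acceptable).
# These examples are made up for illustration.
graded_outputs = [
    ("The answer is 42.", True),
    ("I think the answer might be 42, but honestly who knows!!!", False),
    ("42", True),
    ("As an AI language model, I cannot answer that.", False),
]

# Candidate assertions for a criterion like "response is concise and
# direct." In EvalGen these would be LLM-generated (code snippets or
# LLM-as-judge prompts); these are hand-written stand-ins.
candidates: dict[str, Callable[[str], bool]] = {
    "under_80_chars": lambda out: len(out) < 80,
    "no_hedging_words": lambda out: not any(
        w in out.lower() for w in ("might", "honestly", "cannot")
    ),
    "always_pass": lambda out: True,
}

def alignment(assertion: Callable[[str], bool]) -> float:
    """Fraction of human grades this assertion reproduces."""
    agree = sum(assertion(out) == ok for out, ok in graded_outputs)
    return agree / len(graded_outputs)

# Keep the candidate that best matches the user's own grading standard.
best = max(candidates, key=lambda name: alignment(candidates[name]))
for name, fn in candidates.items():
    print(f"{name}: {alignment(fn):.0%} aligned")
print(f"selected assertion: {best}")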

Read it on the blog: https://arize.com/blog/breaking-down-evalgen-who-validates-the-validators/

To learn more about ML observability, join the Arize AI Slack community or get the latest on our LinkedIn and Twitter.



Arize AI