
Output validation views #208

Open
shreyashankar opened this issue Nov 21, 2024 · 0 comments
Labels
enhancement New feature or request

shreyashankar commented Nov 21, 2024

One of the biggest pain points for users is assessing the "correctness" of operations at a high level. This does not need to be a single number or accuracy fraction; that would only raise more questions (à la "who validates the validators").

Our goal is to have the user sweep as much of the data/outputs as possible and form a quick, coarse assessment of accuracy.

In this vein, some of the following views may be interesting:

  • Compare two (very different) documents and their LLM outputs, side by side. Here it is hard to know what "very different" means; it is task-specific, and not necessarily the two documents with the smallest inner product in embedding space when embedding the entire documents. "Different" could mean structurally different documents, differences in a particular section of the documents, etc.
  • A visualization of a column/document attribute (e.g., histogram, bar chart). Discussed offline in research meeting 11/20.
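One heuristic for the first view (an assumption for illustration, not a method prescribed in this issue) is to pick the pair of documents whose embeddings have the lowest cosine similarity. A minimal sketch with toy embeddings, assuming embeddings are already computed as a NumPy array:

```python
import numpy as np

def most_dissimilar_pair(embeddings: np.ndarray) -> tuple[int, int]:
    """Return the indices of the two rows with the lowest cosine similarity.

    `embeddings` is an (n_docs, dim) array. As noted above, this is only
    one possible notion of "very different"; the right notion is task-specific.
    """
    # Normalize rows so that the dot product equals cosine similarity.
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    unit = embeddings / norms
    sims = unit @ unit.T
    # Ignore self-similarity on the diagonal.
    np.fill_diagonal(sims, np.inf)
    i, j = np.unravel_index(np.argmin(sims), sims.shape)
    return int(i), int(j)

# Toy example: documents 0 and 1 point in opposite directions.
docs = np.array([[1.0, 0.0], [-1.0, 0.0], [1.0, 0.1]])
print(most_dissimilar_pair(docs))  # → (0, 1)
```

In a real view, the two selected documents and their LLM outputs would be rendered side by side; structural or per-section notions of "different" would need a different scoring function than whole-document embeddings.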
@shreyashankar added the enhancement (New feature or request) label Nov 21, 2024