Show HN: Klarity – Open-source tool to analyze uncertainty/entropy in LLM output

We've open-sourced Klarity - a tool for analyzing uncertainty and decision-making in LLM token generation. It provides structured insights into how models choose tokens and where they show uncertainty.

What Klarity does:

- Real-time analysis of model uncertainty during generation
- Dual analysis combining log probabilities and semantic understanding (the log-probability side is sketched below)
- Structured JSON output with actionable insights
- Fully self-hostable with customizable analysis models
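
The log-probability side of that dual analysis comes down to inspecting the full next-token distribution at every generation step. A minimal sketch of the idea with plain Hugging Face transformers (this is not Klarity's API; the model name and prompt are just placeholders):

```python
# Sketch: per-step token entropy from Hugging Face generation scores.
# Higher entropy = the model was less sure about the next token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example model; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("The capital of France is", return_tensors="pt")
out = model.generate(
    **inputs,
    max_new_tokens=10,
    do_sample=False,
    output_scores=True,            # keep the logits of every generated step
    return_dict_in_generate=True,
)

for step, scores in enumerate(out.scores):
    probs = torch.softmax(scores[0], dim=-1)
    # Shannon entropy of the next-token distribution at this step
    entropy = -(probs * torch.log(probs + 1e-12)).sum().item()
    top_prob, top_id = probs.max(dim=-1)
    print(step, tokenizer.decode([top_id.item()]),
          f"entropy={entropy:.3f}", f"p={top_prob.item():.3f}")
```

Klarity's report layers a semantic analysis on top of numbers like these.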

The tool analyzes each step of text generation and returns structured JSON with the following fields:

- uncertainty_points: array of {step, entropy, options[], type}
- high_confidence: array of {step, probability, token, context}
- risk_areas: array of {type, steps[], motivation}
- suggestions: array of {issue, improvement}
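
Purely for illustration, a report with those fields might look like this (the values are invented; only the field names come from the schema above):

```json
{
  "uncertainty_points": [
    {"step": 12, "entropy": 2.31, "options": ["Paris", "Lyon", "Nice"], "type": "named_entity"}
  ],
  "high_confidence": [
    {"step": 3, "probability": 0.97, "token": "capital", "context": "The capital of"}
  ],
  "risk_areas": [
    {"type": "factual_claim", "steps": [12, 13], "motivation": "Several completions had similar probability"}
  ],
  "suggestions": [
    {"issue": "Ambiguous entity at step 12", "improvement": "Constrain the prompt with the country name"}
  ]
}
```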

Klarity currently supports Hugging Face Transformers (more frameworks are coming). We tested extensively with Qwen2.5 models (0.5B-7B), but it should work with most HF LLMs.

Installation is simple: `pip install git+https://github.com/klara-research/klarity.git`

We are building open-source interpretability/explainability tools to visualize and analyze attention maps, saliency maps, etc., and we want to understand your pain points with LLM behavior. What insights would actually help you debug these black-box systems?
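
For a flavor of what the attention-map side involves, here is how raw attention weights can already be pulled out of a transformers model (again plain Hugging Face code, not a Klarity API; the model name is just an example):

```python
# Sketch: extracting per-layer attention weights, the raw material
# behind attention-map visualizations.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-0.5B-Instruct"  # example only
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    attn_implementation="eager",   # eager attention exposes the weight matrices
)

inputs = tokenizer("Klarity analyzes token-level uncertainty.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)

# outputs.attentions: one tensor per layer, shape (batch, heads, seq_len, seq_len)
last_layer = outputs.attentions[-1][0]   # (heads, seq_len, seq_len)
head_avg = last_layer.mean(dim=0)        # average attention pattern of the last layer
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
print(tokens)
print(head_avg)
```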

Links:

- Repo: https://github.com/klara-research/klarity
- Our website: https://klaralabs.com

