Gentrace makes it easier for businesses to test AI-powered software

As businesses continue to integrate generative AI into their products, many find it challenging to actually test whether the AI is behaving correctly and giving useful answers.

To help address this problem, a startup called Gentrace offers an integrated platform for testing software built around large language models. Whereas traditional software can be subjected to automated tests to verify that, say, data submitted to a web form ended up properly formatted in a database, AI-powered software often can't be expected to behave in an exactly specified way in response to a given input, says Gentrace cofounder and CEO Doug Safreno.

Customers often end up defining a set of test data for the AI to run after any change to the AI model, the databases it interacts with, or other parameters. But without a testing platform, running those tests can mean maintaining spreadsheets of AI test prompts and manually logging whether they produce satisfactory results. And while some automation is possible, such as verifying that an AI response contains certain keywords or asking another AI system to confirm that a response looks satisfactory, complex testing often requires engineers to be heavily involved, even though other team members, like product managers, may know better what good output looks like, Safreno says.
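The keyword-style automation mentioned above can be sketched in a few lines. This is a generic illustration of the technique, not Gentrace's tooling; the function name and sample strings are hypothetical.

```python
# Hypothetical keyword check for an AI response: pass if every required
# keyword appears and no forbidden keyword does. This illustrates the kind
# of simple automated test described in the article, not Gentrace's API.

def keyword_check(response: str, required: list[str], forbidden: tuple[str, ...] = ()) -> bool:
    """Return True if the response mentions all required keywords and no forbidden ones."""
    text = response.lower()
    has_required = all(kw.lower() in text for kw in required)
    has_forbidden = any(kw.lower() in text for kw in forbidden)
    return has_required and not has_forbidden

# Example: checking a support-bot answer about password resets.
answer = "To reset your password, open Settings and choose 'Reset password'."
print(keyword_check(answer, required=["reset", "settings"], forbidden=("contact sales",)))  # True
```

Checks like this are cheap to run but brittle, which is why, as the article notes, complex evaluations still pull engineers back into the loop.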

“The problem becomes, nobody can look at it and collaborate on these tests and on these evaluation methods,” he says. “As new product requirements come in, they’re not being captured in the testing.”

To help make AI testing more accessible, Gentrace's platform enables anyone within a company to see, edit, and run tests for LLM-powered systems. The results can then be graded by human evaluators, simple programs, or even other LLMs. Gentrace provides guidance on using LLMs efficiently to test AI output, which Safreno says often involves giving the testing LLMs an "unfair advantage": providing them with more detail about the desired output than was in the original prompt. The tool also provides an interface for prompting human evaluators to assess an AI response.
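The "unfair advantage" idea can be sketched as a grading prompt that shows the judge LLM a reference answer the system under test never saw. The prompt template and function below are illustrative assumptions, not Gentrace's actual API.

```python
# Hypothetical LLM-as-judge prompt builder. The judge is given a reference
# answer up front (the "unfair advantage"), so it only has to compare, not
# to solve the original task itself.

def build_judge_prompt(question: str, candidate: str, reference: str) -> str:
    """Build a grading prompt that hands the judge the desired output."""
    return (
        "You are grading an AI assistant's answer.\n"
        f"Question: {question}\n"
        f"Reference answer (what a good response looks like): {reference}\n"
        f"Candidate answer: {candidate}\n"
        "Reply with PASS if the candidate matches the reference in substance, else FAIL."
    )

prompt = build_judge_prompt(
    question="How do I reset my password?",
    candidate="Open Settings and choose 'Reset password'.",
    reference="The user should be directed to the Settings menu to reset the password.",
)
print("Reference answer" in prompt)  # True
```

The resulting string would then be sent to whichever judge model the team uses; that call is omitted here since the provider and API are not specified in the article.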

[Image: Gentrace]

Anna Wang, head of AI at AI-powered workforce training company Multiverse, says Gentrace’s system eliminated the need to pass around documents of AI input and output to evaluate the system’s performance. 

“What this replaced were tons and tons of spreadsheets,” she says. “Gentrace has this slick UI that plugs straight into our AI code.”

And as of Tuesday, Gentrace is offering a new feature called Experiments that gives users even more power to test entire applications from the Gentrace interface. With Experiments, users can specify parameters for a test run like data sets to access, prompts to AI systems, and database configuration settings. With simple initial tweaks to their code, developers can mark particular variables as editable within Gentrace, and teammates with no coding knowledge can then set them as desired for a particular test run. Test reports within Gentrace log what’s already been tried in prior tests and how the software performed. 
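The pattern of marking variables as editable so non-engineers can change them per test run can be sketched generically: the code reads its tunable settings from a plain config object instead of hard-coding them. The class and field names below are hypothetical and do not reflect Gentrace's SDK.

```python
# Illustrative sketch: tunable test-run settings pulled out of the code so a
# UI (or a teammate) can edit them without touching application logic.
# Names are hypothetical, not Gentrace's actual API.

from dataclasses import dataclass

@dataclass
class ExperimentConfig:
    model: str = "some-model-id"                  # assumed placeholder identifier
    temperature: float = 0.2
    prompt_template: str = "Summarize: {text}"

def build_run_prompt(config: ExperimentConfig, text: str) -> str:
    """Build the prompt a test run would send, using the editable settings."""
    return config.prompt_template.format(text=text)

# A teammate could change these values for a test run without editing code.
cfg = ExperimentConfig(prompt_template="Summarize in one sentence: {text}")
print(build_run_prompt(cfg, "Gentrace raised an $8 million Series A."))
```

Logging each config alongside its results, as the article describes Gentrace's test reports doing, is what lets a team see which settings have already been tried.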

“We just wrap, end-to-end, your application, no matter how you’ve architected it, which means we can measure the impact of any change,” says Safreno. “You could have 20 models chained together, generating an output, and you could tweak one prompt along the way, and we could measure the impact of that.”

The company also on Tuesday announced an $8 million Series A funding round led by Matrix Partners, with additional participation from Headline and K9 Ventures. The new investment will fund additional product development, which Safreno says may one day enable AI—as well as humans—to design tests for LLM-powered applications, like searching through potential prompts or other settings to find the best-performing options for an app, or generating new test cases to evaluate performance.

Future versions of Gentrace Experiments will likely also include the ability to experiment with different potential settings, then directly deploy the best-performing options to live code. But even the current version is likely to make AI development more efficient, Safreno says, by reducing the amount of engineer time and coordination required to run basic tests.

“It’s taking out this enormous loop between multiple stakeholders that just doesn’t need to exist,” he says.

https://www.fastcompany.com/91243257/gentrace-makes-it-easier-for-businesses-to-test-ai-powered-software

Created December 10, 2024, 19:40:08


