Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests. https://stackoverflow.blog/2024/09/20/detecting-errors-in-ai-generated-code/
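As a purely illustrative aside (not the technique from Gias's paper, which the episode discusses), here is a minimal Python sketch of one naive hallucination signal in LLM-generated code: flagging calls to attributes that do not exist on the modules the generated code imports. The function name and overall approach are assumptions for illustration only.

```python
# Minimal sketch (NOT the approach from the paper discussed above): flag
# module.attr calls whose attribute cannot be resolved on the real,
# installed module -- a common class of LLM "hallucination".
import ast
import importlib

def find_unresolvable_calls(source: str) -> list[str]:
    """Return module.attr calls in `source` whose attribute does not
    exist on the actually-importable module (a hallucination signal)."""
    tree = ast.parse(source)
    imported = {}  # alias used in the code -> real module name
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                imported[alias.asname or alias.name] = alias.name
    suspicious = []
    for node in ast.walk(tree):
        # Match calls of the form <module>.<attr>(...)
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Attribute)
                and isinstance(node.func.value, ast.Name)
                and node.func.value.id in imported):
            mod_name = imported[node.func.value.id]
            try:
                mod = importlib.import_module(mod_name)
            except ImportError:
                continue  # module unavailable locally; cannot judge
            if not hasattr(mod, node.func.attr):
                suspicious.append(f"{mod_name}.{node.func.attr}")
    return suspicious

# Example: `json.loadd` looks plausible but does not exist.
print(find_unresolvable_calls("import json\njson.loadd('{}')"))
# -> ['json.loadd']
```

Real detection tools go well beyond this (type checking, test generation, execution feedback), but the sketch shows why static resolution against the actual library is a cheap first filter.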
More posts in this group

Today’s episode is a roundup of spontaneous, on-the-ground conversations from HumanX 2025, featuring guests from CodeConductor, DDN, Cloudflare, and Galileo. https://stackoverflow.blog/2025/04/25/grab

In this episode of Leaders of Code, we chat with guests from Lloyds Banking Group about their focus on engineering excellence and the need for organizations to adapt to new technologies while ensuring…

An update on recent launches and the upcoming roadmap https://stackoverflow.blog/2025/04/23/community-products-roadmap-update-april-2025/

Ryan chats with Dataiku CEO and cofounder Florian Douetteau about the complexities of the genAI data stack and how his company is orchestrating it. https://stackoverflow.blog/2025/04/22/visually-orch

On today’s episode, Ben and Ryan chat with Laly Bar-Ilan, Chief Scientist at Bit. https://stackoverflow.blog/2025/04/18/generating-components-not-tokens/

Is “agentic AI” just a buzzword, or is it the sea change it seems? https://stackoverflow.blog/2025/04/17/wait-what-is-agentic-ai/

Kyle chats with Jesse Tomchak, a software engineer at ClickUp, about all the spicy backend takes they could find. https://stackoverflow.blog/2025/04/09/wbit-6-be-curious-ask-questions-and-don-t-argue-w