Ben chats with Gias Uddin, an assistant professor at York University in Toronto, where he teaches software engineering, data science, and machine learning. His research focuses on designing intelligent tools for testing, debugging, and summarizing software and AI systems. He recently published a paper about detecting errors in code generated by LLMs. Gias and Ben discuss the concept of hallucinations in AI-generated code, the need for tools to detect and correct those hallucinations, and the potential for AI-powered tools to generate QA tests. https://stackoverflow.blog/2024/09/20/detecting-errors-in-ai-generated-code/
Other posts in this group
Single individuals make less of a difference to the success or failure of a technology project than you might think (and that’s a good thing). https://stackoverflow.blog/2024/12/25/the-real-10x-devel
A developer’s journal is a place to define the problem you’re solving and record what you tried and what worked. https://stackoverflow.blog/2024/12/24/you-should-keep-a-developer-s-journal/
During the holidays, we’re releasing some highlights from a year full of conversations with developers and technologists. Enjoy! We’ll see you in 2025. https://stackoverflow.blog/2024/12/24/how-develo
Computer science deals with concurrency, but what about simultaneity? https://stackoverflow.blog/2024/12/23/can-a-programming-language-implement-time-travel/
Would updating a tool few think about make a diff(erence)? https://stackoverflow.blog/2024/12/20/this-developer-tool-is-40-years-old-can-it-be-improved/
Ben and Ryan catch up with Nenne Adaora “Adora” Nwodo, who recently joined Stack Overflow as a platform engineering manager. They trace her path from her childhood fascination with computers to her years as a software engineer.
It's time to delegate to the robots. https://stackoverflow.blog/2024/12/19/developers-hate-documentation-ai-generated-toil-work/