For teams actively using AI coding assistants (Copilot, Cursor, Windsurf, etc.), I'm noticing a frustrating pattern: the more complex your codebase, the more time developers spend preventing AI from breaking things.
Some common scenarios I keep running into:
* AI suggests code that completely ignores existing patterns
* It recreates components we already have
* It modifies core architecture without understanding the implications
* It forgets critical context from previous conversations
* It needs constant reminders about our tech stack decisions
For engineering teams using AI tools:
1. How often do you catch AI trying to make breaking changes?
2. How much time do you spend reviewing/correcting AI suggestions?
3. What workflows have you developed to prevent AI from going off track?
Particularly interested in experiences from teams with mature codebases.
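To make question 3 concrete, here is a rough sketch of the kind of guardrail I have in mind: a small, repo-specific check that runs in CI or as a pre-commit hook and mechanically catches the two cheapest-to-detect failure modes above (recreated components and imports that reach past an architectural boundary). Everything here is a placeholder assumption, not a real tool or our actual setup: the src/components layout, the core.internal package name, and the naming heuristic would all need to match your own conventions.

```python
#!/usr/bin/env python3
"""Sketch of a repo guard against two common AI-assistant failure modes:
duplicated components and imports that cross an architectural boundary.
All paths and rules below are placeholder assumptions."""

import pathlib
import re
import sys

REPO = pathlib.Path(__file__).resolve().parent          # assumes script sits at repo root
COMPONENT_DIR = REPO / "src" / "components"              # assumed component layout
FORBIDDEN_IMPORT = re.compile(r"(from|import)\s+core\.internal\b")  # assumed boundary


def duplicate_components() -> list[str]:
    """Flag component modules whose names collide with existing ones once
    case, underscores/dashes, and a trailing 'component' are ignored."""
    seen: dict[str, pathlib.Path] = {}
    problems = []
    for path in sorted(COMPONENT_DIR.rglob("*.py")):
        key = re.sub(r"[_\-]", "", path.stem.lower()).removesuffix("component")
        if key in seen:
            problems.append(f"{path} looks like a duplicate of {seen[key]}")
        else:
            seen[key] = path
    return problems


def boundary_violations() -> list[str]:
    """Flag source files that import the core-internal package directly."""
    problems = []
    for path in REPO.rglob("*.py"):
        if any(part.startswith(".") for part in path.parts):
            continue  # skip .git, virtualenvs, etc.
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if FORBIDDEN_IMPORT.match(line.lstrip()):
                problems.append(f"{path}:{lineno}: imports core.internal directly")
    return problems


if __name__ == "__main__":
    issues = duplicate_components() + boundary_violations()
    for issue in issues:
        print(issue)
    sys.exit(1 if issues else 0)
```

The point isn't this particular script; it's that mechanical checks absorb the AI's repeat offenses, so reviewer attention goes to the changes that actually need judgment.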
Comments URL: https://news.ycombinator.com/item?id=42701745