Carnegie Mellon University


March 04, 2026

The Hidden Cost of AI Speed

New SCS Research Examines AI Tool Adoption in Software Development

By Marylee Williams

While AI coding tools can increase the speed of code development, research from Carnegie Mellon University's School of Computer Science shows that this speed comes with a price: quality.

Users of these tools need to be judicious about when and how they deploy them because of their potential impact on code quality, researchers noted.

"What ends up happening is the creators of these tools market them with headlines saying things like, 'These AI agents can write 10 times the amount of code that your developer used to write,'" said Shyam Agarwal, a Ph.D. student in CMU's Software and Societal Systems Department (S3D). "But writing code has never been the hard part. It is all about writing good code. Our research asks if you are focusing on speed at the cost of the quality of your work."

Researchers said this is one of the first projects studying how AI coding tools — both autonomous agents that act proactively inside projects and AI tools that assist developers — affect quality.

"This is maybe the first study to look at the adoption of these AI tools in the wild," said Bogdan Vasilescu, an S3D associate professor. "There have been a few small studies that are well-designed with randomized control, but those have necessarily been small-scale, with small tasks and undertaken over a relatively short time. Our work is in the wild — people using these tools on real-world projects — across a large sample of open-source projects."

In one study, the CMU researchers focused on the use of the popular AI coding assistant Cursor. The study compared 806 GitHub projects that used Cursor with a matched control group that didn't use the AI tool. Researchers examined the speed of code written for a project and how maintainable and bug-free the code was. Another study compared autonomous or agentic coding tools — tools that can complete tasks without continuous human guidance — with projects that used AI tools requiring human assistance.

In both studies, researchers found that adopting AI tools led to an initial increase in lines of code added to a project, but those gains didn't last because of later issues with code quality.

For example, developers who adopted Cursor saw, on average, a 281% increase in lines of code added in the first month and a 48% increase in the second month. Development speed on projects that used Cursor, however, ultimately decreased because the codebase was more complex and contained quality issues, such as logic errors or security holes. The increase in speed that these AI tools offer developers isn't sustainable because of the code quality issues they cause down the road, researchers said.

At the same time, the researchers made it clear that they don't see the use of these AI tools as bad. In fact, they used AI tools to do this research and think these tools can be helpful.

"Maybe people are not using these AI tools correctly or the model isn't strong enough," said Hao He, a Ph.D. student in S3D. "From this study's observations, what we can see is that these AI tools are not delivering on their promises yet. And we hope that the designers of these AI tools will see our work and put more effort into quality assurance."

Along with He, Agarwal and Vasilescu, the S3D research team included Ph.D. student Courtney Miller and Associate Professor Christian Kästner. A National Science Foundation award supported portions of this research.