    Increase in Completed Work Driven by AI Coding Assistants
    - A change confirmed not by impressions after adopting a tool, but by randomized experiments

    AI coding assistants are said to make developers faster. But what truly matters is not the feeling of speed, but whether more work actually gets finished. Three randomized field experiments provided a numerical answer to that question.

    Asking whether more work gets finished, not whether it feels faster
    Coding assistants are a technology that provokes sharply divided reactions. People who have used them say they definitely became faster, while those who have not worry that the code may become sloppy. But one point matters here. A feeling of speed alone is not enough to make a judgment. Writing code faster and actually finishing more work are not always the same. Software development does not run on individual speed alone. There are meetings, reviews, tests, deployments, and operations. Even if one person becomes faster, the overall pace can remain unchanged if the next stage is blocked.

    So the question shifts to this: if a tool is given, does the amount of work that actually gets finished increase? And for whom does the effect appear most strongly? This question is tricky for a simple reason. The people who start using a tool first may already be fast learners, and the teams that adopt first may already be well organized. Then it becomes difficult to separate what comes from the tool from what comes from people and teams that were already strong.

    What random assignment does
    The cleanest method here is a randomized experiment. People are divided into two groups, and only one side receives access to the coding assistant first, while the other continues working in the usual way. This makes tool use the result of assignment rather than personal preference. It also allows comparison of what differences emerge while both groups work within the same company, in the same period, on similar tasks.

    This approach is valuable for a simple reason. It makes it possible to confirm what changes occur in real workflows rather than relying on impressions that a tool seems good. It supports statements such as "more work got finished" rather than "it felt faster."
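
    To make the logic concrete, here is a minimal sketch in Python. Everything in it is hypothetical: the group sizes, the baseline output, and the assumed effect size are made-up numbers, not data from any study. It only illustrates why assignment by lottery lets a simple difference in means estimate the tool's effect.

        import random
        random.seed(0)

        # Hypothetical population of developers; assignment is random,
        # so tool access does not depend on skill or preference.
        developers = list(range(1000))
        random.shuffle(developers)
        treatment = set(developers[:500])   # gets the assistant first
        # the other 500 keep working as before

        def weekly_completed_tasks(dev_id):
            base = random.gauss(10, 3)                          # made-up baseline output
            lift = 0.26 * base if dev_id in treatment else 0.0  # assumed effect, for illustration
            return max(0.0, base + lift)

        treated = [weekly_completed_tasks(d) for d in developers if d in treatment]
        control = [weekly_completed_tasks(d) for d in developers if d not in treatment]

        # Because access was assigned at random, this simple comparison of
        # group means is an unbiased estimate of the effect of the tool.
        print(sum(treated) / len(treated), sum(control) / len(control))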

    A study combining field experiments across three companies
    A study by researchers from Princeton University, the MIT Sloan School of Management, the University of Pennsylvania, and Microsoft Research combined randomized field experiments conducted at three companies. The subjects were Microsoft, Accenture, and an anonymous Fortune 100 electronics manufacturing company.

    The common structure was very simple. Some developers were granted access to a coding assistant first, while others continued working with the existing approach. After outcomes had been compared for a set period, access to the tool was opened to all groups. This staging reduced the room for factors such as who already liked the tool to influence the results.

    Operational details differed somewhat by company. Microsoft and Accenture clearly separated the two groups for a defined period, while the anonymous company used a design that varied the timing of adoption by team so that comparison was possible. Using work management systems and task records, the researchers tracked how weekly completed work volume changed.
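
    For the staggered-timing design, one common way to estimate the effect is a two-way fixed-effects regression on team-week panel data. The sketch below is illustrative only: the file name and column names are assumptions, and it is not the paper's actual specification.

        import pandas as pd
        import statsmodels.formula.api as smf

        # Hypothetical panel: one row per team per week, with columns
        # team_id, week, has_assistant (0/1), completed_tasks.
        df = pd.read_csv("weekly_tasks.csv")

        # Team and week fixed effects absorb stable differences between teams
        # and shocks common to all teams in a given week, so the coefficient
        # on has_assistant compares the same team before and after it
        # received access, relative to teams that had not yet switched.
        model = smf.ols(
            "completed_tasks ~ has_assistant + C(team_id) + C(week)",
            data=df,
        ).fit(cov_type="cluster", cov_kwds={"groups": df["team_id"]})

        print(model.params["has_assistant"])  # estimated effect on weekly output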

    What the average 26 percent increase in completed work means
    When data from the three companies were pooled and analyzed across 4,867 people, the group using the coding assistant showed an average increase of about 26 percent in the number of completed tasks. Because the nature of work differed by company, the size of the increase was not identical everywhere, but the overall direction was consistent: more work was getting finished.
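
    As a rough illustration of what pooling per-company results can mean, the sketch below combines site-level estimates using inverse-variance weights, a standard meta-analysis technique. The numbers are placeholders, and the paper's actual pooling procedure is not reproduced here.

        # Hypothetical (estimate, standard error) pairs per company;
        # placeholders for illustration, not the study's results.
        effects = {
            "company_a": (0.21, 0.08),
            "company_b": (0.30, 0.10),
            "company_c": (0.27, 0.12),
        }

        # Inverse-variance weighting: more precise estimates count more.
        weights = {name: 1.0 / se ** 2 for name, (_, se) in effects.items()}
        total = sum(weights.values())
        pooled = sum(weights[name] * est for name, (est, _) in effects.items()) / total
        pooled_se = (1.0 / total) ** 0.5

        print(f"pooled effect: {pooled:.2f} +/- {pooled_se:.2f}")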

    Another notable point is who improved more. Developers with shorter tenure or lower skill tended to use the tool more often, and they also tended to show larger gains. Put simply, the people this tool helps first may not be those who are already fast, but those at a stage where trial and error is frequent.

    How to avoid raising speed while losing quality
    Once results show that more work gets finished, the next concern follows. Is quality still okay? Do bugs increase, does review become harder, or does rework rise because tasks were finished faster?

    One point matters here. A tool may raise speed, but quality must be protected by how the organization works. If review standards are loose and testing is weak, the extra speed can come back later as a much larger cost. On the other hand, if coding conventions are clear, test automation is strong, and review culture is solid, a speed boost is far less likely to spill over into a quality decline.

    That is why the order of rollout matters. A good starting point is junior developers and newly staffed roles. Starting there can shorten the learning curve and stabilize overall team velocity. At the same time, it is safer to improve review standards, test automation, and coding conventions together. If only the tool is introduced and everything else is left unchanged, the team may simply become busier in proportion to the increased speed.

    Adoption rates also need to be managed. Even over time, not everyone may use the tool, and the reason is often less resistance than a mismatch between habits and the existing workflow. Training is also more effective when it focuses less on feature explanations and more on helping the team agree on where to use the tool and where not to use it.

    The conclusion is simple. AI coding assistants can make developers faster, but they deliver their biggest effect when they also push the team's way of working to become more organized. The 26 percent shown by randomized experiments is not hopeful intuition. It is a signal that the amount of finished work can actually increase. And whether that signal becomes a good outcome is determined not by the tool, but by operations.

    Reference
    Cui, Zheyuan Kevin; Demirer, Mert; Jaffe, Sonia; Musolff, Leon; Peng, Sida; Salz, Tobias. (2025). The Effects of Generative AI on High-Skilled Work: Evidence from Three Field Experiments with Software Developers. Working Paper.