Science, 2026-01-22/2026-02-19, "Who is using AI to code? Global diffusion and impact of generative AI", Simone Daniotti; Johannes Wachs; Xiangnan Feng; Frank Neffke
AI Does Not Help Everyone Equally
Generative AI appears to be a tool open to everyone, but its actual effects are not the same for all people. The latest research shows that even though beginners use it more often, the greater benefits may still go to experienced users. The real gap in the AI era is likely to emerge not from access itself, but from the ability to judge and verify.
AI Has Already Moved Beyond Experiment and Become the Workplace
On January 22, 2026, a study was published in the international academic journal *Science* that tracked on a large scale how deeply generative AI had entered the real world of coding. The research was led by a team consisting of Simone Daniotti, Johannes Wachs, Xiangnan Feng, Frank Neffke, and others. They analyzed the vast record of Python code contributions uploaded to GitHub from 2018 to 2024 and applied a classification model to track the activities of roughly 160,000 to 200,000 developers, identifying which code had been written with the help of AI. The question behind the study was simple. How rapidly is generative AI spreading, and to whom do its effects return more strongly?
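The study's actual classifier is not reproduced here, but the general shape of such a pipeline (extract stylistic features from each contributed function, then score the likelihood that AI assisted in writing it) can be sketched. Everything below — the feature set, the weights, and the function names — is a hypothetical illustration for intuition, not the researchers' method:

```python
import math
import re


def extract_features(func_src: str) -> dict:
    """Extract simple stylistic features from one Python function's source.

    These features (comment density, docstring presence, average line
    length) are illustrative stand-ins only; the study's real feature
    set and model are not reproduced here.
    """
    lines = func_src.strip().splitlines()
    n = max(len(lines), 1)
    comments = sum(1 for line in lines if line.lstrip().startswith("#"))
    has_docstring = '"""' in func_src or "'''" in func_src
    avg_line_length = sum(len(line) for line in lines) / n
    return {
        "comment_density": comments / n,
        "has_docstring": 1.0 if has_docstring else 0.0,
        "avg_line_length": avg_line_length,
    }


def ai_assist_score(features: dict) -> float:
    """Map features to a [0, 1] likelihood with a toy logistic model.

    The weights are made up for illustration; a real model would be
    trained on labeled examples of human- and AI-written code.
    """
    z = (
        2.0 * features["comment_density"]
        + 1.5 * features["has_docstring"]
        + 0.01 * features["avg_line_length"]
        - 1.0
    )
    return 1.0 / (1.0 + math.exp(-z))  # logistic squashing into (0, 1)


def estimated_ai_share(functions: list[str], threshold: float = 0.5) -> float:
    """Fraction of functions whose score exceeds the threshold."""
    flags = [ai_assist_score(extract_features(f)) > threshold for f in functions]
    return sum(flags) / max(len(flags), 1)
```

Applied over millions of contributions, an aggregate like `estimated_ai_share` is what would yield a population-level figure such as "about 30% of new functions show AI assistance" — again, under this sketch's invented scoring, not the paper's.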
The results were far more striking than expected. By the end of 2024 in the United States, about 30% of newly written Python functions were estimated to have received AI assistance. This meant that generative AI was no longer merely an experimental technology used by a handful of early adopters, but had become a tool deeply embedded in actual production settings. The more important point was that this change had taken place in a very short period of time. Only a few years ago, AI coding tools were regarded as little more than "interesting assistive features," but they have now become elements that account for a meaningful share of the real software production process. In terms of the speed of adoption alone, it would not be an exaggeration to say that generative AI has entered the workplace faster than almost any recent productivity tool.
This scene is symbolic. Technology always begins by looking like a choice for a small minority, and then at some point it turns into infrastructure. Email, search engines, smartphones, and messaging apps all followed that path. At first, they looked like optional tools that one could use or not use, but at a certain point they became environments in which it was difficult to work without them. Generative AI is walking a similar road. Today it may look like a choice, but tomorrow it may become a basic assumption. What matters now is not whether one uses AI or not, but who gains greater benefits and who falls relatively behind in a world of work where AI has already entered.
On the surface, this technology seems as though it would favor beginners more. They can ask about syntax they do not know, receive code drafts quickly, and begin tasks in seconds that in the past would have required long periods of searching and trial and error. In fact, the study also found that less experienced developers used AI more often. Seen only from this angle, generative AI appears to be a tool that lowers technical barriers and opens new opportunities for those with less experience. Many people indeed expect exactly that. The belief that "now even beginners can work almost like experts if they use AI well" comes from this point.
But this is precisely where the study shows a direction different from what many expected. Although the share of AI usage was higher among beginners, the tangible gains in productivity, qualitative expansion of work, and exploration of new tools appeared more clearly among experienced developers. In other words, those who used it the most were not the same as those who benefited from it the most. Beginners called on AI frequently, but experienced users employed AI more strategically. Generative AI was, in effect, a tool that gave greater force not to those who merely used it more, but to those who interpreted it better and positioned it more appropriately.
Beginners Use It More Often, but Experts Gain More from It
This point is extremely important for understanding the AI era. Popular imagination surrounding generative AI contains the expectation that "everyone will be able to work like an expert." But reality may be much more subtle. AI is a machine that produces answers, but it does not take over the task of judging whether those answers are right or wrong, whether they are appropriate to the present situation, where vulnerabilities are hidden, or what responsibilities follow from them. This is especially true in work like coding, where errors do not always reveal themselves immediately on the surface. Code that looks fine at first glance may still contain security problems, may be unfavorable for long-term maintenance, or may cause fatal errors only in particular situations.
Experienced developers can look at a draft produced by AI and identify what is missing, detect dangerous structures, and understand why a certain library was chosen and whether that approach fits the present context. Beginners, by contrast, may receive an answer that looks plausible but lack the standards by which to evaluate it. For that reason, AI often appears to beginners as "a tool that instantly gives impressive answers," while for experienced users it becomes "a tool that saves time while allowing them to design work on a broader scale." This difference is not small. The former may remain at the level of receiving answers, while the latter moves on to reviewing, revising, and integrating those answers into larger outcomes.
In this sense, generative AI may function less as a tool that automatically levels ability and more as a tool that adds extra leverage on top of ability that already exists. Just as giving the same hammer to a skilled carpenter and to a novice will not produce the same result, using the same AI will not necessarily produce the same quality of work. The tool is open to everyone, but the hands and eyes that handle the tool are not the same for everyone. That is why the democratization of technology does not automatically mean the democratization of capability.
If we think further about it, the value of AI increases when it is combined not with the "quantity of output" but with the "quality of judgment." Experienced users know what to delegate to AI and what to adjust by hand. They have a feel for which prompts lead to good results, and which responses may look convincing but should not be trusted. Beginners, by contrast, are more likely either to trust AI's output too easily or to use it too simply because they do not know how to take fuller advantage of it. In the end, what creates the difference is not AI itself, but interpretive power and contextual sense in handling AI. Even if AI is given equally to everyone as a tool, that does not mean the results become equal as well.
What Is Happening in Coding Will Soon Become a Problem for the Entire White-Collar World
This is also why the study does not end as a story confined to the software industry. What is happening in coding now is likely to spread across white-collar work more broadly. The same questions can arise anywhere generative AI enters—report writing, data analysis, planning documents, marketing copy, translation, legal drafts, customer response, and more. The ability to create a draft quickly may matter less than the ability to evaluate that draft, revise it, and reposition it in a way that fits the context.
Take report writing, for example. A beginner can use AI to produce an outline and rapidly expand sentences. But whether those sentences fit the context of the organization, whether they miss important issues, and whether the numbers and logic are consistent are matters that ultimately still require human judgment. The same is true of marketing copy. AI can instantly produce plausible slogans and lines, but preserving the tone of a brand, reading the psychology of the target audience, and avoiding risk require a much higher level of sensibility. Similar things happen with legal drafts, financial analysis, educational content, and customer response. AI creates drafts quickly, but turning them into outputs one can stand behind still requires a higher level of judgment.
As a result, beginners may find it easier to start using tools at the entrance to work, but experienced users may gain far larger productivity advantages. In this case, AI may end up functioning less as a ladder that lifts everyone at once and more as a longer ladder handed to those who were already standing higher up. This is why optimism that technology will reduce inequality does not automatically become reality. On the contrary, those who already know more may become faster and take on broader work, while those who know less may start more quickly but fail to grow in depth.
This change also affects the power structure inside organizations. In the past, a certain volume of repetitive work required a corresponding number of workers, but now the same number of people can process more with AI. This may lead companies to value those who review results and set direction more than those who merely execute repetitive tasks. In other words, the value of "people with good judgment" may rise more than that of "people who work quickly with their hands." Generative AI is not simply a tool that increases productivity; it may also change which kinds of people organizations regard as more important.
The Problem May Be Not Jobs but the Ladder of Growth
At the level of the labor market, an even more sensitive problem emerges. In many fields, people begin with the simplest tasks and gradually learn more complex forms of judgment as they become skilled. Junior developers build intuition through repetitive code revision and debugging, and entry-level office workers learn the feel of work by repeatedly organizing documents, gathering materials, and drafting initial versions. This process may seem tedious and inefficient, but it is in fact a very important stage in the formation of expertise. People learn "why it is done this way" through repetition of small tasks, and on top of that they build larger judgment.
But if generative AI begins to replace a large share of this entry-level work, the question remains: where will people gain the experience they need in order to move up into intermediate skill levels? Right now AI is likely to make the skilled even stronger, while at the same time weakening the training path that leads to skill. This structure can improve efficiency in the short term, but in the long term it may destabilize the ladder through which talent develops. Almost no one begins with fully formed judgment. Most people grow by doing simple tasks, learning through mistakes, and gradually developing instinct. But if those entry-level tasks begin to disappear, society will face the larger question of how it is supposed to cultivate the skilled workers of the future.
This is also a burden for companies. In the short term, reducing entry-level hiring and increasing the productivity of skilled workers with AI may look efficient. But if, a few years later, there are not enough new middle managers, senior practitioners, and skilled experts, that efficiency may return as a future cost. The same is true for schools. It is not enough simply to set rules saying "AI may be used" or "AI must not be used." What matters more is how students and new workers will learn to question, verify, and revise AI's answers. In the end, what companies and society need to think about is not whether to adopt AI. What may matter more is how to redesign the very process through which entry-level people learn judgment and responsibility.
At the same time, there is also a positive possibility. If AI reduces simple repetitive work, people may be able to move more quickly into areas of more complex judgment. The problem is that this does not happen automatically. There must also be structures in place in which someone teaches, reviews, gives feedback, and allows people to learn through failure. Otherwise, AI may become not a tool that helps learning, but a cover that conceals the empty spaces where learning should have occurred. On the surface, everything may look faster, but underneath, there may be more cases in which skill remains hollow.
The Accuracy of the Eye Matters More Than the Speed of the Hand
That is why, in the AI era, competitiveness is likely to be decided more by the accuracy of the eye than by the speed of the hand. In the past, what mattered was how quickly one could write and how much one could process. Now what matters more may be the ability to decide what to delegate and what must still be judged directly. In other words, the value of review over input, discrimination over production, and responsibility over speed is rising. Generative AI is not eliminating labor so much as shifting its center of gravity. People are gradually changing from "those who make things directly" into "those who evaluate and adjust results."
This change is larger than it first appears. In the past, many professions revolved around "how well can you produce something." In the future, "how well can you select" may become more important. The ability to draft directly will still matter, but the ability to decide which of several drafts is most appropriate, which parts should be discarded and which should be preserved, and where risks may emerge will create the greater difference. In the end, the human role shifts not only toward creation, but toward editing, discrimination, and responsibility.
The problem is that the abilities required for this transition are still not being trained widely enough in schools and workplaces. We still tend to evaluate speed in producing correct answers, processing large quantities, summarizing, and organizing. But if AI can already replace a substantial portion of that territory quickly, then the value that remains for humans moves elsewhere. The power to interpret, the power to doubt, the power to read context, and the power to make decisions one can take responsibility for may become more important. These abilities do not arise overnight. They require experience, mistakes, feedback, and training.
If the democratization of tools does not automatically mean the democratization of ability, then the gap of the future is likely to emerge more from interpretive power than from access. Even if everyone can use the same AI, not everyone will obtain the same result. Some people will go much farther through AI, while others may find that AI made it easier to begin but did not allow them to grow in depth. That is why the real question raised by generative AI is not "who uses it more." It is closer to "who can read better, doubt better, and take responsibility better." An era may already have begun in which people use the same tool, yet some climb much faster while others remain in place.
Reference
Science, 2026-01-22/2026-02-19, "Who is using AI to code? Global diffusion and impact of generative AI", Simone Daniotti; Johannes Wachs; Xiangnan Feng; Frank Neffke