Busan City Hall Book Summary
Global Trend Media Briefings






  • Why Is AI Always Being Introduced, Yet Performance Arrives So Slowly?


    [Key Message]
    * The biggest obstacle blocking corporate AI performance is not the technology itself. The more serious problem is that executives and middle managers are looking at the same AI but seeing completely different realities.

    * Executives tend to view AI as an opportunity for growth, innovation, and competitive advantage. Middle managers, by contrast, feel the burden of execution first, including increased review work, unclear responsibility, and confusion on the ground.

    * AI does not function as a magic tool that reduces work from the very beginning. In many cases, especially in the early stage of adoption, it can actually create more labor inside the organization through review, coordination, training, and risk management.

    * That is why the success or failure of AI does not depend simply on whether it has been adopted. Real performance emerges only when companies redesign how it is applied to work, who makes judgments, and where responsibility begins and ends.

    * In the future, corporate competitiveness is unlikely to be determined only by who buys better AI first. The real dividing line will be how quickly a company can reduce internal time lags and temperature gaps and turn them into a single execution capability.

    ***


    The Organizational Cost Created by the Time Lag Between Executives and Middle Managers
    A Harvard Business Review article published on April 8, 2026, precisely identifies a problem that many companies are already feeling but have not been able to explain clearly. Senior executives and middle managers within the same company are talking about the same AI in completely different ways, and that gap is not merely a matter of differing opinions but is spreading into real costs. At the top, AI is received as a means of growth, a key to competitive advantage, and a symbol of future readiness. In the middle, however, it is often experienced as unfinished work, increased responsibility, unclear standards, and the burden that will return to them if something goes wrong. As a result, some companies call AI a company-wide strategy while in practice repeating the confusion of the pilot stage, and some organizations announce its adoption in grand terms while, on the ground, the speed and quality of work actually become unstable. The problem is not that the technology is lacking, but that the organization's gaze and language surrounding the same technology have not yet been aligned into one.

    Why Do People Looking at the Same AI End Up Saying Completely Different Things?
    The way executives view AI is generally macro-level. Their attention is focused on market shifts, responses to competitors, cost reduction, productivity improvement, new revenue models, and explainability to shareholders and boards. From this perspective, AI appears to be a wave they must get on before it is too late, and if used well, it looks like a strategic asset that can lift the entire organization to another level. That is why, in executive meetings, AI often appears in the language of "opportunity." The expectation comes first: work can be done faster, more results can be achieved with fewer people, customer experience can be improved, and the organization can respond more nimbly to market changes. These expectations are not unreasonable. In fact, AI is clearly creating efficiency in certain areas of work, such as drafting, organizing materials, searching, code assistance, and handling repetitive tasks.

    But the AI seen through the eyes of middle managers is far more concrete and rough-edged. Before they see the potential of the technology, they see the friction that arises when the technology enters actual work. Questions come all at once: who must be trained when a new tool is introduced, how the existing approval process should be changed, who will be responsible if flawed output appears, how far security or quality issues can be tolerated, and who will review the drafts produced by frontline employees. If AI is "direction" to executives, it is "reality that must be handled" to middle managers. This difference is not merely a difference in attitude. Because they stand in different places, they see different things, and that is why they end up saying completely different things about the same technology.

    This is precisely where many misunderstandings arise. From above, middle managers seem passive toward change, while from the middle, executives seem blindly optimistic without understanding reality. But in truth, it is often not that one side is simply wrong. In many cases, they are seeing truths from different layers. The AI that executives see is the company's future; the AI that middle managers see is today's work. If the future and today are not connected, the organization becomes a body with direction but with its feet tied down. That is why what is needed in the age of AI is not simple technology adoption, but the work of transforming perceptions formed from different positions into one language of execution. The more that work is missing, the more the organization will keep talking about different realities while looking at the same AI, and from that moment, the delay in performance has already begun.

    Expectations Grow at the Top, but the Burden Falls into the Middle
    In many companies, AI adoption begins at the top. Executives speak of vision and speed, the market adds urgency, and the organization moves under pressure not to fall behind. But once actual execution begins, most of the burden falls on middle managers. They have to apply new tools to their teams, review the output produced by frontline employees, change existing processes, deliver results, and prevent accidents at the same time. The more a technology resembles generative AI, which is fast but also prone to errors, the heavier the burden on middle managers becomes. On the ground, the question "What are the standards for using AI?" is far more important than simply being told "Use AI," yet those standards are usually sent downward without being clearly organized.

    A telling pattern appears at this point. At the top, there is the expectation that AI will reduce work, but on the ground, work often increases for a while. Drafts may come out faster, but they must be reviewed more often. Mistakes do not necessarily decrease, and because team members vary in how well they use the tools, the number of management points actually increases. In some organizations, AI writes documents quickly, but more time is required to verify whether the wording is accurate. In other teams, search and organization become faster, but the final verification process becomes more complicated because of the risk that incorrect information may be mixed in. In the end, AI does not eliminate labor all at once; rather, it first changes the shape of labor. The problem is that many organizations do not sufficiently calculate this transition cost.

    As a result, middle managers often receive two kinds of pressure at the same time. From above comes the question, "Why aren't results appearing yet?" From below comes the complaint, "Work has actually increased." They must manage optimism on one side and soothe fatigue on the other. This role is heavier than it appears. Managers must behave like evangelists of change while also serving as breakwaters against risk. In the end, AI adoption is packaged as an innovation project for the whole organization, but in reality, it is often sustained on top of the invisible additional labor of the middle layer.

    If this burden structure is not seen properly, companies keep repeating the same mistake. Expectations for AI rise, but trust on the ground falls. Executives ask, "Why is this so slow?" while managers feel, "Even the standards for how we are supposed to move are unclear." In the end, execution is delayed, and within the organization, fatigue surrounding AI accumulates before AI itself does. What is needed now is not stronger encouragement, but more precise design. It is necessary to reveal who is carrying what burden, at what stage work is increasing, and which standards are missing. The more expectations rise at the top, the more carefully the organization must examine what form of labor and responsibility those expectations take as they spread downward. Otherwise, AI becomes not a tool of innovation, but just another exhausting task within the organization.

    The Real Bottleneck Blocking Performance Is Not Technology, but the Temperature Gap Inside the Organization
    When AI projects fail to produce results as quickly as expected, many companies first look for technical reasons. They say there is not enough data, there are security issues, model quality is inconsistent, system integration is difficult, or budgets are insufficient. Of course, these issues are genuinely important. But in many companies today, the bigger bottleneck arises not from the technology itself but from the temperature gap inside the organization. Executives think of AI as "a path we must already be on," while middle managers and frontline employees accept it as "an uncertain task that has not yet been organized." One side demands speed with confidence, while the other feels that mechanisms to reduce uncertainty must come first. In this state, even a good tool cannot quickly produce results.

    This temperature gap appears in many forms. In some companies, executives use AI almost as if it were essential vocabulary, while on the ground it is still not even clear how far it should be used. In other companies, AI use is encouraged, but if an error occurs, responsibility remains with the individual. Then people naturally begin to move conservatively. The message to use it actively comes down from above, but if something goes wrong, the burden must be borne below. In such a situation, an organization may appear outwardly AI-friendly, yet in reality a very cautious and passive culture of use is likely to take root.

    This temperature gap eventually turns into cost. First, projects slow down. Second, defensive calculations about where AI might be risky come before thinking about where it is useful. Third, only some teams use it experimentally while the rest of the organization does not follow, which blocks organization-wide expansion. Fourth, the method for measuring performance remains vague, preventing success and failure from being turned into learning assets. Fifth, fatigue and cynicism surrounding AI accumulate within the organization. In the end, what is more frightening than technical limits is the spread of the feeling that "this does not really fit our work." Once this sentiment takes hold, even better models and better systems entering later will face an organization whose mind is already closed.

    That is why the central question in the age of AI is not only "Which model should we use?" The more important question is, "What reality does our organization believe AI to be?" If the top calls it innovation while the lower levels call it risk, then the organization's language has already split. An organization with split language cannot help but fall out of step in execution. It can make technology investments, but it cannot achieve technological transformation. The difference between those two things is enormous. An organization that has only invested will always wait for results, while an organization that has successfully transformed will see its very way of working change.

    If a company truly wants to solve the bottleneck surrounding AI, it must first look not only at the technology roadmap but at the map of perceptions across the organization. It must identify which departments are overly expectant, which teams are frozen defensively, what middle managers see as the greatest risks, and what frontline employees find most cumbersome. Only then can it distinguish technical problems from organizational ones, and only then does it become clear where resources should be invested. Ultimately, the success or failure of AI is not determined only on a performance chart. The far more decisive variable is how quickly the organization can align the felt temperature surrounding that technology.

    Middle Managers Are Not a Resistance Force, but Translators
    In organizations where AI adoption stalls, middle managers are often described as obstacles. They are said to be passive toward change, unable to trust new methods, and always focused only on risk. But this view is far too simplistic. Middle managers are usually not enemies of change, but translators of change. They are the people who turn the strategic language coming down from above into working language that frontline employees can understand, and who turn the problems coming up from the ground back into reportable language that decision-makers can act on. If this translation process fails to function properly, the organization easily splits into two worlds. At the top, people ask, "Why won't they use such a good technology?" Below, people respond, "Why are they shouting only about adoption without understanding reality?"

    The role of this translator has become even more important in the age of AI. Even more than earlier digital transformations, generative AI makes it hard to establish standards, because the quality of outputs and the range of possible uses vary each time. Frontline employees want speed, executives want results, and legal or security departments want control. The person who must coordinate these different demands within a single workflow is the middle manager. Yet many companies treat this layer not as a co-designer of strategy, but merely as a transmitter. Then middle managers are left with responsibility but without design authority, and in that state AI naturally becomes an uncomfortable assignment.

    The reason middle managers must be placed at the center is simple. In reality, the point where organizational change stops is usually revealed at this layer. Frontline employees may at least try a tool if they are told to use it. Executives can set direction and allocate budgets. But practical issues such as day-to-day operation, performance evaluation, risk control, workforce allocation, and adjusting work priorities are mostly interpreted and decided at the manager level. In other words, AI that this group has not accepted is unlikely to take root in an organization. On the other hand, AI that this group properly understands and empathizes with spreads much faster. Managers are not merely a channel; they are a central axis of diffusion.

    So what companies really need to do is not persuade middle managers, but create conditions in which middle managers can move realistically. For example, they need to clearly organize quality standards for AI use, avoid leaving responsibility for errors solely with individuals, and specify which tasks may invite experimentation and which require strict review. It is also necessary to formally recognize the hidden labor newly added to managers through AI adoption, such as reviewing outputs, guiding team members, and managing usage standards. Only then can managers accept AI not as an additional burden, but as a manageable change.

    Organizations that push technology adoption while ignoring this layer may appear to move quickly on the surface, but internally they easily increase friction. By contrast, organizations that recognize middle managers as the central translators of strategy achieve execution that is actually more stable. The reason is simple. The language between strategy and reality becomes aligned. In the end, middle managers in the AI era are not a resistance force, but a connecting device. Without them, an organization cannot join the ambition above with the reality below. And in organizations where that connection is broken, no matter how good the technology is, it is difficult for it to lead to performance in the end.

    More Important Than Adoption Is the Redesign of Work
    The easiest trap many companies fall into when talking about AI strategy is thinking that bringing in a tool itself constitutes change. They purchase a new platform, distribute internal accounts, create training videos, and run pilots in a few teams, and it feels as though some major transformation has begun. But the real change begins after that. If it is not decided which tasks AI will handle, where humans will make judgments, which outputs can be used immediately, which must be reviewed, and how existing approval and reporting systems should be changed, then AI merely adds one more step on top of the old work. In that case, there has been adoption, but there has been no transformation.

    Work redesign matters because AI does not simply replace existing work. In most cases, AI forces organizations to ask again about the order of work, the method of review, and the structure of responsibility. Work that used to be created entirely by a person can now be shifted into a model in which AI generates the first draft and a person refines it. Then the human role moves from simple producer to editor, judge, and reviewer. The problem is that many organizations do not officially recognize this shift as a formal change in work. Roles have changed, but evaluation standards remain the same; procedures have changed, but responsibility structures remain in their old form. As a result, people find themselves in an unstable and ambiguous condition even while using AI.

    Generative AI in particular produces results quickly, but the quality of those results can fluctuate greatly depending on the situation. That is why it can be highly useful in some tasks, while in others the added cost of review may become even greater. In areas such as idea drafting, sentence refinement, internal search, meeting summaries, and repetitive responses, substantial efficiency may be achieved. But work with major legal responsibility, work highly sensitive to factual error, or work that directly touches customer trust requires much greater caution. If an organization tries to apply AI to every task at the same speed, friction will inevitably arise. Each organization, each department, and each type of work requires a different approach.

    That is why what truly matters is not the slogan of "organization-wide adoption," but "task-by-task design." It is necessary to distinguish in detail where AI saves time, where humans must retain final judgment, which departments are centered on accuracy, and which are centered on speed. Methods of measuring performance must also be rebuilt. The goal is not simply to increase usage volume, but to see whether errors were reduced, whether time spent on repetitive work declined, whether decision-making speed improved, and whether customer satisfaction rose. Only then can AI become not a trendy tool, but a means of transforming organizational productivity.

    Work redesign is also a measure of how seriously an organization takes AI. An organization that merely distributes tools treats AI as one option, while an organization that restructures work sees AI as a change in the way work itself is done. The difference between the two is likely to widen over time. The former will always feel that "results are arriving more slowly than expected," while the latter will finally begin to accumulate knowledge about where AI truly works. Performance is not something technology produces automatically. It is created only when, after the technology enters, people's roles, standards, and workflows are redesigned to fit it.

    In the Future, Competitiveness Will Be Determined More by Coordination Gaps Than by Technology Gaps
    Many people think that future differences between companies will be determined by how quickly they secure better models. That is true to some extent. But over time, a more fundamental gap is likely to emerge not from the technology itself but from the ability to coordinate. Even when using similar AI tools, some companies produce tangible results while others endlessly repeat pilots. That difference cannot be explained only by differences in model performance. A far more important variable is how quickly the strategy above, the coordination in the middle, and the execution below align in one direction.

    AI requires coordination across much broader layers than traditional IT systems. It is not only the concern of the technology department, but also intertwined with HR, legal, security, business teams, training, and evaluation systems. Executives want speed, business units want practicality, and control functions want safety. If these three do not move together, one side's restraint will hold back the others. That is why competitiveness in the age of AI is revealed less by "what was purchased" than by "how quickly differing demands were turned into an agreed structure." Organizations that succeed in this coordination can turn AI from a partial experiment into the operating method of the organization. Those that fail remain stuck in the gap between declaration and reality.

    This ability to coordinate is also a matter of culture. An organization that interprets anxiety from the ground not as a problem signal but only as resistance easily deepens internal cracks. By contrast, an organization that treats the concerns of managers and frontline employees like data, and adjusts standards and processes to reflect those concerns, can raise the speed of change far more stably. In the age of AI, optimism alone is not enough, and conservatism alone is not enough either. What is needed is an operational sense that can handle optimism and caution together. The organizations most likely to move ahead are those that quickly share what works and what fails in each department, do not hide failure but revise it, and give managers not only responsibility but also authority and standards.

    In the future, companies that use a lot of AI may not be the strongest; companies that fit AI well into their organization may be stronger. Here, "fit it well" does not mean only that they have finely tuned a model. It means that they have created a shared sense within the organization about who uses AI in what way, what humans remain finally responsible for, and what kinds of performance can realistically be expected. An organization that has this shared sense can correct itself quickly even when it encounters trial and error. By contrast, an organization that lacks this sense is more likely to let a single failure spread into distrust of the entire technology.

    In the end, a company's real ability in the age of AI does not reveal itself in flashy announcements of adoption. It reveals itself in the choices it makes when executive ambition collides with frontline fatigue. Those above see more broadly, those below see more concretely, and those in the middle feel things as more complicated. Only companies that can bind these different senses into a single execution capability will be able to turn AI from a cost into an asset. That is why future competition is likely to be determined more by coordination gaps than by technology gaps. The ability to shorten the time lag surrounding AI, the ability to bind different perceptions of reality in the same direction, and the ability to think first about redesign rather than mere adoption will ultimately determine long-term corporate competitiveness.

    AI has already entered many companies. But entering and taking root are different things. As more companies come to feel this difference, the question will also change. The more important question will no longer be, "Have we adopted AI?" but rather, "Have we aligned the organizational language surrounding AI?" The reason performance often arrives slowly is not, in many cases, because the technology is lacking. It is because an organization moving at different speeds has not yet found one rhythm. And the companies that find that rhythm first will ultimately be the ones that seize performance first in the age of AI.

    Reference
    Harvard Business Review. "Managers and Executives Disagree on AI—and It's Costing Companies." April 8, 2026.
    Harvard Business Review. "Close the Gap Between AI Ambition and Execution." April 15, 2026.