

  • [Information Technology/Productivity]

    The Biggest Implications of Artificial Intelligence for the 2020s

    By Global Trends Editor Group

    As with prior transformative general-purpose technologies including electricity, steel, assembly lines, and steam engines, the productivity enhancing potential of artificial intelligence will eventually change nearly every industry, directly or indirectly.

    As highlighted in our December 2022 issue, the capabilities and costs of artificial intelligence are on a trajectory toward mass-market take-off. Extraordinary improvements in sensor technology, cloud-based infrastructure, AI accelerator hardware, and AI software tools are happening just as companies are assembling the datasets and the talent needed to exploit this technology.

    Furthermore, the pandemic accelerated trends in labor force demographics and customer needs, which have revealed a greater-than-expected need for AI-based solutions.

    This is the sort of convergence of technology, demography, and behavior that typified previous techno-economic revolutions. However, just as with electricity, the gulf between fundamental discoveries and economically transformative solutions is not being bridged everywhere at the same time.

    To extend the analogy to electricity, AI has now moved well beyond the era of Volta and Faraday, but it’s just entering the era of Edison and Tesla. And game-changing everyday solutions analogous to electric lighting, movies and “the electric utility grid” are in their infancy.

    For this reason, many consumers, managers and investors believe AI is as over-hyped and premature as “nuclear fusion power” or “quantum computing.” To properly understand its potential in terms of both opportunities and threats, it is important to recognize that, like integrated circuits and steel, AI is a multi-faceted, general-purpose transformative technology.

    It has the potential to transform an unimaginable array of products, services, business processes and industries, some of which can’t now be forecast. By comparison, nuclear fusion power will simply be a cheaper and cleaner source of electricity, while quantum computing is a tool theoretically capable of solving an important but limited set of problems which don’t lend themselves to classical computing.

    So, it’s vital that managers and investors anticipate where and when AI will have its greatest success. Then, based on that understanding, they’ll need to consider how it’s likely to diffuse across the broader economy delivering value which can be captured. As of early 2023, the fog is rapidly clearing, giving us a better understanding of the opportunities, as well as the threats.

    Consider the facts we know and what they are telling us.

    Let’s start with the lessons of history.

    In 1930, we had just entered the transition phase of the Mass Production Techno-Economic Revolution; that’s analogous to where the Digital Techno-Economic Revolution stood in 2001.

    Within that context, famed economist John Maynard Keynes made a series of predictions about technological unemployment arising from what he described as, “our discovery of means of economizing the use of labor outrunning the pace at which we can find new uses for labor.”

    Had Keynes’s forecasts been correct, Americans would now be struggling with how to purposefully deploy all the newfound leisure time created by technological change, rather than working on average as many, if not more, hours per week than in 1930.

    Admittedly, some American jobs disappeared. But what in fact happened was that globalization transferred jobs to where they were more cost-effective given the technologies of the late 20th and early 21st centuries.

    And to a large extent it was the computing revolution of the past 30 years which enabled that just-in-time globalization of supply chains. However, as we’ve previously documented, even the job-killing effects of globalization are reversing.

    National security issues and demographic trends have suddenly mandated “reindustrialization” across the OECD countries, and especially in North America. This means “growth momentum” will shift from services to goods for the first time since World War II. And that shift demands increased productivity in the advanced economies.

    This era of reindustrialization will come just as the capabilities and costs of AI finally make it viable for large-scale, transformative deployment.

    This “phase change” will come as a shock to many policymakers and managers, because the initial deployment of rudimentary AI in ecommerce and other areas hardly “moved the needle” on key labor market performance indicators such as labor productivity and multifactor productivity growth.
    If, as Keynes would have expected, AI-driven technological change is enabling new means of economizing the use of labor to outrun the pace of finding new ways to use it, we would expect to see both statistics rising as AI becomes more prevalent.

    However, according to the exhibits in the printable issue, the exact opposite appears to have happened in a wide range of OECD countries. That is, productivity growth dropped just when AI emerged.

    In other words, we see AI and computing creating whole new industries, even as aggregate productivity growth has slowed to a glacial pace. How do we explain this paradox?

    To a large extent the problem involves our ability to measure and capture the value of inputs and outputs.

    Ever since the beginning of the industrial revolution in the late eighteenth century, society has been repeatedly misled by the so-called “lump of labor fallacy.” This widespread economic misconception was first documented in 1891 by economist David Schloss.

    The “lump of labor” fallacy describes the human bias toward assuming that both the amount of work to be done in an economy (that is, the job supply) and the number of people who want to work (that is, the demand for jobs from workers) are fixed: neither elastic nor subject to radical innovation. However, the evidence shows this is far from the case.

    Consider an example familiar to every Baby Boomer. The introduction of spreadsheets dropped the price of running “what-if business scenarios.” Rather than decimating the market for accountants and business analysts, spreadsheet software actually expanded it because the demand for scenarios was very responsive to price.

    Meanwhile, the resources freed from calculating simple cases were deployed in developing more complex scenarios to test.
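    The mechanics of that spreadsheet example can be sketched with a simple constant-elasticity demand curve. The prices, quantities, and elasticity below are hypothetical, chosen only to illustrate the direction of the effect: when demand is elastic, a large price drop expands total spending on analysis rather than shrinking it.

```python
# Hedged illustration with hypothetical numbers (not from the article):
# if demand for "what-if" scenarios is price-elastic, a large price drop
# can raise total spending on analysis rather than shrink it.

def quantity_demanded(price, base_price=100.0, base_qty=1_000.0, elasticity=-1.5):
    """Constant-elasticity demand curve: Q = Q0 * (P / P0) ** elasticity."""
    return base_qty * (price / base_price) ** elasticity

# Suppose spreadsheets cut the effective price of a scenario from $100 to $10.
before = 100.0 * quantity_demanded(100.0)   # total spending before
after = 10.0 * quantity_demanded(10.0)      # total spending after

print(before, after)  # spending rises because |elasticity| > 1
```

    With an assumed elasticity of -1.5, a ten-fold price drop roughly triples total spending on scenarios, which is the qualitative pattern the spreadsheet story describes.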

    Another, and likely bigger, factor in the aggregate productivity numbers is the difficulty in quantifying the productivity gains associated with Internet platform applications in general and “AI functionality” in particular.

    Applications like GPS navigation, Siri, Google Search, Google Translate, Alexa, and social media as well as various product recommendation engines, chatbots and configurators, create trillions of dollars in value for end-users each year, but are largely “free services” funded by marketing budgets. And since they usually provide “new-to-the-world functions,” their impact on the labor force is also difficult to assess.

    So, it’s safe to say that up to now the economic impact of AI has been significant, but difficult to quantify. And its impacts on commerce and on the workforce are different from those generally observed during the first four techno-economic revolutions.

    Specifically, recent developments indicate that AI is going to have its greatest and most immediate impact within the highest realms of the workforce. That means lower-paying and more mature industries are not the ones most likely to see disruption, at least in the short term.

    AI will eventually impact taxi drivers, warehouse workers and truck drivers as well as rank-and-file workers in manufacturing, retail, personal care and food service. However, the game-changing applications of the 2020s will be in the rarified fields of scientific research, medicine, software development and engineering as well as in the relatively new industry of ecommerce.

    And that’s good, because it’s those areas which have been most constrained by human limitations and the skills shortages of recent years.

    Consider the implications for lower-level workers, normally seen as most at risk from automation.

    For example, AI progress in terms of replacing drivers has largely stalled. There is simply too much liability associated with autonomous trucks and automobiles on the highways.

    So, even though the technology has been proven, sorting out the institutional environment in which AVs will operate remains the bottleneck. Autonomous vehicles can contain thousands of software-controlled sensors and processors interacting in a complex network with each other just to run the car or truck.

    These are required to perform flawlessly if the vehicle is to detect the near-infinite array of situations it may find itself in when interacting with pedestrians, pets, balls, weather, random hazards, and other vehicles using its right of way. And since some of those vehicles are human-controlled, they are less predictable than those run by software and hardware.

    One of the most important issues is assessing liability when something goes wrong. For that reason, transport licensing authorities remain unwilling to let these vehicles loose on the roads without having appropriate regulations in place.

    The Geneva and Vienna international conventions on road traffic (of 1949 and 1968, respectively) assume all vehicles on the road are controlled by a human being, and they ultimately hold that the human is responsible for any damage the vehicle or its power source may cause.

    But in the autonomous vehicle world, who is responsible if there isn’t a human driver? The vehicle owner may have some liability. But when the requisite safety features are software-governed, how can owners know or verify that the correct version of software is loaded and operational at any given moment? And who is the “owner”?

    If we take the approach used for cellphones and their apps, the users typically own the plastic and metal the phones are made of, and they have a contractual liability to pay a network operator for it to be connected. However, the smartphone user has negligible ownership rights to anything running on the phone. Until these institutional issues and any operational concerns are fully resolved, truck and Uber drivers have plenty of job security.

    The same is true for genuinely autonomous air taxis. Increasingly, it looks like the air taxi industry will emerge beginning in 2025 and grow very rapidly in the coming decades. However, these aircraft will initially have human pilots on board. Later, these pilots will operate multiple, semi-autonomous aircraft via remote control.

    The story is similar in most other industries. Despite the accelerating collapse in price-performance for sensor technology, cloud-based infrastructure, AI accelerator hardware, AI software tools and even robotic devices, we’re a long way from cost-effective replacements for factory workers and elder-care personnel. A new study by Goldman Sachs made this clear.

    In that economic study, Goldman Sachs created four possible scenarios for humanoid robot adoption over the next 13 years, largely targeted at factory and warehouse applications in the United States, as well as elder-care applications globally.

    The base case involved introducing a functional humanoid robot in 2025 at $250,000 per copy. That estimate was based on the general characteristics of Tesla’s Optimus robot prototype unveiled in September 2022; to find initial use cases, the company says Optimus will be trained in the factory over the next year.

    The average hourly wage for a Tesla factory worker working 8 hours a day is $23.75. By contrast, Optimus is assumed to work 20 hours a day, with 4 hours a day reserved for charging & maintenance.

    Under this base scenario, Optimus could reach a two-year payback period over 2025 to 2026. This indicates some feasibility for commercialization. The big question is, “What proportion of tasks within an automobile factory will such a humanoid robot be able to perform as well as or better than a human worker?”
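    The base-case arithmetic can be checked with a quick back-of-the-envelope sketch. It uses the figures above: a $250,000 unit cost, a $23.75 hourly wage, and 20 robot working hours per day. Continuous year-round utilization is a simplifying assumption of ours, not the study’s.

```python
# Rough check of the base-case payback arithmetic. Unit cost, wage, and
# daily operating hours come from the scenario; round-the-clock, year-round
# utilization is a simplifying assumption for illustration only.

ROBOT_COST = 250_000        # base-case unit cost, USD
WAGE_PER_HOUR = 23.75       # Tesla factory worker hourly wage, USD
ROBOT_HOURS_PER_DAY = 20    # 24 hours minus 4 for charging and maintenance

daily_savings = WAGE_PER_HOUR * ROBOT_HOURS_PER_DAY   # labor cost displaced per day
payback_days = ROBOT_COST / daily_savings
payback_years = payback_days / 365

print(round(payback_days), round(payback_years, 2))
```

    This idealized calculation yields a payback of roughly a year and a half, consistent with a two-year payback once less-than-perfect utilization and operating costs are factored in.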

    Given the increasing global labor shortage due to demographics between now and 2035, Goldman Sachs assumes that any difference in shipment volume under its “Global Humanoid Robot Base Case,” its “bull case,” and its “blue-sky case” is essentially a function of the robot’s cost.

    Under the base case, the unit cost of $250,000 is based on the Optimus bill of materials, and assumes Tesla purchases mid-to-high-end components at market prices for small volumes in 2025; after that a 15% per year cost reduction is assumed through 2035.

    Under the bull case, a 2025 unit cost of $50,000 was calculated assuming Tesla purchased the components at much lower prices by leveraging its broader vehicle-procurement team and attained the same 20% annual cost reduction documented for Tesla car production. Given the assumed elasticity of demand, this would enable Tesla to sell one million units a year by 2035.

    A so-called “Blue-Sky case” involves getting the cost to $20,000 per unit in the mid-2020s, as suggested by Elon Musk. This would theoretically kick-off “a global robot explosion” significantly impacting the expected U.S. and global labor shortage.
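    The cost trajectories can be compared with a small compounding sketch. The starting costs and annual reduction rates come from the scenarios above; the ten-year horizon (2025 to 2035) is our assumption, and the blue-sky case’s $20,000 figure is Musk’s stated target rather than a computed path, so it is omitted.

```python
# Sketch of the base and bull unit-cost trajectories described above.
# Starting costs and reduction rates come from the text; the 2025-2035
# ten-year compounding horizon is an assumption for illustration.

def cost_in_year(start_cost, annual_reduction, years):
    """Compound an annual percentage cost reduction over a number of years."""
    return start_cost * (1 - annual_reduction) ** years

base_2035 = cost_in_year(250_000, 0.15, 10)   # base case: $250k in 2025, -15%/yr
bull_2035 = cost_in_year(50_000, 0.20, 10)    # bull case: $50k in 2025, -20%/yr

print(round(base_2035), round(bull_2035))
```

    Under these assumptions, the base case only falls to about $49,000 by 2035, roughly where the bull case starts, which helps explain why the scenarios imply such different shipment volumes.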

    However, since hardly anyone other than Elon Musk believes this scenario makes any sense, the Trends editors conclude that humanoid robots will address only a small fraction of the expected factory and elder care worker shortage over the next two decades.

    Coupled with the earlier assessment from the transportation sector, this analysis indicates that relatively few blue-collar jobs will be taken over by AI. And that’s especially true for jobs that combine sophisticated human sensory skills with the need for mobility and handling of unexpected events.

    In short, while plumbers, brick masons, carpenters, nurses and heavy equipment operators may be increasingly assisted by AI-based tools, their ranks are likely to grow rather than shrink over the next two decades.

    So, is AI really going to transform our economy and generate enormous wealth? Yes. And paradoxically, it’s happening fastest exactly where the first four techno-economic revolutions barely made a dent.

    That is, AI is making its biggest inroads in highly sophisticated technical areas where human capabilities are most easily overwhelmed: for instance, disease diagnosis, drug discovery, molecular design, and materials design, as well as industrial, electrical, and mechanical engineering.

    Objectively speaking, this should not come as a surprise. AI is particularly useful at identifying patterns and relationships as well as engaging in rigorous trial-and-error assessments, which are the essence of science.

    When combined with appropriate “automated laboratories,” AI systems are amplifying the performance of human researchers by orders of magnitude. And suddenly, the formerly impossible is becoming routine!

    Given this trend, we offer the following forecasts for your consideration.

    First, large language models will remain at the vanguard of AI sophistication and adoption through at least 2028.

    Some people who rely on words to make a living, including editors, customer service personnel, and translators, are likely to be displaced from their current jobs by so-called Large Language Models like GPT-4 and LaMDA. That means it is likely that AI will begin to reduce employment for college-educated workers within the next five years.

    As this technology continues to advance, it will be able to perform tasks that were previously thought to require a high level of education and skill. This will lead to a displacement of workers in many industries, as companies look to cut costs by automating processes.

    The specifics vary by task and industry, but one thing is clear: AI will have a significant impact on the job market for college-educated workers. So, it will be important for individuals to monitor developments in AI and to consider how their skills and expertise can be leveraged in a world where machines are increasingly able to perform many tasks.

    Fortunately, this will not happen overnight, and Americans in many of those roles have already been replaced by offshore personnel. Nevertheless, now is the time for word-oriented white-collar workers to start diversifying their skills, especially into areas where their skills complement functions not amenable to AI. That might include managing skilled blue-collar workers.

    Second, in the 2020s, market dominance in artificial intelligence will shift from tech giants to an array of innovative start-ups, much as we saw with computing in the 1980s.

    Consider the latest developments in large language models, one of the hottest areas of AI research. Powerful models such as OpenAI’s GPT-4 and Google’s LaMDA can be used as chatbots as well as to search for information, moderate online content, summarize books, or generate entirely new passages of text based on prompts.

    Until now, these cutting-edge tools have been restricted and proprietary. However, BLOOM (which stands for BigScience Large Open-science Open-access Multilingual Language Model) was created over the last year by over 1,000 volunteer researchers in a project called BigScience.

    BLOOM was coordinated by an AI startup called Hugging Face, funded by the French government, and officially released on July 12, 2022. Now that it’s live, anyone can download BLOOM or tinker with it free-of-charge on Hugging Face’s website.

    Users can pick from a selection of 31 spoken languages and then type in requests for BLOOM to do tasks like writing recipes or poems, translating or summarizing texts, or writing computer code in 11 programming languages.
    Most importantly, AI developers can use BLOOM as a foundation to build their own applications. At 176 billion parameters, it is bigger than OpenAI’s 175-billion-parameter GPT-3, and BigScience claims that it offers levels of accuracy and toxicity similar to other models of the same size. Notably, for languages such as Spanish and Arabic, BLOOM is the first “full-scale” language model.

    Combined with an increasingly hostile regulatory and antitrust environment, this trend does not bode well for the continued dominance of today’s tech giants in the evolving AI space.

    Third, contrary to warnings of “job destroying AI,” the remainder of the 2020s will see demand soar for workers with “hard skills,” particularly in the United States.

    Plumbers, electricians, heavy equipment operators, truck drivers and automation technicians will all be in increasingly short supply as Baby Boomers retire and America dramatically upgrades its manufacturing base.

    For instance, in order to address the housing shortage driven by maturing Millennials and continuing immigration, the construction workforce, decimated after the 2008 housing crash, will be rebuilt and augmented with new technology.

    Fourth, self-driving cars and trucks are on the way, but they won’t eliminate many driving jobs until the mid-2030s or later.

    As explained earlier, institutional hurdles are proving far higher than expected, especially as related to issues of liability.

    Fifth, small autonomous aircraft piloted by AI will become commonplace by 2040.

    Piloted air taxis will emerge as soon as 2025, creating employment opportunities for a new kind of pilot who relies heavily on AI-based assistance. However, truly autonomous air taxis will enter commercial service much later. In fact, that’s likely to happen in the same time frame that regulators and consumers become comfortable with Level 5 self-driving cars and autonomous 18-wheelers taking over the highways.

    On the other hand, unmanned cargo drones will become commonplace in the late 2020s, starting with remote-controlled operations. As explained in prior issues, this will make a huge impact on package delivery economics.

    And it will create demand for a growing cadre of ground-based drone pilots and maintenance personnel. By 2030, fully autonomous AI-based cargo drones with remote human intervention (in case of emergency) will be widely deployed.

    Sixth, even if humanoid robots can achieve extraordinary price and performance targets, non-economic factors are likely to prevent them from making a significant dent in the labor shortage by 2035.

    As recently highlighted by AEI economist James Pethokoukis, issues related to trust, safety and privacy are among the hard-to-quantify impediments to rapid adoption. Fortunately, some settings and cultures are more psychologically amenable to early adoption of humanoid robots.

    For instance, American factories, which extol “innovation,” are likely to become early adopters in the 2020s. Meanwhile Japan has the best history of accepting such leading-edge innovations in the consumer domain. Once they prove themselves in such environments, humanoid robots will diffuse into the global economy when and if the economics make sense.

    Seventh, Artificial intelligence will enable health care to enter a new era of enormous breakthroughs.

    AI is already making rapid progress in supporting diagnosis and treatment of disease especially in the areas of radiology and genomics. But it is AI’s accelerating contributions to drug discovery which will have the biggest impact, both commercially and therapeutically.

    Suddenly, advances in price-performance and functionality have opened the door to formerly impossible breakthroughs. For example, two new AI applications called Chroma and RoseTTAFold recently became the first full-fledged solutions that can produce precise designs for a wide variety of proteins. Both are able to generate proteins with multiple degrees of symmetry, including proteins that are circular, triangular, or hexagonal.

    To test whether Chroma produced designs that could be made, the team at Generate Biomedicines took the amino acid strings that make up the protein and ran them through another AI program which found that 55% of them would fold into the structure generated by Chroma; this suggests that these are designs for viable proteins.

    The RoseTTAFold team went further by synthesizing some of the protein designs in their lab. One was a new protein that attaches to the parathyroid hormone, which controls calcium levels in the blood. According to the team’s leader, “We basically gave RoseTTAFold the hormone and nothing else.

    Then we told it to make a protein that binds to the hormone.” When they tested the novel protein in the lab, they found that it attached to the hormone more tightly than anything that could have been generated using any other computational methods, and more tightly than any existing drugs.

    This early success shows why it’s likely that a wave of new game-changing drug discoveries will enter clinical trials over the next five years. The result will be healthier and happier consumers as well as enormous revenues and profits for the industry. And,

    Eighth, AI will trigger explosive game-changing advances in materials science during the 2020s.

    Materials, such as stone, bronze, iron, steel, plastics, silicon and graphene have always defined the technological and economic possibilities which permit people to survive and thrive. Society’s capacity to solve global challenges is still constrained by our ability to design and make materials with the targeted functionality needed for computer chips, sensors, robots, electric vehicles and myriad other applications.

    Since it is not known where economically important materials might exist, the search amounts to a high-risk, complex and often long journey across the near-infinite space of materials created by combining the elements of the periodic table.

    Fortunately, new AI-based tools, such as Material.ai, are changing all that. These tools examine the characteristics and relationships of known materials at a scale inconceivable for humans. These characteristics and relationships are used to identify and numerically rank combinations of elements that are likely to form new materials with desired characteristics.

    Those rankings are used to guide exploration of unknown chemical spaces in a targeted way, making experimental investigation far more efficient. And it’s not just about being faster and cheaper.

    Until now, the default approach has been to design new materials by close analogy with existing ones, which usually leads to materials which are similar to ones we already have. On the other hand, the new AI-based tools discover truly new materials.

    And these new materials not only create societal benefit by enabling new technologies to tackle global challenges, but they also reveal new scientific phenomena and understandings.

    Those understandings then help train the next generation of AI. As with health care, AI’s contribution to materials science will create enormous value, leading to more jobs across the advanced economies.

    Resource List
    1. Trends. December 2022. The Trends Editors. Economic Realities Driving America’s AI-Based Reindustrialization.

    2. W. W. Norton & Co. 1963. John Maynard Keynes. Essays in Persuasion: pages 358-373.

    3. Law, Innovation and Technology: Volume 11. September 12, 2019. Nynke E. Vellinga. Automated driving and its challenges to international traffic law: which way to go?

    4. Faster, Please!. November 5, 2022. James Pethokoukis. Will the Next Big Thing be (finally) humanoid robots?

    5. MIT Technology Review. May 20, 2021. Karen Hao. The race to understand the exhilarating, dangerous world of language AI.