Busan City Hall Book Summary
   Global Trends

  • Computing After Moore's Law: Where Are We Heading?

    The End of the Moore's Law Era
    One of the greatest engines of digital innovation over the past 60 years has been Moore's Law. In 1965, Gordon Moore, co-founder of Intel, observed that "the number of transistors on a chip doubles every two years," and this simple observation became the compass for the global semiconductor industry. Growing transistor counts brought faster speeds, better power efficiency, and lower manufacturing costs, enabling computers to become smaller, faster, and more affordable.
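
    As a rough illustration of the doubling rule quoted above, the sketch below projects transistor counts forward from an assumed 1971 baseline (roughly 2,300 transistors, the figure usually cited for the Intel 4004); the baseline and target years are illustrative assumptions, not figures from the text.

    ```python
    # Rough illustration of the "doubles every two years" rule quoted above.
    # The 1971 baseline (~2,300 transistors) is an assumed reference value.
    def projected_transistors(start_count, start_year, target_year, period_years=2):
        doublings = (target_year - start_year) / period_years
        return start_count * 2 ** doublings

    for year in (1971, 1981, 1991, 2001, 2011, 2021):
        print(year, f"{projected_transistors(2_300, 1971, year):,.0f}")
    ```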

    This advancement has driven transformations across industries, including finance, healthcare, telecommunications, energy, defense, climate modeling, and autonomous vehicles. However, since the mid-2010s, Moore's Law has been running up against physical limits. In processes below 5 nanometers, quantum tunneling, leakage current, and heat generation have become significant obstacles, and shrinking transistors no longer translates directly into performance gains. We have entered an era in which simply increasing transistor density is no longer enough to boost computing power.

    New Approaches Beyond Physical Limits
    The end of Moore's Law does not signal the end of computing advancement. Rather, it marks a turning point where computing technologies are expanding in various directions. These include special-purpose processors, new system architectures, 3D stacking technologies, and material innovations.

    Instead of relying solely on general-purpose CPUs, processors optimized for artificial intelligence (AI), graphics processing, and cryptographic calculations are gaining traction. Heterogeneous computing, which integrates CPUs, GPUs, and FPGAs within a single system to distribute tasks effectively, is maximizing overall efficiency.
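
    As a minimal sketch of that task-routing idea, the snippet below maps workload types to the processor class usually best suited to them; the task categories, routing table, and dispatch function are illustrative assumptions rather than any real scheduler's API.

    ```python
    # Minimal sketch of heterogeneous task routing: send each workload class
    # to the processor type best suited to it. All names here are hypothetical.
    ROUTING_TABLE = {
        "branchy_control_logic": "CPU",      # irregular, latency-sensitive work
        "dense_linear_algebra": "GPU",       # massively data-parallel work
        "fixed_streaming_pipeline": "FPGA",  # reconfigurable, low-latency streaming
    }

    def dispatch(task_kind: str) -> str:
        """Return the device class a scheduler might assign this task to."""
        return ROUTING_TABLE.get(task_kind, "CPU")  # default to the general-purpose CPU

    for kind in ROUTING_TABLE:
        print(kind, "->", dispatch(kind))
    ```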

    3D stacking technology vertically layers chips to shorten signal transmission distances and save space. This is becoming increasingly common in high-performance computing and compact devices such as smartphones.

    Moreover, new materials such as carbon nanotubes, graphene, and gallium nitride (GaN), known for high electron mobility and low heat generation, are drawing attention as the basis for next-generation semiconductors. These innovations aim not just to shrink transistors but to maximize performance through fundamentally different approaches.

    The Rise of Special-Purpose Chips in the AI Era
    The rapid development of AI requires new computational architectures. The computational demands of training large-scale neural network models are beyond what traditional CPUs can handle. As a result, GPUs, TPUs, and NPUs have emerged as key technologies.

    GPUs, with thousands of cores for parallel processing, are used not only for gaming graphics but also for AI training, cryptography, and financial simulations. NVIDIA's GPUs have become essential infrastructure for training and deploying generative AI models, especially since the AI boom of 2023. TPUs, custom-designed by Google, power Google Search, Translate, and other AI services that handle large-scale text and image data efficiently.

    NPUs are used in edge devices like smartphones and IoT devices to process AI computations in real-time. For example, Samsung's latest smartphones include Exynos NPUs developed in-house to perform offline tasks such as photo enhancement, voice recognition, and translation, enabling high-performance AI features while protecting personal data.

    Recently, neuromorphic computing, which mimics the structure of the human brain, has also gained attention. Intel's Loihi chip simulates synaptic behavior to perform AI calculations with ultra-low power consumption and is being experimentally used in robotics and autonomous driving. These AI-specialized chips are evolving beyond mere speed improvements to address power consumption and security issues, aiming for overall system optimization.

    The Possibilities and Challenges of Quantum Computing
    Quantum computing represents a new paradigm with the potential to drastically enhance computational capabilities, overcoming the limitations of traditional digital computing. While classical computers process information in binary states (0 or 1), quantum computers use qubits, which can exist in a superposition of both 0 and 1. Additionally, entanglement between qubits enables parallel computation, allowing certain problems to be solved much faster than with current supercomputers.
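
    The superposition and entanglement described above can be reproduced with a tiny state-vector calculation in plain NumPy; this is a generic textbook illustration using Hadamard and CNOT gates, not material from the book itself.

    ```python
    # A qubit state is a length-2 complex vector. A Hadamard gate turns |0>
    # into an equal superposition of |0> and |1>; following it with a CNOT
    # yields the entangled Bell state (|00> + |11>)/sqrt(2).
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    zero = np.array([1.0, 0.0])                    # |0>
    superposed = H @ zero                          # (|0> + |1>)/sqrt(2)
    bell = CNOT @ np.kron(superposed, zero)        # (|00> + |11>)/sqrt(2)

    print(np.round(superposed, 3))  # [0.707 0.707]
    print(np.round(bell, 3))        # [0.707 0.    0.    0.707]
    ```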

    One of the best-known applications is codebreaking. RSA encryption, which is widely used today, relies on the difficulty of factoring large numbers. However, quantum algorithms like Shor's algorithm can factor these numbers far more efficiently, posing a threat to current internet security systems. In response, countries are developing post-quantum cryptography, and the U.S. National Institute of Standards and Technology (NIST) has adopted quantum-resistant encryption standards.
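
    To make the RSA point concrete, the toy sketch below recovers the two prime factors of a textbook-sized modulus by brute force; real RSA moduli are 2,048 bits or more, which is exactly what makes classical factoring infeasible and Shor's algorithm a threat. The numbers are toy values chosen for illustration.

    ```python
    # Toy illustration: an RSA modulus n = p * q is public, but the private key
    # can only be derived by whoever knows p and q. Factoring succeeds here only
    # because n is tiny.
    from math import isqrt

    def trial_factor(n: int) -> tuple[int, int]:
        """Brute-force factoring -- feasible only for very small n."""
        for p in range(2, isqrt(n) + 1):
            if n % p == 0:
                return p, n // p
        raise ValueError("n is prime")

    n = 3233                 # 53 * 61, the classic textbook RSA modulus
    print(trial_factor(n))   # (53, 61)
    ```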

    Quantum computing also holds promise for drug discovery. Accurately simulating molecular structures is a complex task even for supercomputers, but quantum simulations can process this complexity more rapidly. Companies such as Merck in Germany and Pfizer in the U.S. are already working with IBM and D-Wave to experiment with quantum-based drug modeling.

    Nevertheless, quantum computing still faces technical challenges such as qubit instability, error correction, and cryogenic system requirements. Competing approaches—including silicon qubits, superconducting qubits, ion traps, and topological qubits—are still vying for commercial viability. Despite this, major countries and corporations are making significant investments, with expectations of practical applications emerging around 2030.

    The Evolution of Software and Algorithms
    As hardware advances slow, software continues to hold tremendous potential. Compiler technology, operating systems, and algorithm optimization can improve performance severalfold on the same hardware. A basic example is how bubble sort and quick sort, though solving the same problem, differ vastly in time complexity and execution speed; choosing the right algorithm alone can significantly improve performance.
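
    A minimal sketch of that algorithmic-choice point: both functions below sort the same random list, but the O(n^2) bubble sort takes dramatically longer than quicksort, which runs in O(n log n) time on average.

    ```python
    # Same problem, very different running times: bubble sort is O(n^2),
    # quicksort averages O(n log n).
    import random
    import time

    def bubble_sort(a):
        a = list(a)
        for i in range(len(a)):
            for j in range(len(a) - 1 - i):
                if a[j] > a[j + 1]:
                    a[j], a[j + 1] = a[j + 1], a[j]
        return a

    def quick_sort(a):
        if len(a) <= 1:
            return list(a)
        pivot = a[len(a) // 2]
        return (quick_sort([x for x in a if x < pivot])
                + [x for x in a if x == pivot]
                + quick_sort([x for x in a if x > pivot]))

    data = [random.random() for _ in range(5000)]
    for sort_fn in (bubble_sort, quick_sort):
        start = time.perf_counter()
        sort_fn(data)
        print(sort_fn.__name__, f"{time.perf_counter() - start:.3f}s")
    ```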

    Recent attention has been drawn to automatic parallelization and memory optimization technologies. High-performance simulations, 3D rendering, and financial modeling benefit greatly from multithreaded processing. Modern compilers analyze source code to remove unnecessary operations and reorder instructions to improve cache efficiency and power consumption.
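
    As a hedged sketch of the parallel-processing idea, the snippet below spreads an embarrassingly parallel stand-in workload across worker processes (processes rather than threads, so CPU-bound work in CPython actually runs in parallel); the workload itself is a placeholder, not a real simulation.

    ```python
    # Split an independent, CPU-bound workload across processes. `simulate_chunk`
    # is a stand-in for one slice of a real simulation or rendering job.
    from concurrent.futures import ProcessPoolExecutor

    def simulate_chunk(seed: int) -> float:
        x = float(seed) + 1.0
        for _ in range(200_000):             # busy numeric loop standing in for real work
            x = (x * 1.000001) % 1_000_003.0
        return x

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:  # one worker per CPU core by default
            results = list(pool.map(simulate_chunk, range(8)))
        print(f"{len(results)} chunks completed")
    ```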

    Software optimization using AI is also growing rapidly. For instance, AI compilers like Facebook's Glow and Google's XLA analyze trained model structures and generate hardware-specific execution code. These technologies not only support high-performance computing but also enhance AI functionality in small devices like smartphones and microcontrollers.
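
    One publicly documented example of this pattern is JAX, whose jit decorator hands a traced Python function to the XLA compiler and gets back code specialized for the available backend (CPU, GPU, or TPU). The small dense layer below is a generic illustration of that workflow, not the specific Glow or XLA integration the text refers to.

    ```python
    # jax.jit traces the function once, compiles it with XLA for the available
    # backend, and reuses the compiled code on subsequent calls.
    import jax
    import jax.numpy as jnp

    @jax.jit
    def dense_layer(x, w, b):
        return jax.nn.relu(x @ w + b)

    key = jax.random.PRNGKey(0)
    x = jax.random.normal(key, (32, 128))
    w = jax.random.normal(key, (128, 64))
    b = jnp.zeros(64)
    print(dense_layer(x, w, b).shape)  # (32, 64)
    ```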

    Software remains a vital area for computing performance innovation even without new hardware, and co-design between hardware and software is expected to become a core strategy for future computing efficiency.

    The Expansion of Edge and Distributed Computing
    Edge computing processes data at its point of origin, reducing latency and easing the burden on central networks. A prime example is autonomous vehicles, which must process hundreds of camera images, LiDAR data, and sensor inputs per second. Sending this data to a central server would cause delays, so onboard edge processors analyze data in real time and synchronize with servers only when necessary.
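
    A hedged sketch of that pattern: each frame is analyzed on the device with a cheap local check, and only the frames that need attention are synchronized with the server. The function names and the 5-meter threshold are hypothetical placeholders, not any vendor's API.

    ```python
    # Edge pattern: decide locally, contact the server only when necessary.
    # `analyze_frame` and `upload_to_server` are hypothetical placeholders.
    def analyze_frame(frame: dict) -> bool:
        """Cheap on-device check; True means the frame needs server attention."""
        return frame.get("obstacle_distance_m", 999.0) < 5.0

    def upload_to_server(frame: dict) -> None:
        print("syncing anomalous frame:", frame)  # placeholder for a network call

    def process_stream(frames):
        for frame in frames:
            if analyze_frame(frame):              # decided on-device, no round trip
                upload_to_server(frame)           # sync only when necessary

    process_stream([{"obstacle_distance_m": 12.0}, {"obstacle_distance_m": 3.2}])
    ```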

    In healthcare, telemedicine equipment and wearable devices analyze biosignals in real time to detect abnormalities and send alerts. Edge processors monitor heart rate, oxygen saturation, and temperature changes, enabling immediate responses.
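
    A similarly minimal sketch for the wearable case: compare each reading against a threshold table and raise an alert locally when a value drifts out of range. The ranges below are illustrative placeholders, not clinical reference values.

    ```python
    # Illustrative threshold monitor for wearable biosignals.
    NORMAL_RANGES = {
        "heart_rate_bpm": (50, 110),
        "spo2_percent": (92, 100),
        "temperature_c": (35.5, 38.0),
    }

    def check_vitals(reading: dict) -> list[str]:
        alerts = []
        for vital, (low, high) in NORMAL_RANGES.items():
            value = reading.get(vital)
            if value is not None and not (low <= value <= high):
                alerts.append(f"{vital} out of range: {value}")
        return alerts

    print(check_vitals({"heart_rate_bpm": 128, "spo2_percent": 96, "temperature_c": 36.6}))
    ```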

    Edge computing is also vital for privacy. Functions like voice recognition, facial recognition, and photo classification on smartphones are increasingly processed locally to reduce hacking risks. Apple, for example, installs neural processing units in iPhones to handle AI tasks directly on the device.

    Going forward, the roles of cloud, edge, and local devices will become more sophisticated and integrated through distributed computing platforms. The importance of edge computing will continue to grow due to the proliferation of AI, real-time responsiveness, and energy efficiency.

    New Computing That Transforms Industrial Landscapes
    The end of Moore's Law does not mean the digital industry is in decline. Rather, emerging technologies are fundamentally transforming various sectors. In finance, high-performance computing is essential for real-time market analysis. In healthcare, AI-based precision diagnostics, genome analysis, and treatment simulations are becoming commonplace.

    Large-scale computational capabilities are also central to infrastructure in fields such as climate modeling, renewable energy distribution, and smart transportation systems. Future industries will require not only high hardware performance but also computing capabilities that consider energy efficiency, reliability, and intelligence.

    The Future and Strategic Response of Korea's Semiconductor Industry
    South Korea holds a competitive edge in memory semiconductors globally. However, in the post-Moore's Law era, manufacturing alone is no longer enough to maintain this advantage. A strategic shift toward next-generation computing technologies and ecosystem development for system semiconductors is essential.

    First, Korea must enhance its design and production capabilities in system semiconductors, including application processors (APs), power semiconductors, and AI accelerators. The U.S. dominates with fabless giants like NVIDIA, AMD, and Apple, while Taiwan leads advanced foundry technology with TSMC. Although Samsung Electronics manages both system semiconductor and foundry operations, Korea lacks a robust ecosystem of fabless SMEs. To address this, the government and private sector must jointly support tech startups, talent development, IP acquisition, and prototyping infrastructure.

    Second, securing leadership in next-generation memory technologies—such as processing-in-memory (PIM), magnetoresistive RAM (MRAM), and resistive RAM (ReRAM)—is vital. Samsung, in particular, is pioneering PIM technology to boost AI server performance and may set future AI semiconductor standards.

    Third, localization of materials, parts, and equipment is crucial. Japan's 2019 export restrictions exposed vulnerabilities in Korea's semiconductor supply chain. While localization efforts have since intensified, dependence on imports remains high for critical components and tools. Supporting domestic equipment companies and acquiring foreign technologies through partnerships or M&A is essential.

    Lastly, consistent policy support is needed. The U.S. provides massive subsidies through the CHIPS Act, and both Europe and China have designated semiconductors as strategic industries. Korea must also offer long-term support through expanded tax credits, land access, and technical education programs.

    Toward a New Era of Computing
    Moore's Law may have come to an end, but the evolution of computing has just begun. Innovations in materials, architectures, algorithms, and distributed structures are advancing rapidly, creating new value in intelligence, efficiency, and reliability beyond raw speed.

    Future computing will move from "smaller and faster" to "smarter and more flexible." This shift is not only a technical evolution but a fundamental transformation of industrial and social structures.