Busan City Hall Book Summary
   Media Briefings

åǥÁö





  • [Global Technology Briefings]

    Thousands of Conductance Levels in Memristors Integrated on CMOS

    By Mingyi Rao, NATURE, March 23, 2023

    Today, everyone is talking about artificial intelligence and the power of neural networks. They often forget that this software is limited by the hardware on which it runs. And it is hardware that has become "the bottleneck."

    That's because the demands of the software have outrun the advance of the hardware. Over the past 30 years, the size of the neural networks needed for AI and data science applications has doubled every 3.5 months, on average.

    Meanwhile, the hardware capability needed to run them doubled only every 3.5 years. As a result, hardware presents a more and more severe problem for AI.
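    To make that mismatch concrete, here is a back-of-the-envelope sketch (assuming the doubling rates quoted above hold steady) of how far demand pulls ahead of hardware over a ten-year window:

```python
# Back-of-the-envelope sketch of the gap described above, assuming
# network size doubles every 3.5 months and hardware capability
# doubles every 3.5 years (42 months), as the article states.

MONTHS = 10 * 12  # a ten-year window

network_doublings = MONTHS / 3.5          # ~34 doublings of demand
hardware_doublings = MONTHS / (3.5 * 12)  # ~3 doublings of hardware

gap = 2 ** (network_doublings - hardware_doublings)
print(f"demand/hardware ratio after 10 years: ~2^"
      f"{network_doublings - hardware_doublings:.0f}")
```

Even over a single decade, demand outgrows hardware by a factor of roughly two to the thirty-first power, which is why the article calls hardware the bottleneck.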

    Governments, industry, and academia are trying to address this hardware challenge worldwide. Some continue to work on hardware solutions with silicon chips, while others are experimenting with new types of materials and devices.

    New work from researchers at USC, MIT, and the University of Massachusetts falls in the middle. Their new research, just published in Nature, focuses on exploiting and combining the advantages of new materials with traditional silicon technology in order to support heavy AI and data science computation.

    The new research focuses on the fundamental physics that leads to the drastic increase in memory capacity needed for AI hardware. Experiments at TetraMem, a startup company co-founded by the authors, demonstrate the practicality of using this protocol in integrated chips intended to commercialize AI acceleration technology.

    According to the researchers, this new memory chip (at 11 bits per device) has the highest information density per device among all known types of memory technologies thus far. And this new chip technology is intended not just for memory, but also for the processor.
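    The "11 bits" figure maps directly onto the number of distinguishable conductance levels each device must hold, since n bits require 2^n levels:

```python
# 11 bits per device means each memristor must hold 2**11
# distinguishable conductance levels.
bits = 11
levels = 2 ** bits
print(levels)  # 2048 conductance levels

# For comparison: conventional multi-level flash cells typically
# store 1-4 bits per cell, i.e. only 2 to 16 levels.
```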

    Millions of these components, working in parallel in a small chip, could rapidly run your AI tasks while requiring only a small battery to power the device.

    The new chips combine silicon with metal oxide memristors to create powerful, low-energy hardware. This technology uses the positions of atoms to represent information, rather than the number of electrons, which is the current technique used in computations on chips.

    The positions of the atoms offer a compact and stable way to store more information in an analog fashion, rather than in a digital fashion. Moreover, the information can be processed where it is stored, instead of being sent to a limited number of dedicated "processors."

    This eliminates the so-called 'von Neumann bottleneck' existing in current computing systems. As a result, computing for AI becomes "more energy efficient with a higher throughput."
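    A rough sketch of the idea (not the authors' implementation): in a memristor crossbar, each device stores a conductance value, and applying input voltages lets the array compute a matrix-vector product in place via Ohm's and Kirchhoff's laws. The current collected on each row is the dot product of that row's conductances with the input voltages:

```python
# Idealized crossbar model: G[i][j] is the conductance stored in the
# device at row i, column j; V[j] is the voltage applied to column j.
# The current summed on row i is I[i] = sum_j G[i][j] * V[j], so the
# array computes a matrix-vector product right where the data lives.

def crossbar_mvm(G, V):
    """Ideal (noise-free) crossbar matrix-vector multiply."""
    return [sum(g * v for g, v in zip(row, V)) for row in G]

# Toy 2x3 conductance matrix and input voltages (arbitrary units).
G = [[0.25, 0.5, 0.75],
     [1.0, 1.25, 1.5]]
V = [2.0, 1.0, 4.0]
print(crossbar_mvm(G, V))  # [4.0, 9.25]
```

In a digital system the matrix would have to be fetched from memory into a processor for every multiply; here the stored conductances do the arithmetic themselves, which is what sidesteps the von Neumann bottleneck.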

    How does this new chip work, and why is it better? Instead of storing information using electrons, the new technology stores it using whole atoms. When you turn off a computer, the information in memory is usually gone.

    But if you need that memory to run a new computation, and your computer needs the same information all over again, you have lost both the time and energy needed to reload it. The new method relies on activating atoms rather than electrons, and it does not require battery power to maintain stored information.

    This is especially relevant for scenarios common in AI computation, where a stable memory capable of high information density is crucial. According to the researchers, this new technology may enable powerful AI capability in "edge devices," such as Google Glass and smartwatches, which previously suffered from frequent recharging issues.

    Furthermore, by converting chips to rely on atoms as opposed to electrons, chips become smaller. With this new technology, there is more computing capacity at a smaller scale. And this technology could offer "many more levels of memory to help increase information density."

    To put it in context: right now, ChatGPT runs in the cloud. The new TetraMem innovation, followed by some further development, could put a mini version of ChatGPT in everyone's personal device. So it could make such high-powered AI more affordable and accessible for all sorts of applications.

    NATURE, March 23, 2023, "Thousands of conductance levels in memristors integrated on CMOS," by Mingyi Rao, et al. © 2023 Springer Nature Limited. All rights reserved.

    To view or purchase this article, please visit: