Samsung announced that it has developed what it calls the world’s first High Bandwidth Memory (HBM) with the Korean tech giant’s own artificial intelligence (AI) processing capability built in. Officially known as HBM-PIM, the new memory architecture is designed for data centres and HPC systems rather than consumer-ready applications.
What sets the HBM-PIM apart from other computing systems and memory units is that Samsung’s new component brings processing power directly to where the data is stored, relying on parallel processing and minimising data movement. Simply put, the traditional method of using a separate processor and memory to conduct complex calculations (also known as the von Neumann architecture) is considerably slower than what the new memory architecture brings to the table.
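The data-movement argument above can be sketched in a few lines of code. The following is a purely illustrative toy model (not Samsung’s actual design): it counts how many operands must cross the memory bus when a central processor does all the work, versus when each memory bank reduces its own data in place and ships back only a partial result.

```python
# Toy model of the PIM data-movement advantage (illustrative only;
# bank counts and the workload are arbitrary assumptions).

def von_neumann_sum(banks):
    """Classic flow: move every element across the bus to the CPU, then add."""
    transfers = 0
    total = 0
    for bank in banks:
        for value in bank:
            transfers += 1   # each operand crosses the memory bus
            total += value   # the CPU does all the arithmetic
    return total, transfers

def pim_sum(banks):
    """PIM-style flow: each bank reduces its own data in place (in parallel
    on real hardware); only one partial result per bank crosses the bus."""
    partials = [sum(bank) for bank in banks]  # computed inside each bank
    transfers = len(banks)                    # one result per bank moves
    return sum(partials), transfers

banks = [[1, 2, 3, 4]] * 8  # 8 memory banks of 4 values each
vn_total, vn_moves = von_neumann_sum(banks)
pim_total, pim_moves = pim_sum(banks)
assert vn_total == pim_total  # both flows compute the same answer
print(vn_moves, pim_moves)    # 32 bus transfers vs 8
```

The arithmetic is identical either way; what changes is how much data travels between memory and the processor, which is where a real PIM design saves time and power.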
Additionally, when the HBM-PIM is paired with Samsung’s existing HBM2 Aquabolt solution, the company says the new architecture can deliver more than twice the system performance while cutting power consumption by approximately 70%.
At the time of writing, Samsung says the HBM-PIM is being tested inside AI accelerators operated by its partners.