As AI transforms enterprise operations across diverse industries, critical challenges continue to surface around data storage: no matter how advanced the model, its performance hinges on the ability to access vast amounts of data quickly, securely, and reliably. Without the right data storage infrastructure, even the most powerful AI systems can be brought to a crawl by slow, fragmented, or inefficient data pipelines.
This topic took center stage on Day One of VB Transform, in a session focused on medical imaging AI innovations spearheaded by PEAK:AIO and Solidigm. Together, alongside the Medical Open Network for AI (MONAI) project, an open-source framework for developing and deploying medical imaging AI, they are redefining how data infrastructure supports real-time inference and training in hospitals, from enhancing diagnostics to powering advanced research and operational use cases.
Innovating storage at the edge of clinical AI
Moderated by Michael Stewart, managing partner at M12 (Microsoft's venture fund), the session featured insights from Roger Cummings, CEO of PEAK:AIO, and Greg Matson, head of products and marketing at Solidigm. The conversation explored how next-generation, high-capacity storage architectures are opening new doors for medical AI by delivering the speed, security, and scalability needed to handle massive datasets in clinical environments.
Crucially, both companies have been deeply involved with MONAI since its early days. Developed in collaboration with King's College London and others, MONAI is purpose-built to develop and deploy AI models in medical imaging. The open-source framework's toolset, tailored to the unique demands of healthcare, includes libraries and tools for DICOM support, 3D image processing, and model pre-training, enabling researchers and clinicians to build high-performance models for tasks like tumor segmentation and organ classification.
A crucial design goal of MONAI was to support on-premises deployment, allowing hospitals to maintain full control over sensitive patient data while leveraging standard GPU servers for training and inference. This ties the framework's performance closely to the data infrastructure beneath it, requiring fast, scalable storage systems to fully support the demands of real-time clinical AI. This is where Solidigm and PEAK:AIO come into play: Solidigm brings high-density flash storage to the table, while PEAK:AIO specializes in storage systems purpose-built for AI workloads.
"We were very fortunate to be working early on with King's College in London and Professor Sebastien Orslund to develop MONAI," Cummings explained. "Working with Orslund, we developed the underlying infrastructure that allows researchers, doctors, and biologists in the life sciences to build on top of this framework very quickly."
Meeting dual storage demands in healthcare AI
Matson pointed out that he's seeing a clear bifurcation in storage hardware, with different solutions optimized for specific stages of the AI data pipeline. For use cases like MONAI and similar edge AI deployments, as well as for feeding training clusters, ultra-high-capacity solid-state storage plays a critical role: these environments are often space- and power-constrained, yet require local access to massive datasets.
For instance, MONAI was able to store more than two million full-body CT scans on a single node within a hospital's existing IT infrastructure. "Very space-constrained, power-constrained, and very high-capacity storage enabled some fairly remarkable results," Matson said. This kind of efficiency is a game-changer for edge AI in healthcare, allowing institutions to run advanced AI models on-premises without compromising performance, scalability, or data security.
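The single-node claim can be sanity-checked with back-of-envelope arithmetic. The per-study size and drive capacity below are assumptions for illustration; the article gives neither figure:

```python
# Rough sizing for ~2M full-body CT studies on one node. Both the
# average study size and the drive capacity are illustrative assumptions.
scans = 2_000_000
gb_per_scan = 0.5            # assumed average compressed CT study size, GB
drive_tb = 61.44             # an assumed ultra-high-capacity QLC SSD size

total_gb = scans * gb_per_scan
total_pb = total_gb / 1_000_000              # decimal units: 1 PB = 1e6 GB
drives = -(-total_gb // (drive_tb * 1_000))  # ceiling division

print(f"~{total_pb:.1f} PB across ~{int(drives)} x {drive_tb} TB drives")
```

Under these assumptions the whole dataset lands around a petabyte, which a small handful of today's highest-capacity SSDs can hold in a single chassis, consistent with the "single node" framing in the session.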
In contrast, workloads involving real-time inference and active model training place very different demands on the system. These tasks require storage solutions that can deliver exceptionally high input/output operations per second (IOPS) to keep up with the data throughput needed by high-bandwidth memory (HBM) and ensure GPUs remain fully utilized. PEAK:AIO's software-defined storage layer, combined with Solidigm's high-performance solid-state drives (SSDs), addresses both ends of this spectrum, delivering the capacity, efficiency, and speed required across the entire AI pipeline.
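The link between throughput and IOPS is simple arithmetic: aggregate bandwidth divided by I/O size. The per-GPU ingest rate and read size below are assumed numbers, not figures from the talk:

```python
# Illustrative IOPS requirement for keeping a GPU node fed; the per-GPU
# ingest rate and read size are assumptions, not figures from the session.
gpus = 8
gb_per_s_per_gpu = 2.0    # assumed sustained ingest per GPU
read_kb = 4               # small random reads typical of inference I/O

aggregate_gb_s = gpus * gb_per_s_per_gpu
iops = aggregate_gb_s * 1_000_000 / read_kb   # GB/s -> KB/s -> ops/s

print(f"{aggregate_gb_s:.0f} GB/s total -> {iops:,.0f} IOPS at {read_kb} KB reads")
```

The point of the sketch: at small random-read sizes, even modest aggregate bandwidth translates into millions of IOPS, which is why inference and training stress a storage system very differently from bulk capacity workloads.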
A software-defined layer for clinical AI workloads at the edge
Cummings explained that PEAK:AIO's software-defined AI storage technology, when paired with Solidigm's high-performance SSDs, enables MONAI to read, write, and archive massive datasets at the speed clinical AI demands. This combination accelerates model training and enhances accuracy in medical imaging while operating within an open-source framework tailored to healthcare environments.
"We provide a software-defined layer that can be deployed on any commodity server, transforming it into a high-performance system for AI or HPC workloads," Cummings said. "In edge environments, we take that same capability and scale it down to a single node, bringing inference closer to where the data lives."
A key capability is how PEAK:AIO helps eliminate traditional memory bottlenecks by integrating memory more directly into the AI infrastructure. "We treat memory as part of the infrastructure itself, something that's often overlooked. Our solution scales not just storage, but also the memory workspace and the metadata associated with it," Cummings said. This makes a significant difference for customers who can't afford, either in terms of space or cost, to re-run large models repeatedly. By keeping memory-resident tokens alive and accessible, PEAK:AIO enables efficient, localized inference without needing constant recomputation.
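The underlying idea, keeping previously computed token state resident so repeated inference skips recomputation, can be shown with a toy memoization sketch. This illustrates only the concept; it is not PEAK:AIO's implementation, and the names are invented:

```python
# Toy sketch of keeping computed token state memory-resident so a repeated
# request is served from memory instead of being recomputed. Conceptual
# illustration only; not PEAK:AIO's actual mechanism.
from functools import lru_cache

compute_calls = 0

@lru_cache(maxsize=None)            # stand-in for a resident token store
def encode_tokens(prefix: str) -> tuple:
    global compute_calls
    compute_calls += 1              # count expensive "model" invocations
    return tuple(ord(c) for c in prefix)  # placeholder computation

encode_tokens("ct-study-0001")      # computed once...
encode_tokens("ct-study-0001")      # ...then served from memory
print(compute_calls)                # 1
```

In a real deployment the cached state would be large model tensors (for example, attention key/value state) held in a scaled memory workspace rather than tuples in a Python dict, but the economics are the same: pay the compute cost once, then serve from memory.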
Bringing intelligence closer to the data
Cummings emphasized that enterprises will need to take a more strategic approach to managing AI workloads. "You can't be just a destination. You have to understand the workloads. We do some incredible technology with Solidigm and their infrastructure to be smarter on how that data is processed, starting with how to get performance out of a single node," Cummings explained. "So with inference being such a large push, we're seeing generalists becoming more specialized. And we're now taking work that we've done from a single node and pushing it closer to the data to be more efficient. We want more intelligent data, right? The only way to do that is to get closer to that data."
Some clear trends are emerging from large-scale AI deployments, particularly in newly built greenfield data centers. These facilities are designed with highly specialized hardware architectures that bring data as close as possible to the GPUs. To achieve this, they rely heavily on all solid-state storage, specifically ultra-high-capacity SSDs, designed to deliver petabyte-scale storage with the speed and accessibility needed to keep GPUs continuously fed with data at high throughput.
"Now that same technology is basically happening at a microcosm, at the edge, in the enterprise," Cummings explained. "So it's becoming critical to purchasers of AI systems to determine how you select your hardware and system vendor, even to make sure that if you want to get the most performance out of your system, that you're running on all solid-state. This allows you to bring huge amounts of data, like the MONAI example, it was 15,000,000 plus images, in a single system. This enables incredible processing power, right there in a small system at the end."
source: https://venturebeat.com/data-infrastructure/the-new-ai-infrastructure-reality-bring-compute-to-data-not-data-to-compute/

