WEKApod Nitro and WEKApod Prime Provide Customers with Flexible, Affordable, Scalable Solutions to Fast-Track AI Innovation
WekaIO (WEKA), the AI-native data platform company, unveiled two new WEKApod™ data platform appliances: the WEKApod Nitro for large-scale enterprise AI deployments and the WEKApod Prime for smaller-scale AI deployments and multi-purpose high-performance data use cases. WEKApod data platform appliances provide turnkey solutions combining WEKA® Data Platform software with best-in-class high-performance hardware to deliver a powerful data foundation for accelerated AI and modern performance-intensive workloads.
The WEKA Data Platform delivers scalable AI-native data infrastructure purpose-built for even the most demanding AI workloads, accelerating GPU utilization and retrieval-augmented generation (RAG) data pipelines efficiently and sustainably while providing efficient write performance for AI model checkpointing. Its advanced cloud-native architecture enables ultimate deployment flexibility, seamless data portability, and robust hybrid cloud capability.
WEKApod delivers all the capabilities and benefits of WEKA Data Platform software in an easy-to-deploy appliance ideal for organizations leveraging generative AI and other performance-intensive workloads across a broad spectrum of industries. Key benefits include:
WEKApod Nitro: Delivers exceptional performance density at scale, with over 18 million IOPS in a single cluster, making it ideal for large-scale enterprise AI deployments and for AI solution providers training, tuning, and inferencing LLM foundation models. WEKApod Nitro is certified for NVIDIA DGX SuperPOD™. Capacity starts at half a petabyte of usable data and is expandable in half-petabyte increments.
WEKApod Prime: Seamlessly handles high-performance data throughput for HPC, AI training, and inference, making it ideal for organizations that want to scale their AI infrastructure while maintaining cost efficiency and balanced price-performance. WEKApod Prime offers flexible configurations that scale up to 320 GB/s read bandwidth, 96 GB/s write bandwidth, and up to 12 million IOPS for customers with less extreme data processing requirements. Organizations can customize configurations with optional add-ons, so they pay only for what they need and avoid overprovisioning unnecessary components. Capacity starts at 0.4PB of usable data, with options extending up to 1.4PB.
“Accelerated adoption of generative AI applications and multi-modal retrieval-augmented generation has permeated the enterprise faster than anyone could have predicted, driving the need for affordable, highly performant, and flexible data infrastructure solutions that deliver extremely low latency, drastically reduce the cost per token generated, and can scale to meet the current and future needs of organizations as their AI initiatives evolve,” said Nilesh Patel, chief product officer at WEKA. “WEKApod Nitro and WEKApod Prime offer unparalleled flexibility and choice while delivering exceptional performance, energy efficiency, and value to accelerate customers’ AI projects anywhere and everywhere they need them to run.”