Press release

Molex’s BittWare Adds Open Compute M.2 Accelerator Module to Growing Portfolio of FPGA-based Products

BittWare, a Molex company and the leading supplier of enterprise-class NVMe computational storage products featuring FPGA technology, today announced the launch of the 250-M2D Accelerator Module. This FPGA-based Computational Storage Processor (CSP) is designed to meet the new Open Compute M.2 Accelerator Module standard and to operate in Glacier Point carrier cards for Yosemite servers. These feature-rich, dense servers are favored by hyperscale and cloud companies striving to improve the performance density and energy efficiency of machine learning platforms.

The 250-M2D features a fully programmable Xilinx® Kintex® UltraScale+™ FPGA directly coupled to two banks of local DDR4 memory. It can be wholly programmed by customers developing in-house capabilities, or delivered as a ready-to-run, pre-configured solution featuring application IP from Eideticom, a recognized leader in the fast-growing computational storage market. These innovative solutions can be purchased directly from BittWare’s parent company, Molex, which serves many of the world’s largest data center customers. A version specifically for recommendation models, complete with software for easy integration with deep learning frameworks, is available from Myrtle.ai.

“Our continued investment in NVMe-based computational storage products, plus exciting collaborations with application experts such as Myrtle.ai, will help our customers innovate faster and with lower risk,” said Craig Petrie, vice president of marketing at BittWare. “FPGAs offer unique and valuable capabilities. It is essential that there is a supplier who can deliver and support high-quality volume deployments. BittWare, as part of Molex, is in the unique position to drive technology advancements while simultaneously delivering enterprise-class products.”

“When co-designing the hardware and software for our deep learning-based recommendation model accelerator, SEAL, we selected BittWare, part of the Molex group of companies,” said Peter Baldwin, CEO at Myrtle.ai. “It was a clear choice to take what we’d designed to volume. BittWare had the technical expertise to deliver an enterprise-class product and, with Molex’s global logistics, there was an established channel to support volume hardware for hyperscale and Tier 1 customers.”

To browse our product portfolio in detail, visit http://www.bittware.com/Storage. The 250-M2D is available to order today, with customer shipments during Q3 2020.

About BittWare

BittWare, a Molex company, provides enterprise-class compute, network, storage and sensor processing accelerator products featuring Achronix, Intel and Xilinx FPGA technology. These programmable products dramatically increase application performance and energy efficiency while reducing total cost of ownership. With 30 years’ experience developing FPGA accelerators, BittWare is the only FPGA vendor-agnostic supplier with the critical mass to address enterprise-class qualification, validation, lifecycle and support requirements for customers deploying FPGA accelerators in high volumes.

About Molex

Molex makes a connected world possible by enabling technology that transforms the future and improves lives. With a presence in more than 40 countries, Molex offers a full range of connectivity products, services and solutions for markets that include data communications, medical, industrial, automotive and consumer electronics. For more information, visit www.molex.com.

Molex is a registered trademark of Molex, LLC in the United States of America and may be registered in other countries; all other trademarks listed herein belong to their respective owners.

About Myrtle.ai

Myrtle.ai optimizes inference workloads such as recommendation models, one of the most common data center workloads. This enables businesses to rapidly scale and improve their services while reducing capital costs and energy consumption. For more information, please visit www.myrtle.ai.