Things are looking good for Israel’s Mellanox Technologies, Ltd., a supplier of high-performance, end-to-end interconnect solutions for data center servers and storage systems. In the past week the company has been selected to take part in a $425 million project funded by the U.S. Department of Energy to construct supercomputers, had its technology chosen by the Minnesota Supercomputing Institute at the University of Minnesota, and today it announced the development of the world’s fastest EDR 100Gb/s InfiniBand switches.
The American project will be conducted at two of the country’s premier national labs, Oak Ridge National Laboratory and Lawrence Livermore National Laboratory, together with IBM and NVIDIA.
Mellanox’s EDR 100Gb/s solutions were selected as key components of new supercomputers at the two facilities. The hybrid supercomputer design will interconnect thousands of compute nodes containing both IBM POWER CPUs and NVIDIA GPUs via Mellanox’s EDR 100Gb/s InfiniBand-based solutions, providing one of the most advanced architectures of its kind for high-performance computing applications, the company said.
“Organizations and research facilities are required to process and analyze more information than ever before and to do it in less time,” said Michael Kagan, CTO, Mellanox Technologies. “Mellanox interconnect solutions deliver the highest performance and scalability, and provide the most advanced roadmap that paves the road to Exascale computing. We are excited to collaborate with IBM, NVIDIA, ORNL and LLNL to build the most advanced supercomputers in the world.”
In addition, Mellanox’s Switch-IB EDR 100Gb/s InfiniBand switch systems will be integrated into the HP-deployed, 712-node supercomputer at the Minnesota Supercomputing Institute at the University of Minnesota, making it the first large-scale EDR 100Gb/s InfiniBand cluster in the United States.
“Being a part of the first U.S. EDR 100Gb/s InfiniBand large-scale cluster gives the company great pride,” said Gilad Shainer, vice president of marketing at Mellanox Technologies. “Switch-IB’s low latency, unparalleled bandwidth, and removal of congestion and bottlenecks on the network enable MSI to increase application performance while dramatically reducing their operational expenses.”
Finally, the company set a world record for port-to-port latency of less than 90ns. Switch-IB has 36 ports of 100Gb/s, providing 7.2Tb/s of switching capacity with ultra-low latency and power consumption. Compared to the previous generation of InfiniBand switches, Switch-IB delivers nearly twice the throughput per port with half the latency, the company boasts.
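For readers wondering how 36 ports at 100Gb/s adds up to 7.2Tb/s, a quick back-of-the-envelope check works out, assuming the capacity figure counts both directions of each full-duplex port, as switch specifications customarily do:

```python
# Back-of-the-envelope check of Switch-IB's quoted 7.2Tb/s switching capacity.
# Assumption: the figure counts both directions of each full-duplex 100Gb/s port,
# which is the usual convention for switch capacity specs.
ports = 36
per_port_gbps = 100   # EDR InfiniBand line rate per port, per direction
directions = 2        # full duplex: transmit + receive

capacity_tbps = ports * per_port_gbps * directions / 1000
print(f"{capacity_tbps} Tb/s")  # -> 7.2 Tb/s
```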