140 of the November 2002 TOP500 use Myrinet
------------------------------------------------------------------------------

Recent TOP500(TM) lists show large growth in clusters and widespread use of
Myrinet(TM) technology.

Myricom's Myrinet cluster-interconnect technology made another excellent
showing in the November 2002 TOP500 list. This list includes 28 Beowulf-style
Myrinet clusters and an additional 112 HP Superdome/Hyperplex systems.
HyperFabric(TM) and Hyperplex(TM) are HP brands for Myrinet communication
between HP Superdome(TM) SMP hosts. A total of 140, or 28%, of the November
TOP500 supercomputer sites use Myrinet technology. Fifteen of the top 100
systems are Beowulf-style Myrinet clusters, led by a 768-node, 1536-processor
system at the Forecast Systems Laboratory, NOAA, ranking #8 at 3337 Gflops.

"The Myricom technical team is very pleased with the concrete evidence from
recent TOP500 lists that clusters and Myrinet are having a growing and
positive impact on high-performance technical computing," commented Chuck
Seitz, CEO and CTO of Myricom.

The TOP500 list (www.top500.org), published twice a year at the ISC
conferences in June and at the SC conferences in November, ranks
supercomputers worldwide according to their performance on the LINPACK
benchmark. This 20th TOP500 list was published on 15 November prior to SC2002
in Baltimore. The TOP500 authors also maintain a special list for clusters,
"Clusters@TOP500," at clusters.top500.org.
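Since all of the rankings quoted below are LINPACK figures, a brief note on
how such a figure is obtained may be helpful: the benchmark solves a dense
n-by-n linear system, and the reported rate divides the commonly quoted
LINPACK operation count of 2/3*n^3 + 2*n^2 by the wall-clock time. The sketch
below (in Python) illustrates this accounting; the problem size and runtime
are made-up values for illustration only, not taken from any TOP500 entry.

    # Sketch of how a LINPACK Gflops figure is computed.
    # The 2/3*n^3 + 2*n^2 operation count is the usual LINPACK convention;
    # the problem size and runtime below are hypothetical.

    def linpack_gflops(n, seconds):
        """LINPACK rate in Gflops for solving an n-by-n dense system."""
        flops = (2.0 / 3.0) * n**3 + 2.0 * n**2  # nominal operation count
        return flops / seconds / 1.0e9

    # Example with hypothetical numbers: a 50,000-unknown solve finishing
    # in 60 seconds would be reported as roughly 1389 Gflops.
    print(f"{linpack_gflops(50_000, 60):.0f} Gflops")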
The TOP500 list nearly defines what is meant by a "supercomputer" and has
revealed some interesting trends over the last decade. Clusters are now the
fastest-growing category of supercomputers in the TOP500. Clusters started to
appear in the TOP500 list in 1997 with the Berkeley NOW (Network of
Workstations), a Myrinet cluster of 100 UltraSPARC-I computers that ranked
#479 in the June 1997 TOP500 list with a LINPACK performance of 10.14 Gflops.
According to the classification used by the TOP500 authors, there are 93
clusters (18.6%) in the November 2002 list, up from 80 clusters in the June
2002 list, 43 clusters in the November 2001 list, and 33 clusters in the June
2001 list. Although nearly all of the TOP500 are distributed-memory systems,
the TOP500 authors reserve the cluster classification for systems in which
the interconnect is not proprietary. Thus, a system such as the IBM SP, which
is a cluster architecture that uses an IBM-proprietary interconnect, is
classified as an MPP rather than as a cluster.

Here is a rundown of the 15 Beowulf-style Myrinet clusters in the top 100 of
the November 2002 TOP500 list. All 15 of these systems use Linux(TM), and
most of the newer and higher-ranked systems use Intel(TM) Xeon(TM)
processors. This list is interesting for the wide geographic distribution of
these clusters, their variety of purposes, and the diversity of suppliers,
including five self-made clusters.

#8 at 3337 Gflops is a 768-node cluster of dual 2.2GHz Xeon hosts at the
Forecast Systems Laboratory of NOAA. HPTi and Aspen Systems were the
integrators. This newest Myrinet/Linux cluster at FSL, like its predecessors,
is used for weather forecasting.

#17 at 2207 Gflops is the 512-node "SuperMike" cluster of dual 1.8GHz Xeon
hosts at Louisiana State University (LSU). SuperMike was named for
Louisiana's Governor M. J. "Mike" Foster, Jr., who was instrumental in
starting an IT initiative to upgrade the state's computing resources for
research and education. This Myrinet/Linux cluster was supplied by Atipa
Technologies.

#22 at 2004 Gflops is a 300-node Myrinet/Linux cluster of Dell PowerEdge 2650
(dual 2.4GHz Xeon) servers at the Center for Computational Research,
University at Buffalo, SUNY. This cluster's LINPACK performance is an
unusually high fraction of the Xeon's peak performance due to the use of BLAS
specially coded for the Xeon. The OEM/integrator for this cluster was Dell.

#32 at 1272 Gflops is the Vplant cluster supplied by Dell to Sandia National
Laboratories in Albuquerque. This Myrinet/Linux cluster of 330 dual 2.0GHz
and 2.4GHz Xeons is used for visualization, but it also has plenty of compute
performance.

#43 at 1046 Gflops is a 256-node dual 2.0GHz Xeon cluster at the Academy of
Mathematics and Systems Science in Beijing, China. The Legend Group was the
integrator of this Myrinet/Linux cluster.

#46 at 1007 Gflops is the new LCRC cluster at Argonne National Laboratory.
This Myrinet/Linux cluster of 361 2.4GHz Xeon processors was supplied by
Linux NetworX.

#48 at 996.9 Gflops is the Cplant(TM) cluster at Sandia National Laboratories
in Albuquerque. This pioneering, self-made Myrinet/Linux cluster used 1,800
single-processor Alpha hosts for this TOP500 benchmark. Cplant first appeared
in the November 1998 TOP500 list, ranked #97 at 54.24 Gflops achieved with
150 Alphas.

#64 at 825 Gflops is the HELICS cluster at the Interdisciplinary Center for
Scientific Computing (IWR) of the University of Heidelberg, Germany. This
cluster of 256 dual 1.4GHz Athlon(TM) hosts (512 processors) was installed by
Megware in March 2002. At #35, HELICS was the top Myrinet cluster in the June
2002 TOP500 list. (All of the clusters listed above are new or expanded since
the June 2002 TOP500 list.)

#68 at 760.2 Gflops is the self-made Presto III cluster at TITECH (Tokyo
Institute of Technology). This constantly growing Myrinet/Linux cluster now
has 496 Athlon processors at up to 1.6GHz.

#74 at 734.6 Gflops is the MVS1000M cluster at the Joint Supercomputer Center
in Moscow. This self-made Myrinet/Linux cluster of 384 dual 667MHz Alpha
EV-67s is the most powerful computer in Russia. This same system ranked #64
at 564 Gflops in the June 2002 list, but its performance has since increased
thanks to software optimizations.

#82 at 677.9 Gflops is the Titan cluster at NCSA. IBM was the OEM/integrator
for this Myrinet/Linux cluster of 160 IBM dual 800MHz Itanium-1(TM) hosts
(320 processors). Titan first appeared in the November 2001 TOP500 list,
ranked #34.

#86 at 654 Gflops is the Magi cluster at the Tsukuba Advanced Computing
Center in Tsukuba, Japan. This Myrinet/Linux cluster of 520 NEC dual 933MHz
Pentium-III hosts (1040 processors) was integrated by NEC in 2001 and, like
its sister system below, runs SCore software developed at RWCP.

#90 at 618.3 Gflops is the SCore IIIe cluster at the Real World Computing
Partnership, Tsukuba, Japan. This self-made Myrinet/Linux cluster of 512 NEC
dual 933MHz Pentium-III hosts (1024 processors) was installed in 2001 and
ranked #36 in the June 2001 TOP500 list.

#91 at 594 Gflops is the Platinum Netfinity cluster at NCSA. This
Myrinet/Linux cluster of 512 IBM dual 1GHz Pentium-III hosts (1024
processors) was installed by IBM in 2001 and ranked #30 in the June 2001
TOP500 list.

To see the benefits of Myrinet versus Ethernet for the relatively tightly
coupled LINPACK computation, compare Platinum with the otherwise identical
Ethernet clusters ranking #221-#224 in the November 2002 TOP500. With
interprocess communication limited by Ethernet, those clusters achieved 289
Gflops, less than half the performance of Platinum, a Myrinet cluster.

#95 at 575 Gflops is a self-made Myrinet/Linux cluster of 128 Appro 1200X
dual 2.2GHz Xeon hosts at the Naval Research Laboratory in Washington, DC.

One new system that just barely missed the deadline for the November 2002
TOP500 entries is an HP cluster of 128 dual 900MHz Itanium-2(TM) hosts at the
Ohio Supercomputer Center (www.osc.edu). The 761.98 Gflops LINPACK
performance of this Myrinet/Linux cluster is a superb 82.68% of peak, a great
demonstration of the capabilities of Itanium-2 systems in clusters with
Myrinet components and software.
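As a rough cross-check of the peak-efficiency figures quoted in this article,
the sketch below (in Python) derives a cluster's theoretical peak from its
processor count, clock rate, and floating-point operations per clock cycle,
then divides the reported LINPACK rate by that peak. The flops-per-cycle
values (4 for Itanium-2, 2 for the Xeon) are assumptions about the
processors, not figures taken from the article.

    # Sketch: theoretical-peak and LINPACK-efficiency arithmetic.
    # The flops-per-cycle values below are assumptions, not article data.

    def peak_gflops(processors, ghz, flops_per_cycle):
        """Theoretical peak in Gflops for a homogeneous cluster."""
        return processors * ghz * flops_per_cycle

    def efficiency_pct(linpack_gflops, peak):
        """Reported LINPACK rate as a percentage of theoretical peak."""
        return 100.0 * linpack_gflops / peak

    # OSC cluster above: 128 dual 900MHz Itanium-2 hosts, assumed
    # 4 flops/cycle, reported 761.98 Gflops.
    osc_peak = peak_gflops(128 * 2, 0.9, 4)                # 921.6 Gflops
    print(f"OSC: {efficiency_pct(761.98, osc_peak):.2f}% of peak")

    # FSL cluster at #8: 1536 Xeon processors at 2.2GHz, assumed
    # 2 flops/cycle, reported 3337 Gflops.
    fsl_peak = peak_gflops(1536, 2.2, 2)                   # 6758.4 Gflops
    print(f"FSL: {efficiency_pct(3337, fsl_peak):.1f}% of peak")

Under these assumptions the OSC system reproduces the 82.68% of peak quoted
above, while the FSL Xeon cluster comes in near 49% of peak on the same
accounting.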
****************************************************************************

HPCwire has released all copyright restrictions for this item. Please feel
free to distribute this article to your friends and colleagues. For a free
trial subscription, send e-mail to trial@hpcwire.tgc.com.