Qianfeng Zhang, C. Keppitiyagama, Alan S. Wagner
Proceedings. IEEE International Conference on Cluster Computing, 2002-09-23
DOI: 10.1109/CLUSTR.2002.1137731
Supporting MPI collective communication on network processors
We present work that extends MPI-NP, our previous Myrinet port of LAM/MPI, with collective communication primitives on the NIC. This work is another step in our experiment of making the NIC MPI-aware. We believe that an MPI-aware control program on the NIC can deliver a richer set of performance enhancements to MPI applications, not restricted to better bandwidth and latency. MPI collective communication involves considerable interaction between the communication subsystems of the nodes that is of no direct interest to the application. By migrating these talkative components to the Myrinet network interface card, we allow this dialog between the nodes to happen with minimal latency. We explore the advantages of supporting several MPI collective communication routines on the NIC: MPI_Bcast(), MPI_Barrier() and MPI_Comm_Create().