PDPTA '19: Proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniques & Applications (Las Vegas, ...) — Latest Publications
Enhancement of Neural Networks Novelty Filters with Genetic Algorithms
H. Elsimary
Pub Date: 2021-02-23  DOI: 10.21608/BFEMU.2021.150965  Pages: 924-927

{"title":"Time-Technology","authors":"Balan Subramanian","doi":"10.2307/j.ctv13qfvn4.9","DOIUrl":"https://doi.org/10.2307/j.ctv13qfvn4.9","url":null,"abstract":"","PeriodicalId":93135,"journal":{"name":"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...","volume":"107 1","pages":"1527-1534"},"PeriodicalIF":0.0,"publicationDate":"2020-07-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73224545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-Performance Host-Device Scheduling and Data-Transfer Minimization Techniques for Visualization of 3D Agent-Based Wound Healing Applications
N. Seekhao, G. Yu, S. Yuen, J. JaJa, L. Mongeau, N. Y. K. Li-Jessen
Pub Date: 2019-07-01  Pages: 69-76
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7592707/pdf/nihms-1054505.pdf

High-fidelity numerical simulations produce massive amounts of data. Analyzing these data sets as they are being generated provides useful insight into the processes underlying the modeled phenomenon. However, developing real-time in-situ visualization techniques for such large data volumes is challenging when the data does not fit on the GPU, since this requires expensive CPU-GPU data copies. In this work, we present a scheduling scheme that achieves real-time simulation and interactivity through GPU hyper-tasking. Furthermore, CPU-GPU communication is minimized using an activity-aware technique that reduces redundant copies. Our simulation platform is capable of visualizing 1.7 billion protein data points in situ at an average frame rate of 42.8 fps. This performance allows users to explore large data sets on a remote server with real-time interactivity while their simulations are running.

{"title":"A Comparative Xeon and CBE Performance Analysis","authors":"R. Fort, Robert Chun","doi":"10.31979/etd.j8y4-xxqw","DOIUrl":"https://doi.org/10.31979/etd.j8y4-xxqw","url":null,"abstract":"","PeriodicalId":93135,"journal":{"name":"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...","volume":"13 1","pages":"478-484"},"PeriodicalIF":0.0,"publicationDate":"2008-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83665781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Tuple Space Web Service for Distributed Programming
G. Wells
Pub Date: 2006-01-01  DOI: 10.5220/0001517000930100  Pages: 93-100

This paper describes a new tuple space web service for coordination and communication in distributed web applications. The web service is based on the Linda programming model. Linda is a coordination language for parallel and distributed processing that provides a communication mechanism based on a logically shared memory space. The original Linda model has been extended with a programmable matching mechanism, providing additional flexibility and improved performance. The implementation of the web service is discussed, together with the details of the programmable matching mechanism. Results from a location-based mobile application built on the tuple space web service are presented, demonstrating the benefits of our system.

Polynomial Time PAC Learnability of a Sub-class of Linear Languages
Y. Tajima, Y. Kotani, M. Terada
Pub Date: 2005-12-15  DOI: 10.2197/IPSJDC.1.643  Pages: 338-344

We propose PAC-like settings for the problem of learning a sub-class of linear languages and show its polynomial-time learnability in each of our settings. The sub-class of linear languages is newly defined, and it includes the class of regular languages and the class of even linear languages. We give a polynomial-time learning algorithm in either of the following settings, with a fixed but unknown probability distribution over examples. (1) In the first setting, the learner can use randomly drawn examples, membership queries, and a set of representative samples. (2) In the second setting, the learner can use randomly drawn examples, membership queries, and both the size of a grammar that can generate the target language and d, where d is the probability that the rarest rule of the target grammar occurs in the derivation of a randomly drawn example. In each case, for the target language L_t, the hypothesis L_h satisfies Pr[P(L_h Δ L_t) ≤ ε] ≥ 1 − δ for the error parameter 0 < ε ≤ 1 and the confidence parameter 0 < δ ≤ 1.

{"title":"Semantics Based Web Services Discovery","authors":"Shou-jian Yu, Jing-zhou Zhang, Xiao-Kun Ge, Guowen Wu","doi":"10.1007/978-3-540-30483-8_47","DOIUrl":"https://doi.org/10.1007/978-3-540-30483-8_47","url":null,"abstract":"","PeriodicalId":93135,"journal":{"name":"PDPTA '19 : proceedings of the 2019 International Conference on Parallel & Distributed Processing Techniquess & Applications. International Conference on Parallel and Distributed Processing Techniques and Applications (2019 : Las Vegas,...","volume":"16 1","pages":"388-393"},"PeriodicalIF":0.0,"publicationDate":"2004-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74531397","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Genetic Algorithm by Use of Virus Evolutionary Theory for Combinatorial Problems
S. Saito
Pub Date: 2002-06-24  DOI: 10.1142/9789812775368_0016  Pages: 222-227

Analysis of Bidirectional Associative Memory Using SCSNA and Statistical Neurodynamics
Hayaru Shouno, M. Okada
Pub Date: 2002-06-24  DOI: 10.1143/JPSJ.73.2406  Pages: 239-245

Bidirectional associative memory (BAM) is a kind of artificial neural network used to memorize and retrieve heterogeneous pattern pairs. Many efforts have been made to improve BAM from the viewpoint of computer applications, but few theoretical studies have been done. We investigated the theoretical characteristics of BAM within a framework of statistical-mechanical analysis. To investigate the equilibrium state of BAM, we applied self-consistent signal-to-noise analysis (SCSNA) and obtained macroscopic parameter equations and the relative capacity. Moreover, to investigate not only the equilibrium state but also the retrieval process of reaching it, we applied statistical neurodynamics to the update rule of BAM and obtained evolution equations for the macroscopic parameters. These evolution equations are consistent with the SCSNA results in the equilibrium state.

Automatic Implementation of Distributed Systems Formal Specifications
Antônio Carlos Lima de Santana, L. H. C. Branco, A. F. Prado, W. L. Souza, Marcelo Sant'Anna
Pub Date: 2000-05-01  DOI: 10.1007/3-540-45591-4_139  Pages: 1424-1429