Framework and Tools for Undergraduates Designing RISC-V Processors on an FPGA in Computer Architecture Education
Tyler McGrew, Eric Schonauer
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00148
Published in: 2019 International Conference on Computational Science and Computational Intelligence (CSCI)
Arguably, every computer engineering undergraduate should build a simple processor in the course of their degree, to internalize the basic design principles and properties of a computer. With the proliferation of FPGAs in universities, this is easily realizable in most undergraduate curricula. Many modern courses on computer architecture or organization rely on MIPS architectures (among others) as the base processor, but MIPS has seen little commercial success and has few real-world implementations, so students gain little additional career benefit from building and studying it. The growing industrial interest in the RISC-V ISA, its free availability, and its early success in real-world adoption make it a strong candidate for this educational space. This work provides suggestions on how undergraduates should build a RISC-V processor on an FPGA, along with a basic framework of tools and design principles for the exercise.
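As a taste of the exercise the paper advocates, the sketch below decodes an RV32I R-type instruction word into its fields. The bit positions follow the base RISC-V instruction encoding; the helper name and example instruction are our own:

```python
# Decode an RV32I R-type instruction word into its bit fields.
# Field positions follow the base RISC-V instruction encoding.
def decode_rtype(word: int) -> dict:
    return {
        "opcode": word & 0x7F,          # bits [6:0]
        "rd":     (word >> 7)  & 0x1F,  # bits [11:7]
        "funct3": (word >> 12) & 0x7,   # bits [14:12]
        "rs1":    (word >> 15) & 0x1F,  # bits [19:15]
        "rs2":    (word >> 20) & 0x1F,  # bits [24:20]
        "funct7": (word >> 25) & 0x7F,  # bits [31:25]
    }

# add x3, x1, x2 -> funct7=0, rs2=2, rs1=1, funct3=0, rd=3, opcode=0x33
fields = decode_rtype(0x002081B3)
```

In a student project, the same field extraction would be written in Verilog or VHDL as part of the decode stage; this Python version is just the reference semantics.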
Real Time Environmental/Biological Monitoring System
A. Abu-El Humos, H. Shih, M. Hasan, A. Eldek
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00222
This work aims to create a platform capable of transmitting data from the underwater environment of the Mississippi Sound directly to the cloud in real time. The platform will house different sensors, giving users real-time information on the status of the underwater environment. It will be designed for two-way communication, so the user may change the rate at which data is transmitted as well as when the platform becomes visible for retrieval. The platform will house a power storage unit capable of supporting data transmission over the cellular network for more than one month; this period is expected to increase as the system design is refined. The system will have minimal visibility on the water surface, eliminating the possibility of vandalism. This will be achieved with a special antenna that breaks the surface for transmission and is retracted otherwise. The system will also be equipped with an Underwater Timed Release (float release) mechanism, which allows a float to be released at a predetermined time; this time can be modified whenever the platform is in transmission mode. The platform will then be used with our gape measurement sensor system, allowing researchers to observe oyster gaping in real time. Since it has already been established that oyster gaping can be used to gauge the health of the environment, this system will provide a real-time monitor of the environmental health of the Mississippi Sound. Finally, an artificial intelligence (AI) model will be developed and trained to read data from the platform and issue alarms accordingly.
CAEN: A Deep Learning Approach to Mobile App Traffic Identification
Ding Li, Yuefei Zhu, Wei Lin, Yan Chen
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00206
Mobile app traffic now accounts for the majority of network traffic, owing to the proliferation of mobile devices and apps. State-of-the-art identification methods, such as DPI and flow-based classifiers, require features to be designed and samples to be labeled manually. Motivated by the success of CNNs in visual object recognition, we propose the convolutional autoencoder network (CAEN), a deep learning approach to mobile app traffic identification. Our contributions are two-fold. First, we propose a novel method of converting traffic flows into visually meaningful images, enabling the machine to identify traffic the way a human would. Based on this method, we create an open dataset named IMTD. Second, the convolutional autoencoder (CAE) algorithm is introduced into the proposed network model, enabling automatic feature extraction and learning from massive unlabeled samples. Experimental results show that the identification accuracy of our approach reaches 99.5%, which satisfies practical requirements.
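The flow-to-image conversion is the paper's own contribution; as a generic sketch of the idea only (not the authors' exact preprocessing), a flow's leading payload bytes can be padded or truncated and reshaped into a grayscale image:

```python
import numpy as np

# Generic sketch: map the first 784 bytes of a flow's payload to a
# 28x28 grayscale image (zero-padding short flows, truncating long ones).
# This is NOT the authors' exact CAEN preprocessing, just the common idea.
def flow_to_image(payload: bytes, side: int = 28) -> np.ndarray:
    n = side * side
    buf = payload[:n].ljust(n, b"\x00")   # pad or truncate to exactly n bytes
    return np.frombuffer(buf, dtype=np.uint8).reshape(side, side)

img = flow_to_image(b"\x17\x03\x03\x00\x45" * 100)  # a fake TLS-like payload
```

Images built this way can then be fed to any CNN-style model, which is what makes the "identify traffic like a vision problem" framing work.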
Effect of Training Data Order for Machine Learning
J. Mange
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00078
For many machine learning algorithms on supervised learning problems, the order of training data samples can affect the quality of the derived model and the accuracy of its predictions. This paper describes a project to quantify this effect, statistically measuring the variation exhibited by several algorithms across permutations of a given training data set. It is demonstrated that this variation can be quite significant, and that training data ordering should be an important consideration when approaching a classification task.
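A minimal sketch of the effect under study: a single-pass online learner such as the classic perceptron can end up with different weights depending purely on sample order (the data points here are our own toy example):

```python
# Sketch: a single-pass online perceptron is sensitive to sample order.
# Training on the same four points forward vs. reversed yields different
# weight vectors -- the effect the paper quantifies across permutations.
def train_perceptron(samples, lr=1.0):
    w, b = [0.0, 0.0], 0.0
    for (x1, x2), y in samples:            # one online pass, no shuffling
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
        if pred != y:                      # mistake-driven update
            w = [w[0] + lr * y * x1, w[1] + lr * y * x2]
            b += lr * y
    return w, b

data = [((2.0, 1.0), 1), ((1.0, 2.0), 1), ((-1.0, -2.0), -1), ((-2.0, -1.0), -1)]
w_fwd, _ = train_perceptron(data)
w_rev, _ = train_perceptron(list(reversed(data)))
# w_fwd != w_rev: same data, different order, different model
```

Both models separate this toy data, yet the learned weight vectors differ, which is exactly why prediction accuracy on held-out data can vary with training order.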
Indoor Electronic Traveling Aids for Visually Impaired: Systemic Review
A. Zvironas, M. Gudauskis
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00178
Visually impaired persons need electronic traveling aids (ETAs) for detecting and recognizing obstacles and for navigating to desired destinations, not only outdoors but also indoors. Without clear GPS signals, however, indoor navigation is technologically challenging. This paper provides a brief systematic overview and evaluation of current technological R&D approaches for indoor navigation. We assessed selected indoor navigation prototypes, examining their navigation technologies, sensors, computational devices, and feedback types. The evaluation and comparison of state-of-the-art indoor navigation solutions and their research implications are summarized and critically assessed. Our review also provides some technological clues for developers.
On Parameter Tuning in Meta-Learning for Computer Vision
F. Mohammadi, M. Amini, H. Arabnia
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00060
Learning to learn plays a pivotal role in meta-learning (MTL) for obtaining an optimal learning model. In this paper, we investigate image recognition for unseen categories of a given dataset with limited training information. We deploy a zero-shot learning (ZSL) algorithm to achieve this goal and explore the effect of parameter tuning on the performance of the semantic autoencoder (SAE). We further address the parameter tuning problem for meta-learning, focusing in particular on zero-shot learning. By combining different embedded parameters, we improved the accuracy of the tuned SAE. The advantages and disadvantages of parameter tuning and its application to image classification are also explored.
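Assuming the SAE here is the standard semantic autoencoder used in zero-shot learning, its projection W minimizes ||X - W^T S||^2 + lambda*||W X - S||^2, whose optimum satisfies the Sylvester equation S S^T W + lambda W X X^T = (1+lambda) S X^T; lambda is the embedded parameter being tuned. A minimal numpy sketch with illustrative dimensions and random data:

```python
import numpy as np

# Sketch of the semantic autoencoder (SAE) closed-form solve used in
# zero-shot learning: min ||X - W.T @ S||^2 + lam * ||W @ X - S||^2,
# whose optimum satisfies the Sylvester equation A@W + W@B = C with
# A = S@S.T, B = lam*X@X.T, C = (1+lam)*S@X.T.
# Dimensions and random data below are illustrative only.
rng = np.random.default_rng(0)
d, k, n, lam = 6, 3, 40, 0.5          # feature dim, semantic dim, samples
X = rng.standard_normal((d, n))       # visual features  (d x n)
S = rng.standard_normal((k, n))       # semantic embeddings (k x n)

A, B, C = S @ S.T, lam * (X @ X.T), (1 + lam) * (S @ X.T)
# Solve the Sylvester equation by vectorization:
# (I_d kron A + B.T kron I_k) vec(W) = vec(C), column-major vec.
M = np.kron(np.eye(d), A) + np.kron(B.T, np.eye(k))
W = np.linalg.solve(M, C.flatten(order="F")).reshape((k, d), order="F")
```

Varying lam trades reconstruction accuracy against semantic alignment, which is the kind of parameter sensitivity the paper explores.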
Combined Genetic Programming and Neural Network Approaches to Electronic Modeling
Louis Zhang, Qijun Zhang
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00284
An approach combining genetic programming (GP), neural networks, and electrical knowledge equations is presented for electronic device modeling. The proposed model includes a GP-generated symbolic function that accurately represents device behavior within the training range, and a knowledge equation that provides reliable tendencies of electronic behavior outside the training range. A correctional neural network is trained to align the knowledge equations with the GP-generated symbolic functions at the boundary of the training data. The proposed method is more robust than the GP-generated symbolic functions alone because of its improved extrapolation ability, and more accurate than the knowledge equations alone because of the genetic program's ability to learn non-ideal relationships inherent in practical data. The method is demonstrated on a practical high-frequency, high-power transistor, a HEMT (High-Electron-Mobility Transistor), used in wireless transmitters.
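As a generic illustration of the boundary-alignment idea (a deliberately simplified stand-in for the paper's correctional neural network, with hypothetical functions in place of the GP fit and the knowledge equation), a correction term can align the knowledge equation with the fitted function at the edge of the training range:

```python
import math

# Generic sketch of combining a data-driven fit with a knowledge equation:
# use the fitted symbolic function f_fit inside the training range and the
# knowledge equation f_know outside it, shifting f_know by the boundary gap
# so the combined model stays continuous. A constant shift stands in for
# the paper's correctional neural network; f_fit and f_know are made up.
def f_fit(v):          # hypothetical GP-found fit, accurate for v in [0, 1]
    return 2.0 * v + 0.3 * math.sin(3.0 * v)

def f_know(v):         # hypothetical knowledge equation, valid trend everywhere
    return 2.1 * v

V_MAX = 1.0
GAP = f_fit(V_MAX) - f_know(V_MAX)   # mismatch at the training boundary

def combined(v):
    if v <= V_MAX:
        return f_fit(v)              # accurate inside the training range
    return f_know(v) + GAP           # reliable trend outside, aligned at V_MAX
```

The combined model inherits the fit's in-range accuracy and the knowledge equation's extrapolation trend, which is the robustness argument the abstract makes.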
Mitigating Drift in Time Series Data with Noise Augmentation
Tonya Fields, G. Hsieh, Jules Chenou
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00046
Machine learning (ML) models must be accurate to produce quality AI solutions: there must be high accuracy both in the data and in the model built from it. Online machine learning algorithms fit naturally with use cases involving time series data. In online environments, the data distribution can change over time, producing what is known as concept drift. Real-life, real-time machine learning systems operating in dynamic environments must be able to detect drift, or changes in the data distribution, and adapt and update the ML model as the data changes over time. In this paper we present work on simulated drift added to time series ML models. We simulate drift on a multilayer perceptron (MLP), Long Short-Term Memory (LSTM), Convolutional Neural Networks (CNN), and Gated Recurrent Units (GRU). Results show that ML models incorporating recurrent neural network (RNN) variants are less sensitive to drift than the other models. By adding noise to the training set, we can recover model accuracy in the face of drift.
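A minimal sketch of the experimental setup described here (the drift form, window length, and noise level are our own illustrative choices, not the paper's): inject an abrupt concept drift into a synthetic series and build a noise-augmented training set from the pre-drift windows:

```python
import numpy as np

# Sketch: simulate concept drift in a time series and build the
# noise-augmented training set the paper reports as a mitigation.
# The drift form (a mean shift halfway through) is our own choice.
rng = np.random.default_rng(42)

t = np.arange(200)
series = np.sin(0.1 * t)
series[100:] += 1.5                       # abrupt concept drift at t = 100

def windows(x, w=10):
    """Sliding windows: predict the next value from the previous w."""
    X = np.stack([x[i:i + w] for i in range(len(x) - w)])
    return X, x[w:]

X_train, y_train = windows(series[:100])  # train on pre-drift data only

# Noise augmentation: append jittered copies of the training windows.
noise = 0.1 * rng.standard_normal(X_train.shape)
X_aug = np.vstack([X_train, X_train + noise])
y_aug = np.concatenate([y_train, y_train])
```

Any of the paper's models (MLP, LSTM, CNN, GRU) would then be trained on `(X_aug, y_aug)` and evaluated on the post-drift windows; the augmentation exposes the model to perturbed inputs it would otherwise only meet after the drift.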
Modeling of Medical Treatment Processes for an Interactive Assistance Based on the Translation of UML Activities into PROforma
Patrick Philipp, Silvia Becker, Sebastian Robert, D. Hempel, J. Beyerer
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00184
In modern medicine, Clinical Practice Guidelines (CPGs) are well-established resources for the appropriate treatment of diseases. Evidence-based CPGs contain recommendations that are based on the state of the art and have been reached by consensus among several experts. Nevertheless, translating guideline documents into specific actions for physicians can be problematic. We therefore propose to formalize the treatment process, together with a domain expert, in an understandable representation as UML activities. This formalization serves as a basis for transferring the knowledge into a model, in this case PROforma, which can be executed directly in interactive assistance software. The results of this work are part of an ongoing research project on the treatment of colon cancer based on the corresponding evidence-based CPG.
Memory Bandwidth Prediction in NUMA Architecture Using Supervised Machine Learning
S. Salehian, Lunjin Lu
Pub Date: 2019-12-01 | DOI: 10.1109/CSCI49370.2019.00282
In this paper, we predict memory bandwidth on NUMA architectures by implementing a method based on a supervised machine learning algorithm, k-Nearest Neighbor (KNN) regression. The main motivations for using KNN in our model are its flexibility with different data types, its ability to work with small data sets, its compatibility with irregular feature vectors, and its simplicity. Memory bandwidth usage is expressed as total transferred data per execution time, and it changes with respect to problem size and the number of processors. We use problem size and the number of threads as KNN features. We measure the memory bandwidth components, transferred data and execution time, over different ranges of problem sizes and thread counts. Then, using these values as training data, we predict memory bandwidth for unseen problem sizes and thread counts. The objective of this paper is not to reach accurate predictions for the individual memory bandwidth components, but rather to use them to achieve an acceptable level of memory bandwidth prediction. We implement this approach on a NUMA architecture and verify its accuracy by applying it to a range of regular and irregular high-performance computing applications. Using this approach, we can predict memory bandwidth along both dimensions. The highest prediction error is observed when the training data lack sufficient coverage of specific problem sizes and thread counts.
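The prediction scheme lends itself to a compact sketch. The KNN regressor below follows the paper's general approach of predicting bandwidth from (problem size, thread count) features, but the training grid, bandwidth values, k, and the normalization step are illustrative assumptions of ours:

```python
import numpy as np

# Sketch of the paper's approach: KNN regression over (problem size,
# thread count) features to predict memory bandwidth. The training grid
# and bandwidth values below are synthetic, for illustration only.
def knn_predict(X_train, y_train, x, k=3):
    # Normalize each feature so problem size doesn't dominate the distance.
    scale = X_train.max(axis=0)
    d = np.linalg.norm(X_train / scale - x / scale, axis=1)
    nearest = np.argsort(d)[:k]          # indices of the k closest samples
    return y_train[nearest].mean()       # uniform average of the k neighbors

# synthetic measurements: (problem_size, n_threads) -> bandwidth (GB/s)
X = np.array([[1024, 2], [1024, 4], [4096, 2], [4096, 4],
              [16384, 2], [16384, 4]], dtype=float)
y = np.array([8.0, 14.0, 9.0, 16.0, 10.0, 18.0])

pred = knn_predict(X, y, np.array([8192.0, 4.0]))  # unseen problem size
```

The query point interpolates between measured configurations, which mirrors the paper's observation that error is highest where the training grid gives the neighbors little coverage of the queried problem size and thread count.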