Anomaly Detection on an IoT-Based Vaccine Storage Refrigerator Temperature Monitoring System
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689150
Aji Gautama Putrada, M. Abdurohman
Maintaining temperature stability is paramount in research related to vaccine storage refrigerators. However, no prior work has implemented a monitoring system with anomaly detection (AD) and alerts for anomalous temperatures in vaccine storage refrigerators. The purpose of this study is to compare several AD methods to provide an optimum temperature alert system for an IoT-based vaccine storage refrigerator temperature monitoring system. To implement the proposed system, an Internet of Things (IoT) architecture is built around the Message Queuing Telemetry Transport (MQTT) communication protocol and other components, such as a PT-100 sensor and a NodeMCU microcontroller. Of the three AD methods applied and tested, histogram-based outlier score (HBOS), minimum covariance determinant (MCD), and one-class support vector machine (OCSVM), MCD achieves the best area under the curve (AUC) score of 0.9999. MCD also has the most balanced sensitivity and specificity compared to the other AD methods, with values of 1 and 0.99, respectively. The contribution of this research is an IoT system that measures and monitors the temperature of the vaccine storage refrigerator and issues alerts when anomalies occur in the temperature measurements.
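As a rough illustration of the comparison described above, the sketch below scores a synthetic refrigerator temperature series with two of the named detectors, MCD (via scikit-learn's EllipticEnvelope) and OCSVM, and computes the AUC. The data, contamination rate, and library choices are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch: compare MCD and OCSVM anomaly scores on synthetic fridge temperatures.
# Assumptions: synthetic data and scikit-learn detectors; not the paper's dataset or parameters.
import numpy as np
from sklearn.covariance import EllipticEnvelope   # MCD-based detector
from sklearn.svm import OneClassSVM               # OCSVM
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
normal = rng.normal(loc=4.0, scale=0.3, size=(950, 1))      # nominal 2-8 C vaccine range
anomalous = rng.normal(loc=10.0, scale=1.0, size=(50, 1))   # door-open / failure spikes
X = np.vstack([normal, anomalous])
y = np.hstack([np.zeros(950), np.ones(50)])                 # 1 = anomaly

for name, model in [("MCD", EllipticEnvelope(contamination=0.05, random_state=0)),
                    ("OCSVM", OneClassSVM(nu=0.05, gamma="scale"))]:
    model.fit(X)
    scores = -model.decision_function(X)   # higher score = more anomalous
    print(name, "AUC =", round(roc_auc_score(y, scores), 4))
```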
{"title":"Anomaly Detection on an IoT-Based Vaccine Storage Refrigerator Temperature Monitoring System","authors":"Aji Gautama Putrada, M. Abdurohman","doi":"10.1109/ICICyTA53712.2021.9689150","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689150","url":null,"abstract":"Maintaining temperature stability is paramount in research related to vaccine storage refrigerators. However, none have implemented a monitoring system with anomaly detection (AD) and alerts for anomalous temperatures in the vaccine storage refrigerators. The purpose of this study is to compare several AD methods to provide an optimum temperature alert system in an IoT-Based vaccine storage freezer temperature monitoring system. To implement the proposed system, an internet of things (IoT) architecture-based system is created with the message queue telemetry transport (MQTT) communication protocol and other specifications, such as a PT-100 sensor and a NodeMCU microcontroller. Based on the three AD methods applied and tested, histogram based outlier score (HBOS), minimum covariance determinant (MCD), and one class support vector machine (OCSVM), MCD has the best area under curve (AUC) score of 0.9999. Based on the value of sensitivity and specificity, MCD also has the most balanced value compared to other AD methods with values of 1 and 0.99, respectively. The contribution given by this research is an IoT system that can measure and monitor the temperature of the vaccine storage refrigerator and provide alerts if there are anomalies in the refrigerator temperature measurement.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"63 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122322195","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Detecting Heart Valve Disease Using Support Vector Machine Algorithm based on Phonocardiogram Signal
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689142
M. Farhan, Satria Mandala, M. Pramudyo
Valvular Heart Disease (VHD) is a type of heart valve disease triggered by a disorder or abnormality of one or more of the heart's four valves that makes it difficult for blood to flow into the next chamber or blood vessel, or vice versa. In recent years, many methods have been proposed to detect the occurrence of VHD. With advances in technology, these abnormalities can be detected using telemedicine. This paper analyzes the phonocardiogram (PCG) signal from the patient. There are three stages in detecting VHD, namely denoising, feature extraction, and PCG signal classification. The accuracy obtained from the whole detection process depends on the classification algorithm and its hyperparameters; therefore, selecting the right hyperparameters is important. Many works in the literature propose VHD detection. To address the above problems, this research proposes the development of a classification algorithm that improves VHD detection accuracy. In addition, a prototype based on the proposed algorithm is developed, and the detection accuracy of the prototype is analyzed. The methods used in this research are: (1) a literature study on VHD detection, (2) STFT denoising, (3) MFCC feature extraction, (4) SVM classification algorithm development, (5) evaluation, and (6) tuning of the SVM algorithm to obtain a higher score. The performance test results show that the proposed algorithm achieves an average accuracy of 99.5%, an F1 score of 99%, a recall of 99%, and a precision of 100%.
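The sketch below illustrates the MFCC-feature plus SVM-with-hyperparameter-tuning pipeline described above on synthetic signals. The use of librosa for MFCC extraction, the toy two-class signals, and the parameter grid are assumptions for illustration; they are not the paper's dataset, denoising step, or tuned values.

```python
# Minimal sketch of an MFCC + SVM classification pipeline with hyperparameter tuning.
# Assumptions: librosa for MFCC, synthetic "heart sound" signals, illustrative labels.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

rng = np.random.default_rng(0)
sr = 2000                                    # sampling rate (Hz), assumed
signals, labels = [], []
for i in range(40):
    t = np.arange(sr * 2) / sr               # 2-second recording
    f0 = 40 if i < 20 else 90                # two illustrative classes (normal vs VHD-like)
    signals.append(np.sin(2 * np.pi * f0 * t) + 0.3 * rng.standard_normal(t.size))
    labels.append(0 if i < 20 else 1)

# Feature extraction: mean MFCC vector per recording
X = np.array([librosa.feature.mfcc(y=s.astype(np.float32), sr=sr, n_mfcc=13).mean(axis=1)
              for s in signals])
y = np.array(labels)

# SVM with a small hyperparameter grid (the tuning step mentioned in the abstract)
grid = GridSearchCV(SVC(), {"C": [1, 10, 100], "kernel": ["rbf", "linear"]}, cv=5)
grid.fit(X, y)
print("best params:", grid.best_params_, "cv accuracy:", round(grid.best_score_, 3))
```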
{"title":"Detecting Heart Valve Disease Using Support Vector Machine Algorithm based on Phonocardiogram Signal","authors":"M. Farhan, Satria Mandala, M. Pramudyo","doi":"10.1109/ICICyTA53712.2021.9689142","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689142","url":null,"abstract":"Valvular Heart Disease (VHD) is a type of heart valve disease that is triggered by a disorder or abnormality of one or more of the four hearts that makes it difficult for blood to flow into the next chamber or blood vessel, or vice versa. In recent years, many methods have been proposed to detect the occurrence of VHD. With advances in technology to detect these abnormalities can use telemedicine technology. This paper analyzes the PCG signal (Phonocardiogram) from the patient. There are 3 stages in detecting VHD, namely denoising, feature extraction, and PCG signal classification. The accuracy value obtained from the whole detection process can change and be influenced by the results of the classification algorithm and hyperparameter. Therefore, the selection of the right hyperparameter is important. Of the many pieces of literature that propose VHD detection. To solve the above problems, this research proposes the development of a classification algorithm that supports the improvement of VHD detection accuracy. In addition, prototypes based on the proposed algorithm will also be developed. This research also analyzes the accuracy of the proposed prototype detection. The methods used in this research are 1. Literature study on VHD detection, 2. STFT Denoising, 3. MFCC Feature Extraction, 4. SVM classification algorithm development, 5. Evaluation, 6. Tune SVM algorithm to get higher score. The performance test results show that the proposed algorithm has achieved an average accuracy of 99.5%%, F1 Score is 99%, recall is 99%, precision 100%.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121721055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Improving PIR Sensor Network-Based Activity Recognition with PCA and KNN
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689200
Rofif Irsyad Fakhruddin, M. Abdurohman, Aji Gautama Putrada
By using low-cost passive infrared (PIR) sensors to detect movement and forming a wireless sensor network (WSN) combined with activity recognition (AR), activities or movements in each room can be detected and used for health, home automation, and security purposes. Other studies have shown that the hierarchical hidden Markov model (HHMM), an a posteriori method, is more accurate than classification methods such as Naïve Bayes, while in another study, methods such as k-nearest neighbors (KNN) achieved high performance because the datasets first went through pre-processing steps. The purpose of this study is to improve the performance of PIR sensor network-based AR using PCA as a pre-processing method and to compare the performance with AR in previous studies. In addition, KNN is used as the classification method for AR. To do that, a PIR sensor network is built: four PIR sensor nodes are deployed throughout a test house. A total of 37,150 data points collected from all PIR sensors over a span of 21 days are used to build the KNN model. The KNN model achieves an accuracy of 0.94 for AR classification. The PCA-KNN approach proposed in this research achieves higher performance than other studies that implement AR with a PIR sensor network. The proposed method is also a low-cost solution compared to studies that implement AR with more complex sensor combinations.
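A compact way to express the PCA pre-processing plus KNN classification described above is a scikit-learn pipeline. The sketch below uses synthetic PIR activation counts and a toy label, not the 21-day dataset from the paper, and the number of PCA components and neighbors are assumptions.

```python
# Minimal sketch of a PCA + KNN activity-recognition pipeline.
# Assumptions: synthetic PIR features (one activation count per sensor node) and toy labels.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_sensors = 600, 4                      # four PIR nodes, as in the abstract
X = rng.poisson(lam=rng.uniform(1, 6, size=(n_samples, n_sensors)))
y = X.argmax(axis=1)                               # toy label: room with the most activity

model = make_pipeline(StandardScaler(),
                      PCA(n_components=2),         # pre-processing step
                      KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(model, X, y, cv=5)
print("mean CV accuracy:", round(scores.mean(), 3))
```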
{"title":"Improving PIR Sensor Network-Based Activity Recognition with PCA and KNN","authors":"Rofif Irsyad Fakhruddin, M. Abdurohman, Aji Gautama Putrada","doi":"10.1109/ICICyTA53712.2021.9689200","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689200","url":null,"abstract":"With the use of low-cost passive infrared (PIR) sensors in detecting movement, forming a wireless sensor network (WSN) combined with activity recognition (AR), activities or movements that exist in each room can be detected and can be used for health, home automation, and security purposes. Other studies have proven that the hierarchical hidden Markov model (HHMM) method, an a posteriori method is more accurate than unsupervised classification methods such as Naïve Bayes but in another study, unsupervised methods such as k-nearest neighbors (KNN) can show high performance because previously, the datasets go through pre-processing steps. The purpose of this study is to improve the performance of PIR sensor network-based AR using PCA as a pre-processing method and compare the performance with AR in previous studies. In addition, KNN is used as the classification method for AR. To do that, a PIR sensor network needs to be built. 4 PIR sensor nodes are used throughout a test environment house. There are 37150 data that has been collected from all PIR sensors stored in a span of 21 days to build the KNN model. The accuracy results obtained from the KNN model for AR classification is 0.94. The PCA-KNN proposed in this research proves to have higher performance than other studies that also implement AR with PIR sensor network. The proposed method is also a low-cost solution compared to other studies that also implement AR but with more complex sensor combinations.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131949253","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SVD-Based Feature Extraction Technique for The Improvement of Effective Connectivity Detection
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689141
Abdulhakim Al-Ezzi, N. Kamel, Alaa Al-shargabi, N. Yahya, I. Faye, M. I. Al-Hiyali
The electroencephalogram (EEG) plays an essential part in identifying brain function and behaviors for different mental states. Nevertheless, the captured electrical activity is always contaminated with various artifacts that negatively influence the accuracy of EEG analysis. Therefore, it is crucial to build a model that constructively identifies and extracts clean EEG recordings during the investigation of dynamic brain networks. To improve the estimation of effective connectivity (EC) and EEG signal denoising, an EEG decomposition method based on singular value decomposition (SVD) is proposed. The main purpose of the decomposition is to estimate a signal that represents most of the principal components of the information contained in each brain region before calculating partial directed coherence (PDC). The SVD-based technique and PDC are used to quantify the causal influence of default mode network (DMN) regions on each other and to track changes in brain connectivity. Statistical analysis shows that effective connectivity estimated with the SVD-PDC algorithm better reflects the flow of causal information than independent component analysis (ICA)-PDC. The hybrid SVD-PDC algorithm is proposed in this work as an alternative robust adaptive feature extraction method for EEG signals to improve the detection of brain effective connectivity.
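To make the decomposition step concrete, the sketch below extracts one representative time course per brain region with an SVD of the region's channels-by-time matrix, which is the idea described above (a signal capturing most of the region's principal information before PDC is computed). The random "EEG" data and the single eight-channel region are assumptions; this is not the paper's montage or its PDC implementation.

```python
# Minimal sketch: SVD-based representative signal for one brain region.
import numpy as np

rng = np.random.default_rng(0)
n_channels, n_samples = 8, 1000
common = np.sin(2 * np.pi * 10 * np.arange(n_samples) / 250)    # shared 10 Hz component
region = common[None, :] * rng.uniform(0.5, 1.5, (n_channels, 1)) \
         + 0.2 * rng.standard_normal((n_channels, n_samples))

# Economy SVD of the channels-by-time matrix for this region
U, S, Vt = np.linalg.svd(region, full_matrices=False)

# The first right singular vector (scaled by its singular value) is the time course
# that captures the largest share of the region's variance.
representative = S[0] * Vt[0]
explained = S[0] ** 2 / np.sum(S ** 2)
print("variance explained by first component:", round(float(explained), 3))
```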
{"title":"SVD-Based Feature Extraction Technique for The Improvement of Effective Connectivity Detection","authors":"Abdulhakim Al-Ezzi, N. Kamel, Alaa Al-shargabi, N. Yahya, I. Faye, M. I. Al-Hiyali","doi":"10.1109/ICICyTA53712.2021.9689141","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689141","url":null,"abstract":"Electroencephalogram (EEG) plays an essential part in identifying brain function and behaviors for different mental states. Nevertheless, the captured electrical activity is always found to be contaminated with various artifacts that negatively influence the accuracy of EEG analysis. Therefore, it is crucial to build a model to constructively identify and extract clean EEG recordings during the investigation of the dynamical brain networks. To improve the estimation of effective connectivity (EC) and EEG signal denoising, an EEG decomposition method based on the singular value decomposition (SVD) analysis was proposed. The main purpose of the decomposition is to create a method to estimate a signal that represents most of the principal components of the information contained in each brain region before calculating the partial directed coherence (PDC). SVD-based technique and PDC were used to quantify the causal influence of default mode network (DMN) regions on each other and track the changes in brain connectivity. Results of statistical analysis on the effective connectivity using the SVD-PDC algorithm have shown to better reflect the flow of causal information than the independent component analysis (ICA)-PDC. The hybrid algorithm (SVD-PDC) is proposed in this work as an alternative robust adaptive feature extraction method for EEG signals to improve the detection of brain effective connectivity.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"193 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121075891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparative Analysis of Community Detection Methods for Link Failure Recovery in Software Defined Networks
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689089
Muhammad Yunis Daha, M. Zahid, A. Alashhab, Shahab Ul Hassan
The complexity of IP networks leads to underutilization of network resources. To address this problem, the concept of the Software Defined Network (SDN) has been introduced. SDN is a revolutionary networking paradigm that overcomes the limits of standard IP networks while also modernizing network infrastructures: it turns IP networks into programmable networks and upgrades the network infrastructure. Like traditional IP networks, SDN technology can experience network failures, and several research papers have investigated this issue using various methods. One technique in SDN is to employ community detection methods for link failure recovery. Although a variety of comparative analyses of community detection approaches have been published, they have not specifically considered link failure recovery scenarios in SDN. This paper presents a comparative analysis of the most commonly used community detection methods, combined with the Dijkstra algorithm, for link failure recovery in SDN. Extensive simulations are performed to evaluate the performance of the community detection methods. The simulation results show that the Infomap and Louvain community detection methods outperform the Girvan-Newman method, with 0.12% higher modularity, 27% lower average end-to-end latency, and 0.8% lower average data packet loss.
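The sketch below shows the two ingredients named above, community detection with a modularity score and Dijkstra-based rerouting after a link failure, on a tiny illustrative topology. It uses networkx's greedy modularity method as a stand-in for the Louvain/Infomap/Girvan-Newman implementations compared in the paper, and the topology and weights are assumptions.

```python
# Minimal sketch: community detection + Dijkstra rerouting after a link failure.
import networkx as nx
from networkx.algorithms import community

G = nx.Graph()
G.add_weighted_edges_from([
    ("s1", "s2", 1), ("s2", "s3", 1), ("s1", "s3", 5),   # community A
    ("s3", "s4", 1),                                     # inter-community link
    ("s4", "s5", 1), ("s5", "s6", 1), ("s4", "s6", 5),   # community B
])

communities = community.greedy_modularity_communities(G, weight="weight")
print("communities:", [sorted(c) for c in communities])
print("modularity:", round(community.modularity(G, communities, weight="weight"), 3))

# Primary path, then recovery path after a failure on link (s2, s3)
print("primary path:", nx.dijkstra_path(G, "s1", "s6", weight="weight"))
G.remove_edge("s2", "s3")
print("recovery path:", nx.dijkstra_path(G, "s1", "s6", weight="weight"))
```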
{"title":"Comparative Analysis of Community Detection Methods for Link Failure Recovery in Software Defined Networks","authors":"Muhammad Yunis Daha, M. Zahid, A. Alashhab, Shahab Ul Hassan","doi":"10.1109/ICICyTA53712.2021.9689089","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689089","url":null,"abstract":"The complexity of IP networks leads toward the minimum utilization of network resources. To address this problem the concept of SDN (Software Defined Network) has been introduced. SDN is a revolutionary networking paradigm that overcomes the limits of standard IP networks while also modernizing network infrastructures. SDN makes the IP networks into programable networks and upgrade the network infrastructure. Like traditional IP networks, SDN technology can experience network failures. Several research papers have investigated this issue utilizing several methods. One technique in SDN is to employ community detection methods for link failure recovery. Although a variety of comparing analyses have been given across community detection approaches, however, they have not considered the special comparative analysis for link failure recovery situations in SDN. This paper presents a comparative analysis of the most likely used community detection methods based on the Dijkstra algorithm for link failure recovery in SDN. Extensive simulations are performed to evaluate the performance of the community detection methods. The simulation results depict that the Infomap and Louvain community detection methods perform better and have more modularity by 0.12% and less average end-to-end latency by 27%, avg data packet loss by 0.8% than the Girvan and Newman community detection methods.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115316394","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Hierarchical Permissioned Blockchain and Traceability Of Requirement Changes
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689117
Sumayema Kabir Rocky, L. Rahim, Rohiza Ahmad, A. Sarlan
Requirement elicitation plays a crucial part in the success of a software project. Many project management methods are now moving towards agile development, which allows changes to the requirement specification in any phase of the project lifecycle. A project has many stakeholders, but not all of them have the same relevance to the software requirements. Sometimes requirement changes originate from the wrong stakeholder, and these changes cause problems later, such as unused functions or missing important functions, adding unnecessary cost to the project. Furthermore, change is inevitable in agile development, and with frequent changes it is difficult to track all the changes made. On the other hand, blockchain provides an immutable, traceable, decentralized platform where data can be added through consensus. In a hierarchical permissioned blockchain, the level of permission can be defined for each group or individual node through smart contracts. Therefore, a hierarchical permissioned blockchain is proposed for managing stakeholder permission to make changes and for tracing mid-development requirement changes. It is proposed that there should be a hierarchy among stakeholders and levels of permission governing which parts of the requirements they may change, according to the project's needs. This control may be implemented with smart contracts and, if needed, with intelligent agents as well. With the immutability of the blockchain, smart contracts, and external databases, traceability is also ensured.
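As a purely conceptual sketch of the two ideas above, role-based permission to change requirement sections and a traceable, tamper-evident change log, the plain-Python snippet below appends permitted changes to a hash-chained list. It uses no blockchain framework or smart-contract language; the role names, sections, and chain format are illustrative assumptions, not the paper's design.

```python
# Conceptual sketch: hierarchical change permissions + hash-chained change log.
import hashlib, json, time

ROLE_PERMISSIONS = {                       # hierarchy: which sections a role may change
    "product_owner": {"functional", "non_functional", "ui"},
    "tech_lead":     {"functional", "non_functional"},
    "developer":     {"functional"},
}

chain = [{"index": 0, "prev_hash": "0" * 64, "change": "genesis"}]

def record_change(role, section, description):
    """Append a requirement change only if the role is permitted to touch that section."""
    if section not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not change '{section}' requirements")
    prev = chain[-1]
    block = {"index": prev["index"] + 1,
             "prev_hash": hashlib.sha256(json.dumps(prev, sort_keys=True).encode()).hexdigest(),
             "time": time.time(), "role": role, "section": section, "change": description}
    chain.append(block)
    return block

record_change("product_owner", "ui", "Add dark-mode setting to requirement R-12")
record_change("developer", "functional", "Clarify input validation in R-04")
print(len(chain) - 1, "changes recorded; last block links to hash", chain[-1]["prev_hash"][:12], "...")
```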
{"title":"Hierarchical Permissioned Blockchain and Traceability Of Requirement Changes","authors":"Sumayema Kabir Rocky, L. Rahim, Rohiza Ahmad, A. Sarlan","doi":"10.1109/ICICyTA53712.2021.9689117","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689117","url":null,"abstract":"Requirement elicitation plays crucial part in success rates of a software project. Now a lot of project management methods are moving towards agile development which allows changes in requirement specification in any phase of project lifecycle. In a project there are many stakeholders. But not all of them has the same relevancy to the software requirements. Sometimes some requirement changes happen with wrong stakeholder source. These changes cause problem later. Such as unused function or missing an important function. This problem increases unnecessary cost for the project. Furthermore, change is inevitable in agile development. With frequent changes, it is difficult to track all the changes made. On the other hand, blockchain gives an immutable, traceable, decentralized platform where data can be added through consensus. In a hierarchical permissioned blockchain, the level of permission can be defined for each group or individual nodes through smart contract. Therefore, a hierarchical permissioned blockchain for stakeholder permission to make changes and traceability of mid-development requirement change is proposed. Here, it is proposed that there should be hierarchy among the stakeholders and levels of permission they have to change one or more part of the requirements according to the project needs. This control may be implemented with smart contracts and if needed, with intelligent agents as well. Also, with the immutability of blockchain and smart contracts and external databases, traceability will also be ensured.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122622904","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modeling of the PolyMUMPs-Based MEMS Sensor for Application in Trace Gas Detection
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689204
A. Algamili, M. Khir, A. Ahmed, O. L. Al-Mahdi, S. S. Ba-Hashwan, S. S. Alabsi
Gas detection sensors are crucial in many practical applications. However, many existing gas sensors still suffer from high power consumption, damping, and poor accuracy. These factors have a significant impact on a gas detection sensor's sensitivity and reliability. A Micro-Electro-Mechanical System (MEMS) sensor is presented in this paper, along with a high-efficiency model of it. The sensor is based on the standard Polysilicon Multi-User MEMS Process (PolyMUMPs). The detection of gaseous species depends on changes in the sensor's resonance frequency. The resonance frequency, quality factor, and mass sensitivity are observed to decrease as the beam length increases and to rise as the beam width increases, while the overall mass rises as either the beam length or width increases. The analytical values of the resonance frequency, quality factor, and mass sensitivity are found to be 9.3747 kHz, 4.5183, and 5.1676 mHz/pg, respectively.
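To show how the three reported quantities relate in a simple lumped resonator model, the sketch below evaluates the textbook relations f0 = (1/2π)√(k/m), Q = √(km)/c, and a small-mass sensitivity of roughly f0/(2m). The stiffness, effective mass, and damping values are illustrative assumptions, not the PolyMUMPs design parameters, so the numbers are not expected to reproduce the paper's results.

```python
# Back-of-the-envelope sketch of resonance frequency, quality factor, and mass sensitivity
# for a lumped spring-mass-damper resonator. All parameter values are assumed.
import math

k = 0.5           # spring constant of the beam (N/m), assumed
m = 1.5e-10       # effective mass (kg), assumed
c = 4e-8          # damping coefficient (kg/s), assumed

omega0 = math.sqrt(k / m)                 # angular resonance frequency (rad/s)
f0 = omega0 / (2 * math.pi)               # resonance frequency (Hz)
Q = math.sqrt(k * m) / c                  # quality factor of the damped oscillator
sensitivity = f0 / (2 * m)                # |df/dm| in Hz per kg (small added-mass approximation)

print(f"f0 = {f0 / 1e3:.3f} kHz")
print(f"Q  = {Q:.3f}")
print(f"mass sensitivity = {sensitivity * 1e-15 * 1e3:.3f} mHz/pg")
```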
{"title":"Modeling of the PolyMUMPs-Based MEMS Sensor for Application in Trace Gas Detection","authors":"A. Algamili, M. Khir, A. Ahmed, O. L. Al-Mahdi, S. S. Ba-Hashwan, S. S. Alabsi","doi":"10.1109/ICICyTA53712.2021.9689204","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689204","url":null,"abstract":"Gas detection sensor is crucial in many practical applications. However, numerous of the existing gas sensors still suffering from high power consumption, damping, and poor accuracy. These factors have a significant impact on the gas detection sensor's sensitivity and reliability. A Micro-Electro-Mechanical System (MEMS) is presented in this paper, along with its model with high efficiency. The sensor is based on standard Polysilicon Multi-Users-MEMS-Process (PolyMUMPs). The detection of gaseous species is dependent on a changes in the sensor's resonance frequency. The resonance frequency, quality factor, and mass sensitivity are observed to reduce as the beam length increases and to rise as the beam width increases. While overall mass rises as the length/width of the beam both increases. The analytical findings of the resonance frequency, quality factor, and mass sensitivity are found to be 9.3747 kHz, 4.5183, and 5.1676 mHz/pg, respectively.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129790984","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correlating Supply & Demand of Cooling & Energy between Gas District Cooling Model with Data Center
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689148
Nurul Syifa Shafirah Omar, L. T. Jung, L. Rahim
This study determines the correlation between the chilled water supply temperature from Gas District Cooling (GDC) operations and the cooling and energy demand from Data Centre (DC) operations. The GDC-DC modelling was first proposed by the Hitachi Research Team in UTP, because UTP has the advantage of a GDC that supplies the campus with electrical energy and chilled water for the air conditioners in UTP's academic buildings, chancellor complex, and mosque. This paper aims to find the contribution of a real-time system in optimizing the cloud DC in a way that impacts the cooling and energy demand. The cooling and energy demand of DC operation has been tested on a Linux real-time operating system with an AMD FX850 processor using selected job scheduling algorithms. Pearson's r correlation analysis between the GDC and DC shows a significant disparity between the chilled water supply temperature from the GDC and the cooling demand from the DC, with r = 0.130, which is greater than 0.05. Apart from that, the Round Robin (RR) algorithm reduces power consumption in the DC but does not reduce the cooling demand, while the First In First Out (FIFO) algorithm reduces the cooling demand in the DC, with power consumption following the same trend.
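The correlation test named above can be reproduced in a few lines. The sketch below computes Pearson's r between two made-up hourly series standing in for chilled water supply temperature and DC cooling demand; the data are illustrative assumptions, not the GDC/DC measurements analysed in the paper.

```python
# Minimal sketch of a Pearson correlation test between supply temperature and cooling demand.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
chilled_water_temp = rng.normal(6.5, 0.4, size=48)     # supply temperature (deg C), assumed
cooling_demand = rng.normal(120.0, 15.0, size=48)      # cooling demand (kW), assumed, unrelated

r, p = pearsonr(chilled_water_temp, cooling_demand)
print(f"r = {r:.3f}, p-value = {p:.3f}")
# A small |r| indicates no meaningful linear relationship, i.e. the kind of disparity
# between GDC supply temperature and DC cooling demand reported above.
```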
Classification of ASD Subtypes Based on Coherence Features of BOLD Resting-state fMRI Signals
2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA) | Pub Date: 2021-12-01 | DOI: 10.1109/ICICyTA53712.2021.9689092
M. I. Al-Hiyali, N. Yahya, I. Faye, Abdulhakim Al-Ezzi
Resting-state brain functional connectivity (FC) patterns play an essential role in the development of autism spectrum disorder (ASD) classification models based on functional magnetic resonance imaging (fMRI) data. Due to the limited number of models in the literature for identifying ASD subtypes, a multiclass classification is introduced in this study. The aim of this study is to develop an ASD diagnosis model using convolutional neural networks (CNN) with dynamic FC as input. The rs-fMRI dataset used in this study consists of 35 individuals from multiple sites, labeled by autistic disorder subtype (ASD, APD, and PDD-NOS) and normal control (NC). The Automated Anatomical Labeling (AAL) atlas is selected as the brain atlas for defining brain nodes. The BOLD signals of the nodes are extracted, and the dynamic FC between brain nodes is then determined using our new wavelet coherence metric (WCF), which quantifies the overall variability of coherence at specific low-frequency scales over time. Based on the statistical analysis of WCF values between ASD and NC, six node pairs are identified. The classification algorithm is developed using a CNN with the wavelet coherence maps (scalograms) of these node pairs as input, and the CNN is trained and tested within a cross-validation framework. The multiclass classification results provide an average accuracy of 88.6%. The results of this study illustrate the good potential of the wavelet coherence technique for analysing dynamic FC and open up possibilities for its application in diagnostic models, not only for ASD but also for other neuropsychiatric disorders.
{"title":"Classification of ASD Subtypes Based on Coherence Features of BOLD Resting-state fMRI Signals","authors":"M. I. Al-Hiyali, N. Yahya, I. Faye, Abdulhakim Al-Ezzi","doi":"10.1109/ICICyTA53712.2021.9689092","DOIUrl":"https://doi.org/10.1109/ICICyTA53712.2021.9689092","url":null,"abstract":"Resting-state brain functional connectivity (FC) patterns play an essential role in the development of autism spectrum disorder (ASD) classification models based on functional magnetic resonance imaging (fMRI) data. Due to the limited number of models in the literature for identifying ASD subtypes, a multiclass classification is introduced in this study. The aim of this study is to develop an ASD diagnosis model using convolutional neural networks (CNN) with dynamic FC as inputs. The rs-fMRI dataset used in this study consists of 35 individuals from multiple sites labeled based on autistic disorder subtypes (ASD, APD, and PDD-NOS) and normal control (NC). The Atlas for Automated Anatomical Labeling (AAL) is selected as the brain atlas for defining brain nodes. The BOLD signals of the nodes are extracted and then the dynamic FC between brain nodes is determined using our new metric wavelet coherence (WCF), where WCF quantifies the overall variability of coherence in specific low-frequency scales over the time. Based on the statistical analysis of WCF values between ASD and NC, 6 pairwise nodes are identified. Classification algorithm is developed using CNN, and wavelet coherence maps (scalogram) of pairwise nodes. The training and testing of the CNN is using a cross-validation framework. The results of the multiclass classification provided an average accuracy of 88.6%. The results of this study illustrate the good potential of the wavelet coherence technique in analysing dynamics FC and open up possibilities for its application in diagnostic models, not only for ASD but also for other neuropsychiatric disorders.","PeriodicalId":448148,"journal":{"name":"2021 International Conference on Intelligent Cybernetics Technology & Applications (ICICyTA)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124296732","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}