Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127795
Yus Kelana, S. Rizal, Sofia Saidah
Colon cancer causes the most cancer deaths in Indonesian society. Detection of the disease through histopathological images still relies on manual reading by doctors, so a system to detect and classify colon cancer is needed. This study aims to build a colon cancer classification system that reduces the time required to categorize colon cancer. The system classifies colon cancer into two classes, adenocarcinoma and polyp. The data, obtained online from the Kaggle website, consist of 2000 histopathological JPEG images of 768 pixels. The system is built with the Convolutional Neural Network (CNN) method using the MobileNet architecture. The design analyzes the parameters that affect system performance: image size, optimizer, learning rate, activation function, and batch size. Performance is evaluated with accuracy, precision, recall, and F1-score. Testing across these parameters yielded the best model with an image size of 224x224 pixels, the Adam optimizer, a learning rate of 0.0001, a sigmoid activation function, and a batch size of 40. The best model achieved 100% accuracy, 100% precision, 100% recall, and a 100% F1-score, with a loss of 0.000135.
Title: Classification of Histopathological Images of Colon Cancer Using Convolutional Neural Network Method
Published in: 2023 International Conference on Computer Science, Information Technology and Engineering (ICCoSITE)
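The evaluation metrics named above (accuracy, precision, recall, F1-score) all derive from the binary confusion counts. A minimal sketch in plain Python, assuming label 1 = adenocarcinoma and 0 = polyp (the paper does not state this encoding):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall and F1 from binary predictions
    (treating class 1 as the positive class, an assumption)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, precision, recall, f1
```

A model that classifies every test image correctly, as reported here, scores 1.0 on all four metrics.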
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127845
Mera Kartika Delimayanti, Anggi Mardiyono, Bambang Warsuta, Eka Suci Puspitaningrum, R. F. Naryanto, Agustien Naryaningsih
Various unwelcome conditions have existed since the emergence of the novel coronavirus disease (COVID-19). COVID-19 can cause fever, muscle soreness, shortness of breath, cough, headache, and other symptoms. Early diagnosis is crucial for successful treatment, so alternative methods for COVID-19 detection are needed. Among existing imaging resources, X-ray images are widely accessible and inexpensive, and therefore provide an alternative diagnostic tool for detecting COVID-19 cases. In the early stages of COVID-19, X-rays can detect the disease before it spreads through the lungs and causes more damage. Machine learning models can help clinicians work more quickly and accurately, and deep learning algorithms have already been used to diagnose illnesses from chest X-ray and fundus images. This research classifies chest X-ray images into COVID-19 and normal cases based on public datasets. The analysis uses 5600 images from accessible resources, and a Convolutional Neural Network (CNN) architecture with the VGG16 algorithm was employed to diagnose COVID-19. VGG16 is an object identification and classification method that can classify with greater precision than most other deep learning algorithms. Transfer learning and fine-tuning were employed to improve performance. The results show that the VGG16 network achieved an accuracy of 98.13%. This research has implications for the early detection of COVID-19 using X-ray images. The experiments and analysis reveal the proposed method's promising and stable performance compared with the current standard.
Title: Implementation of Convolutional Neural Network for COVID19 Screening using X-Rays Images
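The transfer-learning-plus-fine-tuning recipe described above keeps pretrained layers frozen and trains only a small classification head. A toy sketch of that idea in plain Python, with a fixed function standing in for the frozen VGG16 features and a logistic-regression head; everything here is illustrative, not the paper's implementation:

```python
import math

def frozen_features(x):
    # Fixed, non-trainable transform (placeholder for frozen conv layers).
    return [x[0] + x[1], x[0] - x[1]]

def train_head(data, epochs=200, lr=0.5):
    # Train only the classification head; the extractor above never changes.
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            f = frozen_features(x)
            z = w[0] * f[0] + w[1] * f[1] + b
            p = 1.0 / (1.0 + math.exp(-z))   # sigmoid output
            g = p - y                        # d(log-loss)/dz
            w = [w[0] - lr * g * f[0], w[1] - lr * g * f[1]]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = frozen_features(x)
    return 1 if w[0] * f[0] + w[1] * f[1] + b > 0 else 0
```

Fine-tuning, as used in the paper, would additionally unfreeze some extractor layers late in training; that step is omitted here for brevity.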
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127771
Shakil Ahmed
Natural disasters cause the loss of billions of dollars, resources, and human lives. Integrating the latest emerging technologies, such as Device-to-Device (D2D) communication and the Internet of Things (IoT), plays a significant role in current disaster management systems by extending coverage, processing massive volumes of data, minimising data traffic load, and providing energy-efficient communication. However, several critical security issues remain and have attracted many researchers' attention. Blockchain is one of the latest technologies that could solve the various security issues of D2D and IoT devices. This paper analyses and discusses security issues in early- and post-disaster communication systems. It proposes a Hyperledger Fabric (HLF) blockchain-based communication model for disaster communication systems that overcomes security threats and enhances the response system by authenticating and authorising participant nodes. The HLF blockchain framework is implemented to validate the proposed model and measure system performance metrics.
Title: A Novel Data Security Model of D2D Communication Using Blockchain for Disaster
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127713
P. Purwono, A. Burhan, K. Nisa', Sony Kartika Wibisono, Iis Setiawan Mangkunegara, Pramesti Dewi, A. Ma’arif, Iswanto Suwarno
Diabetes is a disease that affects many people worldwide. Diabetes mellitus is a metabolic disease in which blood glucose levels become extreme because the body produces insufficient insulin. Monitoring diabetes is an important research concern because it can improve the quality of nursing services. A common complication in diabetic patients is ulceration, which is difficult to detect in time. Technology can minimize the total cost of monitoring chronic diseases such as diabetes continuously and on time. This research develops IoT-based smart diabetic shoes that use pressure sensors and the foot temperature of diabetic patients. The shoes use the LoRa E32 module so they can be deployed in areas with poor internet connections. In tests carried out over 60 seconds, the system detected the foot area under the greatest pressure, located at the rear footrest and carrying about 25-28% of total body weight. The patient's foot temperature increases as the load on the footrest grows. The LoRa E32 module also transmits and receives sensor data at a distance of 2.2 km.
Title: Prototype Design of Smart Diabetic Shoes with Lora Module Communication
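The reported pressure portions (about 25-28% of body weight at the rear footrest) come from comparing each load cell's reading with the total load. A minimal sketch, assuming the five raw load-cell readings are already available as numbers:

```python
def pressure_portions(readings):
    """Fraction of the total load carried by each load cell
    (five cells assumed, as in the prototype)."""
    total = sum(readings)
    return [r / total for r in readings]

def heaviest_region(readings):
    """Index of the cell under the greatest pressure."""
    return max(range(len(readings)), key=lambda i: readings[i])
```

For example, readings of [10, 10, 10, 10, 60] grams-force would put 60% of the load on the fifth cell.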
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127836
H. Yuliandoko, Farisqi Panduardi, N. Lusi, Sapto Wibowo
Indonesia has the world's largest Muslim population, so the potential for Zakat, Infaq, and Sadaqah (ZIS) funding is enormous. ZIS is extremely important during the current Covid-19 pandemic, as problems such as economic crisis, social inequality, and the growing number of dhuafa (poor) become more and more visible. However, the coronavirus has made ZIS more challenging to manage and distribute: bans on gatherings, distancing rules, and other restrictions to prevent its spread complicate ZIS management. Innovations are therefore needed to build bridges between people in need and their supporters. The Internet of Things (IoT) is one system that can provide this bridge, enabling machine-to-machine communication and reducing human interaction. This research uses IoT to build a rice vending machine with high accuracy, achieved with five load cells and a well-designed rice outlet valve mechanism; previous machines used a single load cell or opened the valve for a fixed time. Based on the test results and MAPE values, the machine's accuracy was very good, and the website monitoring was also very effective. This research provides innovation in ZIS management and greatly benefits society.
Title: Precision Rice Vending Machine by Using Multiple Load Cell and IoT Based
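MAPE, which the paper uses to judge dispensing accuracy, can be sketched as follows (the requested and dispensed weights below are hypothetical, not the paper's measurements):

```python
def mape(actual, dispensed):
    """Mean Absolute Percentage Error between requested and dispensed amounts."""
    errors = [abs(a - d) / a for a, d in zip(actual, dispensed)]
    return 100.0 * sum(errors) / len(errors)
```

A lower MAPE means the dispensed weight tracks the requested weight more closely.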
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127745
Rayhan Sabian, Antok Supriyanto, Sulistiowati
An account under the name Bjorka claimed to have obtained billions of SIM card registration records, containing Identity Card and Family Card numbers, from the government database of the Ministry of Communication and Informatics (Kemkominfo), leading people to question the cybersecurity of government databases. The appearance of the Bjorka hacker drew various responses on Twitter; some supported Bjorka's actions and some disagreed. Sentiment analysis is therefore needed to determine whether public sentiment leans negative or positive, so the government can evaluate and plan strategically for future data-leak incidents. This study uses tweets containing public responses to predict negative or positive sentiment with the Support Vector Machine algorithm. Of 1017 public responses, 97.35% (990 tweets) were negative and 2.65% (27 tweets) were positive, indicating that the public response to the data leak by Bjorka is predominantly negative. In conclusion, educating the public about the Bjorka data leaks is not the government's main priority; the government can focus on other areas, such as improving the security of the data itself.
Title: Public Sentiment Analysis of KOMINFO Data Leaking by Bjorka using Support Vector Machine
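Before an SVM can classify tweets, the text must be turned into numeric vectors. The abstract does not name a vectoriser, so the common TF-IDF step is sketched here as an assumption, in plain Python:

```python
import math
from collections import Counter

def tfidf(docs):
    """Plain TF-IDF vectorisation over whitespace-tokenised documents,
    a usual preprocessing step before an SVM (assumed, not stated in the paper)."""
    n = len(docs)
    tokenised = [doc.lower().split() for doc in docs]
    # Document frequency: in how many documents each term appears.
    df = Counter(term for toks in tokenised for term in set(toks))
    vocab = sorted(df)
    idf = {t: math.log(n / df[t]) for t in vocab}
    vectors = []
    for toks in tokenised:
        tf = Counter(toks)
        vectors.append([tf[t] / len(toks) * idf[t] for t in vocab])
    return vocab, vectors
```

Terms that appear in every tweet get an IDF of zero and thus carry no weight, which is why they are often dropped as stop words.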
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127725
Salsabila Mazya Permataning Tyas, R. Sarno, Agus Tri Haryono, Kelly Rossa Sungkono
Stock news is one of the information sources that can be used to monitor stock prices. Stock news usually carries positive and negative sentiment that can affect stock prices, so sentiment analysis is needed to process it. The stock news dataset is taken from Kaggle and has an imbalance between the positive and negative sentiment classes. This research proposes handling the imbalanced dataset with random oversampling, which randomly replicates minority-class examples. It presents several text pre-processing scenarios with different stages, with the aim of achieving high accuracy. The classification method used is a robustly optimized Bidirectional Encoder Representations from Transformers model (RoBERTa). The paper also compares against Machine Learning (ML) baselines such as Multinomial Naïve Bayes, Bernoulli Naïve Bayes, Support Vector Machine, Random Forest, and Logistic Regression, using two text representations, TF-IDF and Word2Vec. The best result is obtained with the RoBERTa method and the fourth pre-processing scenario, which only removes hashtags, without removing punctuation, removing or converting numbers, stop-word removal, or lemmatization. The performance is 0.85 precision, 0.84 recall, 0.84 F1-score, and 86% accuracy.
Title: A Robustly Optimized BERT using Random Oversampling for Analyzing Imbalanced Stock News Sentiment Data
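Random oversampling as described, randomly replicating minority-class examples until the classes balance, can be sketched in plain Python (labels and samples below are illustrative):

```python
import random

def random_oversample(samples, labels, seed=0):
    """Duplicate randomly chosen minority-class examples until every class
    has as many examples as the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for s, l in zip(samples, labels):
        by_class.setdefault(l, []).append(s)
    target = max(len(v) for v in by_class.values())
    out_samples, out_labels = [], []
    for l, items in by_class.items():
        extra = [rng.choice(items) for _ in range(target - len(items))]
        for s in items + extra:
            out_samples.append(s)
            out_labels.append(l)
    return out_samples, out_labels
```

Oversampling is applied to the training split only; duplicating examples into the test split would leak data.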
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127790
Y. Heryadi, B. Wijanarko, Dina Fitria Murad, C. Tho, Kiyota Hashimoto
The transformer has become a state-of-the-art model in Natural Language Processing. The initial model, known as the vanilla transformer, was designed to improve on prominent models for sequence modeling and transduction problems such as language modeling and machine translation. It has 6 stacks of identical encoder-decoder layers with an attention mechanism, aiming to push past the limitations of common recurrent language models and encoder-decoder architectures. Its outstanding performance has inspired many researchers to extend the architecture to improve its performance and computational efficiency. Despite the many extensions, there is no clear explanation of the encoder-decoder depth chosen in the vanilla transformer. This paper explores how the combination of encoder-decoder layer depth and the activation function in the feed-forward layer affects the vanilla transformer's performance. The model is tested on a downstream task: text translation from Bahasa Indonesia to Sundanese. Although the differences are not large, the empirical results show that depth = 2 with the Sigmoid, Tanh, or ReLU activation function, and depth = 6 with ReLU, give the highest average training accuracy. Interestingly, depth = 6 with ReLU shows the lowest average training and validation loss. Statistically, however, there is no significant difference between depths and activation functions.
Title: Revalidating the Encoder-Decoder Depths and Activation Function to Find Optimum Vanilla Transformer Model
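The experiment varies the activation inside the transformer's position-wise feed-forward sublayer. A minimal sketch of that sublayer with a pluggable activation (biases and the attention sublayers are omitted for brevity; the weights in the test are illustrative):

```python
import math

def relu(x):
    return max(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def feed_forward(x, w1, w2, act):
    """Position-wise feed-forward sublayer FFN(x) = W2 * act(W1 * x),
    with the activation function swapped as in the paper's experiments."""
    hidden = [act(sum(wi * xi for wi, xi in zip(row, x))) for row in w1]
    return [sum(wi * hi for wi, hi in zip(row, hidden)) for row in w2]
```

Each (depth, activation) pair then defines one model variant to train and compare, as the paper does for depths up to 6 with Sigmoid, Tanh, and ReLU.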
Pub Date: 2023-02-16 | DOI: 10.1109/ICCoSITE57641.2023.10127673
R. B. Widodo, Devina Trixie, W. Swastika
Operating a computer requires humans to use several parts of their body, but some people, such as accident victims and people with disabilities, cannot operate computers correctly or in a normal position. A system is therefore needed to make operating a computer easier for them. This study developed a system that classifies click types using EMG sensors with the K-NN and SVM methods. The EMG sensors capture signals from human muscle contractions, which are then classified as left-click or right-click. The sensor data were used to train the K-NN and SVM methods on 54 samples in each class, left-click and right-click. K-NN was trained with k = 3, 5, 7, 9, and 11; SVM used linear, Radial Basis Function (RBF), polynomial, and sigmoid kernels, and the accuracies of the two methods were then compared. The study successfully classified click types from the EMG input: K-NN achieved its highest accuracy of 81.81% with k = 3, and SVM achieved 84.84% with the polynomial kernel, so the polynomial-kernel SVM gave the best overall accuracy. Adding data and experimenting with other methods for further comparison could improve system accuracy.
Title: Study of EMG-based Mouse Clicks Type Detection
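The comparison described above (K-NN over several values of k versus SVM over several kernels, scored by held-out accuracy) can be sketched with scikit-learn. This is a minimal illustration, not the authors' code: the synthetic bursts and the mean-absolute-value/RMS features below are assumptions standing in for real EMG recordings, while the class sizes (54 per class) and the hyperparameter grids come from the abstract.

```python
# Sketch of the K-NN vs. SVM click-type comparison; synthetic signals
# stand in for real EMG windows (assumption, not the paper's data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def make_emg_windows(n, amp):
    # Synthetic contraction windows: noise whose amplitude depends on class.
    return rng.normal(0.0, amp, size=(n, 200))

def features(windows):
    # Mean absolute value and RMS per window -- common EMG features
    # (an illustrative choice; the paper does not specify its features).
    mav = np.mean(np.abs(windows), axis=1)
    rms = np.sqrt(np.mean(windows ** 2, axis=1))
    return np.column_stack([mav, rms])

# 54 samples per class, as in the study: left-click (0) vs right-click (1).
X = np.vstack([features(make_emg_windows(54, 0.5)),
               features(make_emg_windows(54, 1.0))])
y = np.array([0] * 54 + [1] * 54)
Xtr, Xte, ytr, yte = train_test_split(
    X, y, test_size=0.3, random_state=1, stratify=y)

results = {}
for k in (3, 5, 7, 9, 11):
    clf = KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr)
    results[f"knn_k{k}"] = clf.score(Xte, yte)
for kernel in ("linear", "rbf", "poly", "sigmoid"):
    clf = SVC(kernel=kernel).fit(Xtr, ytr)
    results[f"svm_{kernel}"] = clf.score(Xte, yte)

best = max(results, key=results.get)
print(best, round(results[best], 3))
```

On real data the ranking would of course depend on the recorded signals; the point of the sketch is the evaluation loop, which mirrors the paper's design of comparing nine configurations by test accuracy.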
Pub Date: 2023-02-16
DOI: 10.1109/ICCoSITE57641.2023.10127747
Shinta Aprilia Safitri, A. H. Saputro
Deep learning models with hyperspectral images have been developed for food identification. This approach is known to achieve high accuracy without damaging the test sample. However, most CNN models developed so far can identify only a single target. That is inefficient for multi-target tasks such as rice quality identification, where quality is represented by multiple parameters, because the model must be trained separately for each target. In this study, we propose a model called Multi-output Spectral Xception that classifies objects in multi-class, multi-output problems from hyperspectral image input. The proposed model replaces 2D convolution layers with 3D convolution layers, which effectively extract both spectral and spatial features. The model was evaluated on Indonesian rice with eight varieties and two cultivation techniques. Performance was evaluated by computing accuracy from the confusion matrix and comparing it against state-of-the-art models.
The results show that the proposed model achieved the best performance among the compared models, with an average accuracy of 97.82%.
Title: Identification of Rice Varieties and Cultivation Techniques based-on Hyperspectral Image using Multi-output Spectral Xception
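The core design the abstract describes, 3D convolutions over a hyperspectral cube feeding multiple classification heads (eight varieties, two cultivation techniques), can be sketched as follows. This is an illustrative PyTorch module, not the paper's Xception-based architecture: the layer counts, channel widths, and kernel sizes are assumptions chosen only to show the multi-output, spectral-spatial idea.

```python
# Minimal multi-output spectral-spatial classifier sketch (PyTorch).
# Layer sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class MultiOutputSpectralNet(nn.Module):
    def __init__(self, n_varieties=8, n_techniques=2):
        super().__init__()
        # 3D convolutions slide over the spectral axis and both spatial
        # axes together, so each filter learns joint spectral-spatial
        # features -- the motivation for replacing 2D convs with 3D convs.
        self.backbone = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(8, 16, kernel_size=(7, 3, 3), padding=(3, 1, 1)),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
            nn.Flatten(),
        )
        # Two output heads sharing one backbone: a single training run
        # covers both targets instead of one model per target.
        self.variety_head = nn.Linear(16, n_varieties)
        self.technique_head = nn.Linear(16, n_techniques)

    def forward(self, x):
        z = self.backbone(x)
        return self.variety_head(z), self.technique_head(z)

# Dummy batch: 4 hyperspectral cubes, 60 bands of 32x32 pixels
# (band count and patch size are placeholders).
model = MultiOutputSpectralNet()
cubes = torch.randn(4, 1, 60, 32, 32)
variety_logits, technique_logits = model(cubes)
print(variety_logits.shape, technique_logits.shape)
```

Training such a model typically sums a cross-entropy loss per head, which is what makes the single shared backbone serve both targets at once.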