An Implementation of Support Vector Machine Classification for Developer Academy Acceptance Prediction Model
Trianggoro Wiradinata, Rinabi Tanamal, Theresia Ratih Dewi Saputri, Y. Soekamto
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590146

One way to prepare graduates for work in the IT industry, specifically in mobile-app development, is a Developer Academy: an intensive program in which final-year students spend two consecutive semesters learning the stages of mobile-app development. To ensure the quality of participants, a set of selection procedures is needed, consisting of an aptitude test, a portfolio showcase, and an individual interview. The problem arises when applicants far outnumber the class capacity, so the selection procedures take longer. The Developer Academy registration team recorded a ratio of 1:12, which overburdens the team when selecting applicants. More effective procedures are needed, with machine learning tools supporting the decision making. This study aims to produce a prediction model for Developer Academy applicants. Several classification algorithms were analyzed: k-nearest neighbors, support vector machine, decision tree, and random forest. Data were collected from 527 valid applicants who submitted complete documents by the due date; applicants with incomplete documents were excluded from the analysis. Preliminary findings show that the support vector machine performs best, with an accuracy of 86%; applying oversampling and the kernel trick raised the accuracy to 98%. Hence the prediction model can be considered to have excellent performance.

Keywords: developer academy, artificial intelligence, machine learning, support vector machine, data science, classification
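The combination the abstract describes (an imbalanced accept/reject dataset, naive random oversampling of the minority class, and an RBF "kernel trick") can be sketched as follows. This is only an illustration on synthetic data, not the authors' pipeline: the 527-applicant dataset is not public, so the features, class ratio, and scores below are stand-ins.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for applicant features (e.g. aptitude, portfolio,
# interview scores); deliberately imbalanced like an acceptance dataset.
n_accept, n_reject = 60, 300
X = np.vstack([rng.normal(1.0, 1.0, (n_accept, 3)),
               rng.normal(-1.0, 1.0, (n_reject, 3))])
y = np.array([1] * n_accept + [0] * n_reject)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Naive random oversampling: repeat minority-class rows until classes balance.
minority = X_tr[y_tr == 1]
extra = minority[rng.integers(0, len(minority),
                              (y_tr == 0).sum() - len(minority))]
X_bal = np.vstack([X_tr, extra])
y_bal = np.concatenate([y_tr, np.ones(len(extra), dtype=int)])

# The "kernel trick" from the abstract: an RBF kernel instead of a linear one.
clf = SVC(kernel="rbf", gamma="scale").fit(X_bal, y_bal)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

Note that oversampling is applied only to the training split, so the test accuracy is not inflated by duplicated rows.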
Encryption with Covertext and Reordering using Permutated Table and Random Function
E. Ardhianto, Widiyanto Tri Handoko, Hari Murti, Rara Redjeki
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590171

Some entities' documents are confidential and important, so security is required. Encryption with Covertext and Reordering (ECR) is a text-based document security model. ECR uses a random key to generate the ciphertext; previously, that random key was selected by hand. This research aims to increase the level of document security in the ECR mechanism. This paper proposes a new method that draws the random key from a permutated table, generated automatically by a function. Entropy is used to measure the security level of the encrypted documents. Experiments show that the permutated table within ECR yields higher entropy values, implying a better security level. The permutated table also makes ECR easier to use for securing documents.
Cancer Stage Prediction From Gene Expression Data Using Weighted Graph Convolution Network
A. Elmahy, S. Aly, F. Elkhwsky
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590177

Early detection of the cancer stage is a crucial step for effective treatment. In contrast to traditional approaches, RNA-Seq is the current state-of-the-art technique for gene expression estimation, and RNA-Seq data have been used in research and production as input for classification and prediction models in many diseases, including cancer staging. We present a novel cancer stage prediction approach based on gene expression data, built on weighted graph convolution networks (GCN), the application of deep learning back-propagation to graph structures. We used correlation between genes to generate a gene network graph, then trained a neural network with a weighted graph convolution layer to predict each patient's cancer stage. We employed the Kidney Renal Clear Cell Carcinoma dataset (TCGA-KIRC) from The Cancer Genome Atlas (TCGA). TCGA-KIRC includes four cancer stages, I through IV; we formed a binary classification problem in which stages I and II are labeled "early cancer stage" and stages III and IV "late cancer stage". Compared with state-of-the-art approaches such as random forest and support vector machine, our approach achieved an accuracy of 82%, outperforming them by more than 3%.
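The two core steps above, building a weighted gene graph from pairwise correlation and applying a graph-convolution layer, can be sketched with NumPy. The expression values, correlation threshold, and layer width below are invented for illustration (the TCGA-KIRC data is controlled-access), and the normalization shown is the common Kipf-Welling form, which the abstract does not confirm the authors used.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy gene-expression matrix: 6 samples x 5 genes (stand-in values).
X = rng.normal(size=(6, 5))

# Weighted gene graph from pairwise gene-gene correlation, keeping edges
# whose |correlation| exceeds a threshold, as the abstract describes.
corr = np.corrcoef(X, rowvar=False)            # 5x5 gene correlation matrix
A = np.where(np.abs(corr) > 0.3, np.abs(corr), 0.0)
np.fill_diagonal(A, 0.0)

# One graph-convolution layer with symmetric normalization:
#   H' = relu( D^-1/2 (A + I) D^-1/2  H  W )
A_hat = A + np.eye(A.shape[0])
d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
A_norm = d_inv_sqrt[:, None] * A_hat * d_inv_sqrt[None, :]

H = X.T                                        # node features: one row per gene
W = rng.normal(size=(H.shape[1], 4))           # layer weights (random, untrained)
H_out = np.maximum(A_norm @ H @ W, 0.0)        # ReLU activation
print(H_out.shape)                             # 5 gene nodes x 4 hidden features
```

In the full model, W would be learned by back-propagation and the node embeddings pooled into a per-patient early/late prediction.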
Enhanced K-Means Clustering Approach for Diagnosis Types of Acne
C. Hayat
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590124

Acne is a skin disorder that almost everyone, women and men alike, experiences. Proper treatment determines how quickly a patient becomes acne-free, yet early diagnosis still depends heavily on doctors conducting physical skin examinations. This research therefore develops a K-means clustering model for early diagnosis of the types of acne experienced by patients. The K-means clustering algorithm proceeds as follows: (a) determine the number of clusters; (b) allocate the data to clusters at random; (c) calculate the centroid of each cluster; (d) allocate each data point to the nearest centroid; (e) repeat the centroid calculation while data points still move from one cluster to another. The K-means model produces four output categories according to acne severity: no acne (16.12%), mild acne (29.03%), moderate acne (32.25%), and severe acne (22.60%).
Combination of Isolation Forest and LSTM Autoencoder for Anomaly Detection
Celvin Yota Priyanto, Hendry, H. Purnomo
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590143

Land monitoring is important in agriculture. Early-warning information about land conditions enables farmers to respond quickly when an anomalous condition occurs. However, identifying anomalies in land conditions is not a simple task. This research proposes an anomaly detection model for a land monitoring system. Raw data collected from land monitoring sensors are used as the dataset. Isolation Forest transforms the unlabeled data into labeled data, and the labeled dataset is then used to build an anomaly detection model with a Long Short-Term Memory (LSTM) autoencoder. The experimental results show that Isolation Forest has the potential to label the data, and the LSTM autoencoder achieves accuracy 0.95, precision 0.96, recall 0.99, and F1-score 0.97.
Biomedical Image Quality Improvement with Spatial Domains
Erick Fernando, Pandapotan Siagian
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590175

Digital image processing aims to improve the quality of an original image so that it displays relatively better than the original and yields the detailed information needed for analysis. Biomedical images typically degrade during computerization through changes in brightness, blur, and contrast stretching; because of this deterioration, doctors and patients cannot obtain the information needed for analysis. Biomedical image processing techniques using spatial-domain methods are therefore needed to improve the quality and detail of the information in the image and so help doctors in diagnosis. The data analyzed are biomedical images: X-ray images, CT (computed tomography) scan images, and ultrasound (USG) images. Spatial-domain digital image processing manipulates pixel values in the image directly, in the form of point processing and mask processing. Testing the blur level of each image gave 0.98436 for CT-scan images, 0.9875 for ultrasound images, and 0.9836 for X-ray images; a blur-level value approaching 1 means the image becomes clearer. The results prove that the spatial-domain method can clarify the image objects in this study, and the proposed model can improve the quality of images degraded by the digitization process and assist users in analyzing biomedical images.
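"Point processing" as described above means applying a function to each pixel independently; contrast stretching is the classic example. A minimal sketch on a synthetic low-contrast array (the paper's actual scans and operations are not reproduced):

```python
import numpy as np

def contrast_stretch(img: np.ndarray, lo: int = 0, hi: int = 255) -> np.ndarray:
    """Point-processing operation: linearly rescale pixel values to [lo, hi]."""
    i_min, i_max = img.min(), img.max()
    out = (img.astype(float) - i_min) / (i_max - i_min) * (hi - lo) + lo
    return out.round().astype(np.uint8)

# Stand-in low-contrast "scan": 8x8 pixels squeezed into the range [100, 140].
rng = np.random.default_rng(0)
dull = rng.integers(100, 141, size=(8, 8)).astype(np.uint8)

bright = contrast_stretch(dull)
print(dull.min(), dull.max(), "->", bright.min(), bright.max())
```

Mask processing, the other spatial-domain family named in the abstract, would instead combine each pixel with its neighbors (e.g. a 3x3 averaging or sharpening kernel) rather than transform pixels one at a time.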
Measurement of Detection Rate Accuracy in Forecasting Crude Palm Oil Production using Fuzzy Time Series
Arif Ridho Lubis, S. Prayudani, Y. Fatmi, Al-Khowarizmi, Julham, Y. Y. Lase
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590172

Time series analysis is a strong method for predicting the future from past data, and it is used across many businesses to forecast for profit. Time-series data provide visualization with the statistical explanations needed for business decisions. One business that serves the needs of all sectors is the crude palm oil (CPO) commodity industry, where the CPO price can be forecast with time series because the data form a series over actual time. In this paper, 599 CPO price records were crawled, covering September 10, 2019 to April 30, 2021, and divided into 560 training records and 39 testing records. Forecast accuracy was measured using MAPE: time-series forecasting of CPO prices gave a MAPE of 0.01781302%, while MAPE combined with the detection rate gave 0.501031843%. This indicates that, when forecasting CPO prices with time series, the best accuracy is obtained by MAPE alone, without combining it with other techniques.
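The MAPE metric used above is a simple pure-Python computation. The price values below are made-up stand-ins (the crawled 599-record CPO series is not shown in the abstract):

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent: lower is more accurate."""
    return sum(abs((a - f) / a) for a, f in zip(actual, forecast)) / len(actual) * 100

# Hypothetical CPO price series (actual vs. forecast), for illustration only.
actual   = [8500, 8620, 8710, 8650, 8800]
forecast = [8480, 8600, 8750, 8700, 8760]

print(f"MAPE: {mape(actual, forecast):.3f}%")
```

Note MAPE is undefined when an actual value is zero, which is not a concern for commodity prices but matters for general series.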
Alphabetical Author Index
Pub Date: 2021-09-23 | DOI: 10.1109/icitech50181.2021.9590106
Comparison of Capacitated Vehicle Routing Problem Using Initial Route and Without Initial Route for Pharmaceuticals Distribution
Jessi Febria, Christine Dewi, Evangs Mailoa
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590116

Demand for pharmaceuticals increased during the second wave occurring in Indonesia. This paper develops pharmaceutical distribution for hospitals in Central Java, modeled as a Capacitated Vehicle Routing Problem (CVRP). The proposed method uses same-size K-means clustering and a greedy algorithm to build an initial route, producing a clustered route for each vehicle. Each cluster is then re-optimized with the Guided Local Search metaheuristic from Google OR-Tools. The paper shows that using the initial route reduces runtime by 98.91% compared with running without one, because same-size K-means breaks the problem into parts and the greedy algorithm reduces the number of candidate routes. However, the total distance increased by 8.11%, because no cluster member is allowed to move to another cluster.
Design and Validity Test of The Disaster Mitigation Information System Using EUCS Method
K. Hartomo, Arnoldi Dea Tesa Hernanda
Pub Date: 2021-09-23 | DOI: 10.1109/ICITech50181.2021.9590163

Natural disasters cause loss of lives and property throughout Indonesia. So far, coordination and disaster mitigation have not been well integrated between disaster agencies and stakeholders, so negative impacts cannot be minimized. This research proposes an automated disaster mitigation system to increase the effectiveness of emergency response and aid after a natural disaster strikes, with the validity of the system's performance tested using the End-User Computing Satisfaction (EUCS) method. The disaster mitigation information system receives disaster event data as input, distributes the disaster information to agencies and stakeholders, and finally lets stakeholders respond with aid through the system, so that aid is synchronized and well organized. The validity test using the EUCS method gives a total satisfaction score of 3.4 out of 4, indicating that the disaster mitigation information system falls into the category of a satisfactory system acceptable to its users.
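A score like "3.4 out of 4" in EUCS studies is typically an average over Likert responses grouped by the instrument's five dimensions (content, accuracy, format, ease of use, timeliness). The responses below are entirely hypothetical, and the abstract does not state the authors' exact aggregation, so this is only one plausible way such a total is formed:

```python
# Hypothetical responses on a 1-4 Likert scale, grouped by the five standard
# EUCS dimensions; the study's questionnaire data is not public.
responses = {
    "content":     [4, 3, 4, 3],
    "accuracy":    [3, 4, 3, 4],
    "format":      [4, 4, 3, 3],
    "ease_of_use": [3, 3, 4, 4],
    "timeliness":  [3, 4, 3, 3],
}

dimension_means = {d: sum(v) / len(v) for d, v in responses.items()}
total = sum(dimension_means.values()) / len(dimension_means)

print({d: round(m, 2) for d, m in dimension_means.items()})
print(f"total satisfaction score: {total:.2f} / 4")
```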