Survey on Deep Learning Based Intrusion Detection System
Omar Muhammad Altoumi Alsyaibani, Ema Utami, A. D. Hartanto
Pub Date: 2021-08-26 | DOI: 10.35671/telematika.v14i2.1317

The development of computer networks has changed human life in many ways. Everyone is now connected to everyone else from everywhere, and information can be accessed easily. This massive development has to be matched by good security systems. An Intrusion Detection System (IDS) is an important network-security component capable of monitoring hardware and software in a computer network. Researchers have developed IDSs continuously and have faced many challenges, for instance low detection accuracy, the emergence of new types of malicious traffic, and high error rates. One of the ways researchers have tried to overcome these problems is Deep Learning, a branch of Machine Learning, which is the subject of this paper. Machine Learning is itself a rapidly growing branch of Artificial Intelligence, and several studies have shown that Machine Learning and Deep Learning provide very promising results for developing IDSs. This paper presents an overview of IDSs in general, the Deep Learning models most often used by researchers, the available datasets, and the challenges researchers will face ahead.
Data Mining Method to Determine a Fisherman's Sailing Schedule Using Website
Dwi Ayu Mutiara, Alung Susli, Didit Suhartono, Dani Arifudin, Imam Tahyudin
Pub Date: 2021-08-26 | DOI: 10.35671/TELEMATIKA.V14I2.1193

Some of the people of Cilacap live in coastal areas as fishermen who rely on seafood to meet their daily needs. One resource supporting fishermen at sea is information from the Meteorological, Climatological, and Geophysical Agency (BMKG), such as wind speed and wave height, which is important for safety. To address this problem, this research determines fishermen's sailing schedules using a data mining method delivered through a website. The proposed method uses the Support Vector Machine (SVM) classification algorithm on data from BMKG Cilacap covering 2015 to 2017. The test data consist of 30% of the overall data, drawn at random. Model testing yields an accuracy of 88%, precision of 87%, and recall of 89%. The solution is completed by building a website so that sailing information is easy to access; the researchers therefore created a fisherman sailing scheduling website based on the SVM algorithm.
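The abstract reports the SVM classifier's metrics but not its mechanics. As an illustration only, a minimal soft-margin linear SVM trained by hinge-loss sub-gradient descent might look like the sketch below; the wind-speed/wave-height feature names and the toy data are assumptions for illustration, not the paper's BMKG dataset:

```python
import random

def train_linear_svm(X, y, lam=1e-4, lr=0.01, epochs=500, seed=0):
    """Minimal soft-margin linear SVM via hinge-loss sub-gradient descent.
    X: feature vectors (here [wind_speed, wave_height]),
    y: labels in {-1, +1} (+1 = safe to sail)."""
    rng = random.Random(seed)
    w = [0.0] * len(X[0])
    b = 0.0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # weight decay from the L2 regularizer
            w = [wj - lr * lam * wj for wj in w]
            if margin < 1:  # hinge loss is active: push the margin out
                w = [wj + lr * y[i] * xj for wj, xj in zip(w, X[i])]
                b += lr * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Hypothetical BMKG-style samples: [wind speed (m/s), wave height (m)]
X = [[2, 0.5], [3, 0.8], [2.5, 0.6], [9, 2.5], [10, 3.0], [8, 2.2]]
y = [1, 1, 1, -1, -1, -1]  # +1 = safe, -1 = unsafe
w, b = train_linear_svm(X, y)
```

A production system would more likely use a tested library implementation (e.g. a kernel SVM) rather than this from-scratch linear variant.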
Topic Modeling of Online Media News Titles during COVID-19 Emergency Response in Indonesia Using the Latent Dirichlet Allocation (LDA) Algorithm
M. D. R. Wahyudi, A. Fatwanto, Usfita Kiftiyani, M. G. Wonoseto
Pub Date: 2021-08-26 | DOI: 10.35671/telematika.v14i2.1225

Online news portals have the advantage of speed in conveying information about events in society. One way to know what a story is about is its title: a headline introduces the reader to the news content that follows, and from headlines one can extract the main topics or trends under discussion. A fast and efficient method is needed to find out which topics are trending in the news, and topic modeling addresses this problem by helping users quickly understand recent issues. One topic modeling algorithm is Latent Dirichlet Allocation (LDA). The stages of this research are data collection, preprocessing, n-gram formation, dictionary representation, weighting, validation of the topic model, formation of the topic model, and presentation of the results. LDA topic modeling on news headlines taken from www.detik.com over 8 months (March-October 2020) during the COVID-19 pandemic showed that the best number of topics each month was 3, dominated by news about corona cases, positive corona, positive COVID, and COVID-19, with an accuracy of 0.824 (82.4%). The resulting precision and recall values are identical, which is ideal for an information retrieval system.
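The LDA stage of the pipeline above can be sketched with a minimal collapsed Gibbs sampler. The toy headlines, hyperparameters, and three-word topic summaries below are illustrative assumptions, not the paper's detik.com data or its validated model:

```python
import random

def lda_gibbs(docs, n_topics, n_iter=200, alpha=0.1, beta=0.01, seed=0):
    """Minimal collapsed Gibbs sampler for LDA.
    docs: list of token lists (e.g. preprocessed headlines)."""
    rng = random.Random(seed)
    vocab = sorted({w for d in docs for w in d})
    V = len(vocab)
    wid = {w: i for i, w in enumerate(vocab)}
    # Count tables: document-topic, topic-word, topic totals
    ndk = [[0] * n_topics for _ in docs]
    nkw = [[0] * V for _ in range(n_topics)]
    nk = [0] * n_topics
    z = []  # topic assignment for every token
    for d, doc in enumerate(docs):
        zs = []
        for w in doc:
            k = rng.randrange(n_topics)
            zs.append(k)
            ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
        z.append(zs)
    for _ in range(n_iter):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                k = z[d][i]
                ndk[d][k] -= 1; nkw[k][wid[w]] -= 1; nk[k] -= 1
                # Full conditional p(z = k | everything else)
                weights = [(ndk[d][j] + alpha) * (nkw[j][wid[w]] + beta)
                           / (nk[j] + V * beta) for j in range(n_topics)]
                r = rng.random() * sum(weights)
                k = n_topics - 1
                for j, wt in enumerate(weights):
                    r -= wt
                    if r <= 0:
                        k = j
                        break
                z[d][i] = k
                ndk[d][k] += 1; nkw[k][wid[w]] += 1; nk[k] += 1
    # Summarize each topic by its three highest-count words
    top_words = [[vocab[i] for i in sorted(range(V), key=lambda i: -nkw[k][i])[:3]]
                 for k in range(n_topics)]
    return z, top_words

docs = [["corona", "case", "jakarta"], ["corona", "positive", "case"],
        ["election", "vote", "jakarta"], ["election", "vote", "campaign"]]
z, top_words = lda_gibbs(docs, n_topics=2)
```

In practice a library such as Gensim, which also supports the n-gram and dictionary steps listed above, would be the usual choice.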
Good Morning to Good Night Greeting Classification Using Mel Frequency Cepstral Coefficient (MFCC) Feature Extraction and Frame Feature Selection
H. Heriyanto
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.4495

Purpose: Select the right features on the frame for good accuracy.
Design/methodology/approach: Extraction of Mel Frequency Cepstral Coefficient (MFCC) features and selection of Dominant Weight Normalized (DWN) features.
Findings/result: The results show that the MFCC method with selection of the 9th frame reaches an accuracy of 85%, higher than any other frame.
Originality/value/state of the art: Selection of the appropriate features on the frame.
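The frame-selection idea above rests on splitting the signal into short windowed frames, computing one MFCC vector per frame, and then keeping only the selected frame's features. The framing step can be sketched as follows; the frame length, hop size, and sine test signal are assumed values, not the paper's configuration:

```python
import math

def frame_signal(signal, frame_len=400, hop=160):
    """Split a signal into overlapping frames and apply a Hamming window --
    the step that precedes the FFT, mel filterbank, and DCT in MFCC
    extraction (400 samples / 160 hop = 25 ms / 10 ms at 16 kHz)."""
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        windowed = [s * (0.54 - 0.46 * math.cos(2 * math.pi * i / (frame_len - 1)))
                    for i, s in enumerate(frame)]
        frames.append(windowed)
    return frames

# Toy 1-second 440 Hz tone at 16 kHz; in the paper each frame would yield
# an MFCC vector, and the 9th frame's features are the ones selected.
signal = [math.sin(2 * math.pi * 440 * t / 16000) for t in range(16000)]
frames = frame_signal(signal)
ninth = frames[8]  # index 8 == the 9th frame
```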
Sentiment Analysis On YouTube Comments Using Word2Vec and Random Forest
S. Khomsah
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.4493

Purpose: This study aims to determine the accuracy of sentiment classification using Random Forest, with Word2Vec Skip-gram used for feature extraction. Word2Vec is an effective method for representing aspects of word meaning, and it helps to improve sentiment classification accuracy.
Methodology: The research data consist of 31,947 comments downloaded from YouTube videos of the 2019 presidential election debate: 23,612 positive and 8,335 negative comments. To avoid bias, the positive and negative classes are balanced using oversampling. Skip-gram is used to extract word features: it produces features from the words surrounding the context (input) word, each carrying a weight, and the feature weight of each comment is calculated with an average-based approach. Random Forest is used to build the sentiment classification model. Experiments were carried out several times with different epoch and window parameters, and the performance of each model was measured by cross-validation.
Result: Experiments using epochs of 1, 5, and 20 and window sizes of 3, 5, and 10 obtain an average model accuracy of 90.1% to 91%, while testing accuracy reaches between 88.77% and 89.05%; the testing accuracy is slightly lower than the model accuracy, but the difference is not significant. For the next experiment, using more than twenty epochs and a window size larger than ten is recommended so that accuracy increases significantly.
Value: The number of epochs and the window size of the Skip-gram affect accuracy; larger values increase it.
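The Skip-gram windowing and the average-based comment weighting described above can be sketched as below; the toy comment and the two-dimensional word vectors are illustrative assumptions, not trained embeddings:

```python
def skipgram_pairs(tokens, window=3):
    """Generate (target, context) training pairs as in Word2Vec Skip-gram:
    each word predicts the words within `window` positions around it."""
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

def comment_vector(tokens, vectors):
    """Average the word vectors of a comment into one feature vector --
    the average-based weighting the abstract describes."""
    dims = len(next(iter(vectors.values())))
    out = [0.0] * dims
    for t in tokens:
        for d, v in enumerate(vectors.get(t, [0.0] * dims)):
            out[d] += v
    return [v / len(tokens) for v in out]

comment = ["the", "debate", "was", "very", "informative"]
pairs = skipgram_pairs(comment, window=2)
```

The averaged vectors would then be the feature matrix fed to the Random Forest classifier.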
VGG16 Transfer Learning Architecture for Salak Fruit Quality Classification
Rismiyati Rismiyati, Ardytha Luthfiarta
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.4025

Purpose: This study aims to differentiate the quality of salak (snake fruit) with machine learning, classifying the fruit into two classes, good and bad.
Design/methodology/approach: The algorithm used is transfer learning with the VGG16 architecture. The dataset consists of 370 salak images, 190 from the good class and 180 from the bad class. Each image is preprocessed by resizing and normalizing its pixel values, then the set is split into 80% training data and 20% testing data. The training data are used to train a pretrained VGG16 model, varying the epoch, momentum, and learning rate; accuracy, precision, and recall are monitored to determine the best model, which is then used for testing.
Findings/result: The highest accuracy obtained is 95.83%, with a learning rate of 0.0001 and momentum of 0.9. The precision and recall of this model are 97.2% and 94.6%.
Originality/value/state of the art: The use of transfer learning to classify salak, which has not been done before.
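The preprocessing and 80/20 split described above might be sketched as follows. The study would then fine-tune a pretrained VGG16 (e.g. via a deep learning framework) on the training split; that part is omitted here to keep the sketch framework-free, and the 224-pixel target size and toy grayscale image are assumptions:

```python
import random

def preprocess(image, size=224):
    """Nearest-neighbour resize and scale pixel values to [0, 1] --
    the resizing/normalization applied before feeding images to VGG16."""
    h, w = len(image), len(image[0])
    resized = [[image[int(r * h / size)][int(c * w / size)] for c in range(size)]
               for r in range(size)]
    return [[px / 255.0 for px in row] for row in resized]

def train_test_split(items, test_frac=0.2, seed=0):
    """Shuffle and split 80/20 as described in the abstract."""
    rng = random.Random(seed)
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * (1 - test_frac))
    return shuffled[:cut], shuffled[cut:]

image = [[128] * 64 for _ in range(64)]      # toy 64x64 grayscale image
x = preprocess(image)
train, test = train_test_split(list(range(370)))  # 370 salak images
```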
Development of Applications for Simplification of Boolean Functions using Quine-McCluskey Method
Eko Dwi Nugroho
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.3195
Received: 21 January 2020 | Revised: 31 March 2020 | Accepted: 27 January 2021 | Published: 28 February 2021

Purpose: This research develops an application that simplifies Boolean functions using the Quine-McCluskey method, because a long Boolean function complicates the digital circuit; finding an equivalent, more efficient function makes the circuit simpler and cheaper.
Design/methodology/approach: The input is a canonical Sum-of-Products or Product-of-Sums form supplied as a file; the output is produced both on screen and as a file. The application can accept duplicate minterm/maxterm input, which does not have to be in sequence. The method is combined with the idempotent law, Petrick's method, selection sort, and classification, so that simplification is maximized.
Findings/result: The application simplifies more optimally than previous studies, accepts duplicate minterm/maxterm input and the Product-of-Sums canonical form, and has been verified by simplifying and calculating manually.
Originality/value/state of the art: Applying Petrick's method in an application that can also accept duplicate minterm/maxterm input has not been done before; previous work either computed only up to an intermediate stage of the Quine-McCluskey method or could not accept duplicate minterm/maxterm input.
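The core Quine-McCluskey step, repeatedly merging implicants that differ in exactly one bit until only prime implicants remain, can be sketched as below. The three-variable example function is illustrative; the paper's application additionally applies Petrick's method for cover selection and handles Product-of-Sums input:

```python
from itertools import combinations

def combine(a, b):
    """Merge two implicants (bit strings with '-' for eliminated variables)
    if they differ in exactly one position -- the core QM merging rule."""
    diff = [i for i, (x, y) in enumerate(zip(a, b)) if x != y]
    if len(diff) == 1:
        i = diff[0]
        return a[:i] + "-" + a[i + 1:]
    return None

def prime_implicants(minterms, n_bits):
    """Repeatedly combine implicants until none merge; survivors are primes."""
    group = {format(m, f"0{n_bits}b") for m in minterms}
    primes = set()
    while group:
        used, nxt = set(), set()
        for a, b in combinations(sorted(group), 2):
            m = combine(a, b)
            if m:
                used.update({a, b})
                nxt.add(m)
        primes |= group - used   # anything never merged is prime
        group = nxt
    return primes

# f(A,B,C) = sum of minterms (0,1,2,3,7), which simplifies to A' + BC
primes = prime_implicants([0, 1, 2, 3, 7], 3)
```

Here `"0--"` encodes A' and `"-11"` encodes BC; a cover-selection step (prime implicant chart, Petrick's method) would then pick the minimal subset.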
Implementation Of Text Mining For Emotion Detection Using The Lexicon Method (Case Study: Tweets About Covid-19)
A. Aribowo, S. Khomsah
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.4341

Information and news about Covid-19 received various responses from social media users, including Twitter users. Changes in netizen opinion over time are interesting to analyze, especially the patterns of public sentiment and emotion contained in those opinions, since sentiment and emotional conditions can illustrate the public's response to the Covid-19 pandemic in Indonesia. This research has two objectives: first, to reveal the types of public emotion that emerged during the Covid-19 pandemic in Indonesia; second, to reveal the topics or words that appear most frequently in each emotion class. Seven types of emotion are detected: anger, fear, disgust, sadness, surprise, joy, and trust. The dataset consists of Indonesian-language tweets downloaded from April to August 2020. Emotional features are extracted with a lexicon-based method using the EmoLex dictionary. The result is a monthly graph of public emotional conditions related to the Covid-19 pandemic in the dataset.
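The lexicon-based extraction might be sketched as follows; the miniature word-to-emotion mapping and the sample tweets are placeholders for the full NRC EmoLex dictionary and the downloaded dataset:

```python
from collections import Counter

# Tiny illustrative lexicon in EmoLex's word -> emotions style;
# the real NRC EmoLex maps thousands of words to emotion categories.
LEXICON = {
    "takut": ["fear"], "sedih": ["sadness"], "marah": ["anger"],
    "senang": ["joy"], "percaya": ["trust"], "kaget": ["surprise"],
}

def emotion_counts(tweets):
    """Count emotion-word hits per emotion class over tokenized tweets --
    the core of lexicon-based emotion feature extraction."""
    counts = Counter()
    for tweet in tweets:
        for token in tweet.lower().split():
            for emo in LEXICON.get(token, []):
                counts[emo] += 1
    return counts

tweets = ["saya takut corona", "kabar senang vaksin datang",
          "sedih sekali hari ini"]
counts = emotion_counts(tweets)
```

Aggregating such counts per month would produce the monthly emotion graph the abstract describes.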
The Determinant Analysis of the Utilization of Google Classroom as the E-Learning Facility in Yogyakarta Nahdlatul Ulama University
Pipit Febriana Dewi, Anis Susila Abadi
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.3968
Received: 24 November 2020 | Revised: 12 January 2021 | Accepted: 28 January 2021 | Published: 28 February 2021

Purpose: To find out which factors cause lecturers and students to adopt, or refuse to adopt, Google Classroom as a means of e-learning at Nahdlatul Ulama University Yogyakarta.
Design/methodology/approach: This research was conducted using a qualitative approach to capture the meaning of a phenomenon. Innovation Diffusion Theory is used as the basis for examining the role of Google Classroom as a means of e-learning and its suitability at Nahdlatul Ulama University Yogyakarta.
Findings/result: The adoption factors consisted of synchronization of students' and lecturers' email with Google, integration with other Google features, efficiency of funds, time, and place, availability as an alternative e-learning channel, facility evaluation, support for the teaching and learning process, communication between lecturers and students, and visibility of late assignment submissions. The rejection factors included limited ownership of electronic devices, limited knowledge, Internet connectivity, and the absence of an attendance facility.
Originality/value/state of the art: The factors that lead lecturers and students to adopt or refuse Google Classroom as a means of e-learning at Nahdlatul Ulama University Yogyakarta.
Prediction Of Drug Sales Using Methods Forecasting Double Exponential Smoothing (Case Study: Hospital Pharmacy of Condong Catur)
Annesa Maya Sabarina, Heru Cahya Rustamaji, Hidayatulah Himawan
Pub Date: 2021-03-16 | DOI: 10.31315/TELEMATIKA.V18I1.4586
Received: 12 December 2020 | Revised: 12 January 2021 | Accepted: 30 January 2021 | Published: 28 February 2021

Purpose: To find the best alpha value for each type of drug across various alpha parameters of the Double Exponential Smoothing method, and to obtain predictions for each type of drug's data at the Condong Catur Hospital pharmacy.
Design/methodology/approach: Applying the Double Exponential Smoothing method with alpha parameters 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9.
Findings/result: Tests on the system built with the test data show that the Double Exponential Smoothing method achieves a forecasting error below 20%, producing a different alpha (α) for each type of drug because the trend pattern of each drug's sales at the Condong Catur Hospital pharmacy differs.
Originality/value/state of the art: This study shares characteristics such as theme, parameters, and method with previous research, which also used smoothing methods such as Double Exponential Smoothing to predict stock and sales of goods.
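The method above can be sketched with Brown's one-parameter double exponential smoothing swept over the paper's alpha grid. The monthly sales series below is hypothetical, and MAPE is assumed here as the error measure for picking the best alpha per drug:

```python
def brown_des(series, alpha):
    """Brown's double exponential smoothing (single parameter, alpha).
    Returns one-step-ahead forecasts aligned with series[1:]."""
    s1 = s2 = series[0]
    forecasts = []
    for x in series:
        s1 = alpha * x + (1 - alpha) * s1      # first smoothing
        s2 = alpha * s1 + (1 - alpha) * s2     # second smoothing
        level = 2 * s1 - s2
        trend = alpha / (1 - alpha) * (s1 - s2)
        forecasts.append(level + trend)        # forecast for the next period
    return forecasts[:-1]

def mape(actual, forecast):
    """Mean absolute percentage error, a common forecast accuracy measure."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

# Hypothetical monthly sales for one drug; sweep the paper's alpha grid
sales = [120, 132, 145, 150, 161, 170, 184, 190]
alphas = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]
errors = {a: mape(sales[1:], brown_des(sales, a)) for a in alphas}
best_alpha = min(errors, key=errors.get)
```

Running this sweep per drug reproduces the paper's procedure of selecting a different alpha for each drug's trend pattern.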