
Latest Publications from Telematika

Survey on Deep Learning Based Intrusion Detection System
Pub Date : 2021-08-26 DOI: 10.35671/telematika.v14i2.1317
Omar Muhammad Altoumi Alsyaibani, Ema Utami, A. D. Hartanto
The development of computer networks has changed human lives in many ways. Today, everyone is connected to everyone else from everywhere, and information can be accessed easily. This massive development has to be accompanied by a good security system. An Intrusion Detection System is an important device in network security that is capable of monitoring hardware and software in a computer network. Many researchers have continuously developed Intrusion Detection Systems and have faced many challenges, for instance: low detection accuracy, the emergence of new types of malicious traffic, and high error detection rates. Researchers have tried to overcome these problems in many ways; one of them is using Deep Learning, a branch of Machine Learning, to develop Intrusion Detection Systems, and this is what is discussed in this paper. Machine Learning itself is a branch of Artificial Intelligence that is currently growing rapidly. Several studies have shown that Machine Learning and Deep Learning provide very promising results for developing Intrusion Detection Systems. This paper presents an overview of Intrusion Detection Systems in general, the Deep Learning models often used by researchers, the available datasets, and the challenges researchers will face ahead.
Citations: 0
Data Mining Method to Determine a Fisherman's Sailing Schedule Using Website
Pub Date : 2021-08-26 DOI: 10.35671/TELEMATIKA.V14I2.1193
Dwi Ayu Mutiara, Alung Susli, Didit Suhartono, Dani Arifudin, Imam Tahyudin
Some of the people of Cilacap live in coastal areas as fishermen who rely on seafood to meet their daily needs. One source of support for fishermen at sea is information from the Meteorological, Climatological, and Geophysical Agency (BMKG); information such as wind speed and wave height is important for safety. To address this problem, research was conducted to determine fishermen's sailing schedules using a website-based data mining method. The proposed method uses the Support Vector Machine (SVM) classification algorithm. This research uses data from BMKG Cilacap from 2015 to 2017. The test data consist of 30% of the overall data, selected at random. Model testing produced an accuracy of 88%, a precision of 87%, and a recall of 89%. The solution is completed by building a website so that sailing information is easy to access. The researchers therefore created a website for a fisherman sailing scheduling system based on the SVM algorithm.
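The pipeline described above — BMKG weather features, a random 30% test split, and an SVM classifier evaluated with accuracy, precision, and recall — can be sketched with scikit-learn as below. The file name, feature columns, and label column are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the SVM sailing-schedule classifier described above.
# Column names and file path are assumed for illustration only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_score, recall_score

df = pd.read_csv("bmkg_cilacap_2015_2017.csv")        # assumed file
X = df[["wind_speed", "wave_height"]]                 # assumed feature columns
y = df["safe_to_sail"]                                # assumed binary label (0/1)

# 30% of the data is held out for testing, as in the paper.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

scaler = StandardScaler().fit(X_train)
model = SVC(kernel="rbf").fit(scaler.transform(X_train), y_train)

y_pred = model.predict(scaler.transform(X_test))
print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred))
print("recall   :", recall_score(y_test, y_pred))
```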
Citations: 0
Topic Modeling of Online Media News Titles during COVID-19 Emergency Response in Indonesia Using the Latent Dirichlet Allocation (LDA) Algorithm
Pub Date : 2021-08-26 DOI: 10.35671/telematika.v14i2.1225
M. D. R. Wahyudi, A. Fatwanto, Usfita Kiftiyani, M. G. Wonoseto
Online media news portals have the advantage of speed in conveying information about events that occur in society. One way to know what a story is about is from its title: the headline introduces the reader to the news content that will be described. From these headlines, the main topics or trends being discussed can be identified, and a fast, efficient method is needed to find out which topics are trending in the news. One method that can be used for this is topic modeling, which helps users quickly understand recent issues. One of the algorithms for topic modeling is Latent Dirichlet Allocation (LDA). The stages of this research were data collection, preprocessing, forming n-grams, dictionary representation, weighting, validating the topic model, forming the topic model, and reporting the topic modeling results. Modeling LDA topics in news headlines taken from www.detik.com over 8 months (March-October 2020) during the COVID-19 pandemic showed that the best number of topics produced each month was 3, dominated by news topics about corona cases, positive corona, positive COVID, and COVID-19, with an accuracy of 0.824 (82.4%). The resulting precision and recall values are identical, which is ideal for an information retrieval system.
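The stages listed in this abstract (preprocessing, n-grams, dictionary representation, weighting, LDA) map closely onto gensim's API. A minimal sketch is given below, assuming the headlines have already been tokenized; the loader function is hypothetical, and only the choice of 3 topics comes from the abstract.

```python
# Minimal LDA topic-modeling sketch with gensim; the input data is assumed.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, Phrases

# Hypothetical helper returning tokenized news titles, e.g.
# [["kasus", "corona", "bertambah"], ["positif", "covid", "turun"], ...]
headlines = load_tokenized_headlines()

bigram = Phrases(headlines, min_count=5)              # form n-grams
docs = [bigram[doc] for doc in headlines]

dictionary = Dictionary(docs)                         # dictionary representation
dictionary.filter_extremes(no_below=5, no_above=0.5)
corpus = [dictionary.doc2bow(doc) for doc in docs]    # bag-of-words weighting

lda = LdaModel(corpus=corpus, id2word=dictionary,
               num_topics=3, passes=10, random_state=42)

for topic_id, words in lda.print_topics(num_words=8):
    print(topic_id, words)
```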
Citations: 6
Good Morning to Good Night Greeting Classification Using Mel Frequency Cepstral Coefficient (MFCC) Feature Extraction and Frame Feature Selection
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.4495
H. Heriyanto
Purpose: Select the right features on the frame for good accuracy. Design/methodology/approach: Extraction of Mel Frequency Cepstral Coefficient (MFCC) features and selection of Dominant Weight Normalized (DWN) features. Findings/result: The accuracy results show that the MFCC method with the 9th frame selected achieves a higher accuracy rate, 85%, than other frames. Originality/value/state of the art: Selection of the appropriate features on the frame.
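As a rough illustration of frame-level MFCC extraction with a single selected frame, a librosa-based sketch is shown below. The audio file, sampling rate, MFCC settings, and the 0-based index used for the "9th frame" are assumptions, and the paper's Dominant Weight Normalized (DWN) feature selection is not reproduced.

```python
# Sketch: extract MFCC features per frame and keep one selected frame.
# File path, frame index, and MFCC settings are illustrative assumptions.
import librosa
import numpy as np

y, sr = librosa.load("greeting_sample.wav", sr=16000)   # assumed audio file

# MFCC matrix has shape (n_mfcc, n_frames); each column is one frame's features.
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

selected_frame = 8              # 0-based index for the "9th frame" in the abstract
if mfcc.shape[1] > selected_frame:
    feature_vector = mfcc[:, selected_frame]
else:                           # short clips: fall back to the mean over frames
    feature_vector = mfcc.mean(axis=1)

print(feature_vector.shape)     # (13,) feature vector passed on to a classifier
```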
Citations: 3
Sentiment Analysis On YouTube Comments Using Word2Vec and Random Forest
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.4493
S. Khomsah
Purpose: This study aims to determine the accuracy of sentiment classification using Random Forest, with Word2Vec Skip-gram used for feature extraction. Word2Vec is an effective method for representing aspects of word meaning, and it helps to improve sentiment classification accuracy. Methodology: The research data consist of 31947 comments downloaded from the YouTube channel for the 2019 presidential election debate. The dataset consists of 23612 positive comments and 8335 negative comments. To avoid bias, we balance the amount of positive and negative data using oversampling. We use Skip-gram to extract word features: Skip-gram produces several features around the context (input) word, and each of these features carries a weight. The feature vector of each comment is calculated by an average-based approach. Random Forest is used to build the sentiment classification model. Experiments were carried out several times with different epoch and window parameters, and the performance of each model was measured by cross-validation. Result: Experiments using epochs of 1, 5, and 20 and window sizes of 3, 5, and 10 obtain an average model accuracy of 90.1% to 91%, while testing reaches an accuracy between 88.77% and 89.05%; the test accuracy is slightly lower than the model accuracy, but the difference is not significant. For future experiments, it is recommended to use more than twenty epochs and a window size greater than ten so that accuracy increases significantly. Value: The number of epochs and the window size of the Skip-gram model affect accuracy; larger values increase it.
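A minimal sketch of the feature pipeline described here — training a Skip-gram Word2Vec model, averaging word vectors per comment, and evaluating a Random Forest with cross-validation — is shown below using gensim and scikit-learn. The loader function is hypothetical, and the vector size and tree count are illustrative choices; only the Skip-gram setting, window, and epochs follow values mentioned in the abstract.

```python
# Sketch: Skip-gram Word2Vec features averaged per comment + Random Forest.
# `comments` (list of token lists) and `labels` (0/1) come from an assumed loader.
import numpy as np
from gensim.models import Word2Vec
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

comments, labels = load_comments_and_labels()        # hypothetical helper

# sg=1 selects the Skip-gram architecture; window and epochs follow the abstract.
w2v = Word2Vec(sentences=comments, vector_size=100, sg=1,
               window=5, epochs=20, min_count=2, seed=42)

def comment_vector(tokens, model):
    """Average the vectors of tokens that are in the Word2Vec vocabulary."""
    vecs = [model.wv[t] for t in tokens if t in model.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(model.wv.vector_size)

X = np.array([comment_vector(c, w2v) for c in comments])
y = np.array(labels)

rf = RandomForestClassifier(n_estimators=200, random_state=42)
scores = cross_val_score(rf, X, y, cv=5, scoring="accuracy")
print("mean CV accuracy:", scores.mean())
```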
Citations: 8
VGG16 Transfer Learning Architecture for Salak Fruit Quality Classification
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.4025
Rismiyati Rismiyati, Ardytha Luthfiarta
Purpose: This study aims to differentiate the quality of salak fruit with machine learning. Salak is classified into two classes, good and bad. Design/methodology/approach: The algorithm used in this research is transfer learning with the VGG16 architecture. The dataset consists of 370 images of salak, 190 from the good class and 180 from the bad class. The images are preprocessed by resizing and normalizing pixel values, and the preprocessed images are split into 80% training data and 20% testing data. The training data are used to train a pretrained VGG16 model, varying the epoch, momentum, and learning rate. The resulting model is then used for testing, and accuracy, precision, and recall are monitored to determine the best model for classifying the images. Findings/result: The highest accuracy obtained in this study is 95.83%, achieved with a learning rate of 0.0001 and a momentum of 0.9; the precision and recall of this model are 97.2 and 94.6. Originality/value/state of the art: The use of transfer learning to classify salak, which has not been done before.
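A transfer-learning sketch in the spirit of this abstract — a frozen pretrained VGG16 base with a new binary head, trained with SGD at a learning rate of 0.0001 and momentum of 0.9 — is shown below with Keras. The directory layout, image size, and batch size are assumptions, and the exact head and fine-tuning scheme of the paper may differ.

```python
# Sketch: VGG16 transfer learning for a two-class (good/bad) fruit classifier.
# Directory names, image size, and batch size are illustrative assumptions.
import tensorflow as tf

train_ds = tf.keras.utils.image_dataset_from_directory(
    "salak/train", image_size=(224, 224), batch_size=16)   # assumed folders
val_ds = tf.keras.utils.image_dataset_from_directory(
    "salak/val", image_size=(224, 224), batch_size=16)

base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False                       # keep the pretrained convolutional weights

inputs = tf.keras.Input(shape=(224, 224, 3))
x = tf.keras.applications.vgg16.preprocess_input(inputs)   # VGG16 pixel normalization
x = base(x, training=False)
x = tf.keras.layers.GlobalAveragePooling2D()(x)
outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)
model = tf.keras.Model(inputs, outputs)

# Learning rate and momentum follow the values reported in the abstract.
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-4, momentum=0.9),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```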
Citations: 20
Development of Applications for Simplification of Boolean Functions using Quine-McCluskey Method
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.3195
Eko Dwi Nugroho
Article Information: Received: 21 January 2020; Revised: 31 March 2020; Accepted: 27 January 2021; Published: 28 February 2021. Purpose: This research builds an application to simplify Boolean functions using Quine-McCluskey, because a long Boolean function complicates the digital circuit; simplification finds equivalent, more efficient functions, making digital circuits simpler and cheaper. Design/methodology/approach: The canonical form is Sum-of-Products/Product-of-Sums and is provided as a file, while the output is produced both raw and as a file. The application can accept duplicate minterm/maxterm inputs, which do not have to be sequential. The method applies the idempotent law, Petrick's method, selection sort, and classification so that simplification is maximized. Findings/result: As a result, the application simplifies more optimally than previous studies, can accept duplicate minterm/maxterm inputs and the Product-of-Sums canonical form, and has been verified by simplifying and calculating manually. Originality/value/state of the art: Research that applies Petrick's method in an application that can also accept duplicate minterm/maxterm input has not been done before; previous work only reached the intermediate stage of the Quine-McCluskey method or could not accept duplicate minterm/maxterm input.
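As an illustration of the core Quine-McCluskey step the abstract builds on — repeatedly combining implicants that differ in exactly one bit until only prime implicants remain, with duplicate minterms removed up front — a small self-contained sketch is given below. It covers prime-implicant generation only; Petrick's method for choosing the final cover is not included.

```python
# Sketch: prime-implicant generation step of the Quine-McCluskey method.
# Implicants are strings over {'0', '1', '-'}; '-' marks an eliminated variable.

def combine(a, b):
    """Return the merged implicant if a and b differ in exactly one bit position."""
    diff = [i for i in range(len(a)) if a[i] != b[i]]
    if len(diff) == 1 and '-' not in (a[diff[0]], b[diff[0]]):
        i = diff[0]
        return a[:i] + '-' + a[i + 1:]
    return None

def prime_implicants(minterms, n_bits):
    """Compute the prime implicants of a Boolean function given its minterms."""
    terms = {format(m, f'0{n_bits}b') for m in set(minterms)}  # duplicates removed
    primes = set()
    while terms:
        used, next_terms = set(), set()
        for a in terms:
            for b in terms:
                merged = combine(a, b)
                if merged is not None:
                    used.update({a, b})
                    next_terms.add(merged)
        primes |= (terms - used)       # terms that could not be combined are prime
        terms = next_terms
    return primes

# Example: f(A, B, C) = sum of minterms (0, 1, 2, 5, 6, 7)
print(prime_implicants([0, 1, 2, 5, 6, 7], 3))
```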
Citations: 2
Implementation Of Text Mining For Emotion Detection Using The Lexicon Method (Case Study: Tweets About Covid-19)
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.4341
A. Aribowo, S. Khomsah
Information and news about Covid-19 received various responses from social media users, including Twitter users. Changes in netizen opinion over time are interesting to analyze, especially the patterns of public sentiment and emotion contained in these opinions. Sentiment and emotional conditions can illustrate the public's response to the Covid-19 pandemic in Indonesia. This research has two objectives: first, to reveal the types of public emotion that emerged during the Covid-19 pandemic in Indonesia; second, to reveal the topics or words that appear most frequently in each emotion class. There are seven types of emotion to be detected, namely anger, fear, disgust, sadness, surprise, joy, and trust. The dataset used consists of Indonesian-language tweets downloaded from April to August 2020. The method used for the extraction of emotional features is the lexicon-based method using the EmoLex dictionary. The result obtained is a monthly graph of public emotional conditions related to the Covid-19 pandemic in the dataset.
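The lexicon-based extraction described here amounts to counting, per tweet, the words that the emotion dictionary associates with each emotion class and aggregating the counts over time. A minimal sketch is shown below; the tiny inline lexicon is a toy stand-in for the real EmoLex dictionary, and the example tweets are invented.

```python
# Sketch: lexicon-based emotion counting in the style of EmoLex.
# The inline `lexicon` is a toy stand-in for the real NRC EmoLex dictionary.
from collections import Counter

EMOTIONS = ["anger", "fear", "disgust", "sadness", "surprise", "joy", "trust"]

# word -> set of emotions it evokes (illustrative entries only)
lexicon = {
    "takut":   {"fear"},
    "marah":   {"anger"},
    "sedih":   {"sadness"},
    "senang":  {"joy"},
    "percaya": {"trust"},
}

def emotion_counts(tokens, lexicon):
    """Count how many tokens in one tweet map to each emotion class."""
    counts = Counter()
    for token in tokens:
        for emotion in lexicon.get(token, ()):
            counts[emotion] += 1
    return counts

tweets = [["saya", "takut", "dan", "sedih"],          # invented example tweets
          ["tetap", "senang", "dan", "percaya"]]

monthly_total = Counter()
for tweet in tweets:
    monthly_total += emotion_counts(tweet, lexicon)

print({e: monthly_total.get(e, 0) for e in EMOTIONS})
```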
Citations: 12
The Determinant Analysis of the Utilization of Google Classroom as the E-Learning Facility in Yogyakarta Nahdlatul Ulama University
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.3968
Pipit Febriana Dewi, Anis Susila Abadi
Article Information: Received: 24 November 2020; Revised: 12 January 2021; Accepted: 28 January 2021; Published: 28 February 2021. Purpose: To find out what factors cause lecturers and students to adopt or refuse to adopt Google Classroom as a means of E-Learning at Yogyakarta Nahdlatul Ulama University. Design/methodology/approach: This research was conducted using a qualitative approach to capture the meaning of a phenomenon. The Innovation Diffusion Theory is used as the basis for this research to find out the role of Google Classroom as a means of E-Learning and its suitability at Nahdlatul Ulama University Yogyakarta. Findings/result: The adoption factors consisted of synchronizing the students' and lecturers' email with Google, integration with other Google features, efficiency of funds, time, and place, providing an alternative way of e-learning, evaluating the facilities, supporting the teaching and learning process, communication between lecturers and students, and knowing when assignments are submitted late. There were also rejection factors such as limited ownership of electronic media, limited knowledge, Internet connection, and the lack of an attendance facility. Originality/value/state of the art: The factors that lead lecturers and students to adopt or refuse to adopt Google Classroom as a means of E-Learning at Nahdlatul Ulama University Yogyakarta.
Citations: 1
Prediction Of Drug Sales Using Methods Forecasting Double Exponential Smoothing (Case Study : Hospital Pharmacy of Condong Catur)
Pub Date : 2021-03-16 DOI: 10.31315/TELEMATIKA.V18I1.4586
Annesa Maya Sabarina, Heru Cahya Rustamaji, Hidayatulah Himawan
Article Information: Received: 12 December 2020; Revised: 12 January 2021; Accepted: 30 January 2021; Published: 28 February 2021. Purpose: To find the best alpha value for each type of drug, using various alpha parameters in the Double Exponential Smoothing method, and to obtain prediction results for each type of drug at the Condong Catur Hospital pharmacy. Design/methodology/approach: Applying the Double Exponential Smoothing method with alpha parameters 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9. Findings/result: Test results on a system built using the test data show that the Double Exponential Smoothing method produces an error below 20% by using a different alpha (α) for each type of drug, because the trend patterns of each drug's sales differ at the pharmacy of Condong Catur Hospital. Originality/value/state of the art: Based on previous research, this study has similar characteristics such as the themes, parameters, and methods used; previous researchers used smoothing methods such as Double Exponential Smoothing to predict stock/sales of goods.
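For context, a minimal sketch of Brown's double exponential smoothing — a common single-parameter (α) form of the method named in the abstract — is shown below. The sales series is invented, and the paper may use a different variant or evaluation metric.

```python
# Sketch: Brown's double exponential smoothing with a single alpha parameter.
# The monthly sales series is an invented example, not data from the paper.

def double_exponential_smoothing(series, alpha, horizon=1):
    """Return one-step-ahead fitted values plus `horizon` future forecasts."""
    s1 = s2 = series[0]                           # initialize both smoothed values
    fitted = [series[0]]                          # fitted value for t = 0
    for t in range(1, len(series)):
        level = 2 * s1 - s2
        trend = (alpha / (1 - alpha)) * (s1 - s2)
        fitted.append(level + trend)              # forecast for period t made at t-1
        s1 = alpha * series[t] + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
    level = 2 * s1 - s2
    trend = (alpha / (1 - alpha)) * (s1 - s2)
    future = [level + m * trend for m in range(1, horizon + 1)]
    return fitted, future

sales = [120, 130, 128, 140, 151, 149, 160, 172]          # invented drug sales
for alpha in (0.1, 0.5, 0.9):                             # alphas from the abstract
    fitted, future = double_exponential_smoothing(sales, alpha, horizon=2)
    mape = sum(abs(f - y) / y for f, y in zip(fitted[1:], sales[1:])) / (len(sales) - 1)
    print(f"alpha={alpha}: MAPE={mape:.2%}, next forecasts={[round(x, 1) for x in future]}")
```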
Citations: 0