
Latest publications from JOIN Jurnal Online Informatika

Determinant Factors in the Implementation of Information Technology Strategic Management to Academicians' Performance in Higher Education Institution
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.829
C. Slamet, Aedah Binti Abdul Rahman, M. Ramdhani
This study aimed to understand the determinant factors of information technology (IT) strategic management on individual (lecturer) performance, using data samples from selected higher education institutions (HEI) in Indonesia. Although the use of IT innovation in HEI is often treated as a lens on the strength of strategy, competitiveness, and quality from a corporate point of view, its impact on individual performance remains unclear. The investigation collected data through an online survey of 325 respondents to examine the relationships among strategic factors, elaborated into several relevant criteria. The statistical analysis showed that, of all the strategic factors involved, the business model and strategic alignment were the strongest determinants of academicians' performance at HEI.
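As a rough illustration of the kind of determinant analysis the abstract describes, the sketch below regresses a simulated performance score on standardized strategic-factor scores. The factor names, the data, and the plain OLS estimator are assumptions for demonstration only, not the study's actual instrument or statistical model.

```python
# Hypothetical illustration: regress a simulated performance score on standardized
# strategic-factor scores. Factor names and data are stand-ins, not the study's survey.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 325  # number of survey respondents reported in the abstract
factors = ["business_model", "strategic_alignment", "governance", "infrastructure"]

X = rng.normal(size=(n, len(factors)))  # simulated factor scores
y = 0.6 * X[:, 0] + 0.5 * X[:, 1] + 0.1 * X[:, 2] + rng.normal(scale=0.5, size=n)

Xs = StandardScaler().fit_transform(X)  # standardize so coefficients are comparable
model = LinearRegression().fit(Xs, y)
for name, coef in sorted(zip(factors, model.coef_), key=lambda t: -abs(t[1])):
    print(f"{name:20s} standardized coefficient = {coef:+.3f}")
```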
Citations: 1
Enhancement of White Blood Cells Images using Shock Filtering Equation for Classification Problem
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.739
Gregorius Vito, P. H. Gunawan
Medical image processing has developed rapidly in the last decade. The automatic detection and classification of white blood cells (WBC) is one of its applications. The analysis of WBC images has engaged researchers from both the medical and technology fields. Since WBC detection plays an essential role in medicine, this paper presents a system for distinguishing and classifying WBC types: eosinophils, neutrophils, lymphocytes, and monocytes, using K-Nearest Neighbor (K-NN) and Logistic Regression (LR). This study aims to find the pre-processing approach with the best accuracy among the original grayscale, shock-filtered, and thresholded grayscale images. The highest average accuracy in classifying the 2,103 WBC images in this research is 43.54%, obtained with the LR algorithm on images enhanced by combining grayscale thresholding with the shock filtering equation. Overall, using the two algorithms, KNN and LR, the classification accuracy increases by up to 12%.
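The sketch below shows only the final classification step the abstract mentions (KNN versus logistic regression on pre-processed images). The random arrays, the simple global threshold, and all parameters are stand-ins, not the paper's shock-filtering pipeline or dataset.

```python
# Minimal sketch of the classification stage only (KNN vs. logistic regression).
# Random arrays stand in for pre-processed WBC images; the four class labels follow
# the abstract (eosinophil, neutrophil, lymphocyte, monocyte).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_images, h, w = 400, 32, 32
images = rng.random((n_images, h, w))       # stand-in grayscale images
labels = rng.integers(0, 4, size=n_images)  # 4 WBC classes (simulated)

# Simple global threshold as a stand-in for the paper's thresholding step.
binarized = (images > images.mean(axis=(1, 2), keepdims=True)).astype(float)
X = binarized.reshape(n_images, -1)         # flatten pixels into features

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
for name, clf in [("KNN", KNeighborsClassifier(n_neighbors=5)),
                  ("LogReg", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```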
Citations: 1
Sundanese Stemming using Syllable Pattern
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.812
A. Sutedi, Rickard Elsen, M. Nasrulloh
Stemming is a technique for reducing a derived word to its root or base word. It is widely used in data processing tasks such as word indexing, translation, and information retrieval from documents in a database. In general, stemming uses the morphological pattern of a derived word to produce the original root word. In previous research, this technique faced over-stemming and under-stemming problems. In this study, the stemming process is improved with a canonical syllable pattern based on Sundanese phonological rules. Stemming with syllable patterns reaches an accuracy of 89%, and execution on the test data recovers 95% of all base words. This simple algorithm has the advantage of being able to align the syllable pattern with the word to be stemmed. Because of some data limitations (typos, loan words, and words whose syllable patterns are non-deterministic), accuracy can still be improved, for example by adjusting words and adding reference dictionaries. In addition, the algorithm has a drawback that can cause over-stemming.
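A minimal sketch of the syllable-pattern idea is given below: strip an affix only if the remainder still parses into simple (C)V(C) syllables. The affix lists, the pattern check, and the example word are hypothetical and far simpler than the Sundanese phonological rules the paper uses.

```python
# Illustrative syllable-pattern-guided affix stripping. Affix lists and the canonical
# pattern check are simplified assumptions, not the paper's rule set.
import re

PREFIXES = ["di", "ka", "pa", "sa", "ti"]  # hypothetical sample prefixes
SUFFIXES = ["keun", "an", "na"]            # hypothetical sample suffixes

def is_canonical(word: str) -> bool:
    """Accept words built from simple (C)V(C)-style syllables."""
    return re.fullmatch(r"([bcdfghjklmnpqrstvwxyz]?[aeiou][bcdfghjklmnpqrstvwxyz]?)+",
                        word) is not None

def stem(word: str) -> str:
    for p in PREFIXES:
        if word.startswith(p) and is_canonical(word[len(p):]):
            word = word[len(p):]
            break
    for s in SUFFIXES:
        if word.endswith(s) and is_canonical(word[:-len(s)]):
            word = word[:-len(s)]
            break
    return word

print(stem("dibacakeun"))  # hypothetical example -> "baca"
```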
Citations: 1
The Comparison of Audio Analysis Using Audio Forensic Technique and Mel Frequency Cepstral Coefficient Method (MFCC) as the Requirement of Digital Evidence
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.702
Helmy Dzulfikar, S. Adinandra, E. Ramadhani
Audio forensics is the application of science and scientific methods to handling digital evidence in the form of audio. Such audio supports the disclosure of various criminal cases and reveals information needed in the trial process. So far, research on audio forensics has focused mainly on human voices recorded directly, either with a voice recorder or with voice-recording apps on smartphones available from Google Play or the iOS App Store. This study compares the analysis of live voices (human voices) with artificial voices from Google Voice and other sources. The study implements audio forensic analysis with pitch, formant, and spectrogram as parameters. It also analyses the data using feature extraction with the Mel Frequency Cepstral Coefficient (MFCC) method, the Dynamic Time Warping (DTW) method, and the K-Nearest Neighbor (KNN) algorithm. The previously recorded live voice and the artificial voice are cut into words, and the resulting chunks are tested. Testing the audio forensic technique with the Praat application identified matching words between live and artificial voices with 40.74% accuracy, while testing with the MFCC, DTW, and KNN methods in a system built in Matlab identified matching words with 33.33% accuracy.
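The sketch below illustrates the MFCC-plus-DTW matching idea on placeholder audio files, assuming the librosa library is available. The file names, sampling rate, and parameters are assumptions and do not reproduce the paper's Matlab system.

```python
# Compare a recorded word against reference recordings and pick the closest one by
# DTW distance over MFCC features. Paths are placeholders.
import librosa
import numpy as np

def mfcc_features(path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

def dtw_distance(a, b):
    # Cumulative-cost DTW between two MFCC sequences; lower means more similar.
    D, _ = librosa.sequence.dtw(X=a, Y=b, metric="euclidean")
    return D[-1, -1]

query = mfcc_features("query_word.wav")             # placeholder path
references = {"live_voice": "live_word.wav",        # placeholder paths
              "google_voice": "artificial_word.wav"}

scores = {name: dtw_distance(query, mfcc_features(path))
          for name, path in references.items()}
print("best match:", min(scores, key=scores.get), scores)
```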
Citations: 3
Model TangselPay Receipts Using the UTAUT 2 Method
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.803
Aolia Ikhwanudin, Kusrini Kusrini, Agung Budi Prasetio
The South Tangerang City Government launched a digital financial service called TangselPay. This payment instrument functions as a means of paying levies and other transactions owed by taxpayers. TangselPay is essentially a service from the South Tangerang City Government accessed via cellular phones (smartphones), with the main aim of making levy payments convenient for taxpayers, so that they do not need to pay cash to an officer. This study aims to determine which factors influence people's interest in using TangselPay services in South Tangerang. The research model is a modified Unified Theory of Acceptance and Use of Technology 2 (UTAUT 2). Data were collected with purposive sampling from 116 respondents at the Pamulang market. The data were analysed with Structural Equation Modeling (SEM) using SmartPLS version 3.3.3. The results show that Performance Expectations (PE) and Facilitating Conditions (FC) have a positive effect on use behavior, and that interest in use has a positive effect on usage behavior, while business expectations, social influence, and hedonic motivation have no direct effect.
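For illustration only, the sketch below relates simulated construct scores with a plain linear regression. It is not the PLS-SEM procedure the study ran in SmartPLS, and the constructs and Likert responses are made up.

```python
# Plain-OLS illustration of relating UTAUT2-style construct scores; not PLS-SEM.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 116  # respondents reported in the abstract

def likert_construct():
    # Construct score = mean of four simulated 5-point Likert items.
    return rng.integers(1, 6, size=(n, 4)).mean(axis=1)

performance_expectancy = likert_construct()
facilitating_conditions = likert_construct()
use_behavior = (0.5 * performance_expectancy
                + 0.4 * facilitating_conditions
                + rng.normal(scale=0.5, size=n))

X = np.column_stack([performance_expectancy, facilitating_conditions])
reg = LinearRegression().fit(X, use_behavior)
print("estimated path weights (PE, FC):", np.round(reg.coef_, 3))
```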
Citations: 0
Prediction Model for Soybean Land Suitability Using C5.0 Algorithm
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.711
Andi Nurkholis, Styawati Styawati
Soybean is one of the main sources of protein and is consumed in tempeh, tofu, milk, and other products. Based on projections of the soybean production and consumption balance in Indonesia for 2018-2022, the deficit is estimated to increase by 6.18% per year. It is therefore necessary to guide soybean cultivation by evaluating the suitability of existing land to support the expansion of soybean farming and production. This study evaluates soybean land suitability using the C5.0 algorithm based on land and weather characteristics. The C5.0 algorithm is an extension of the spatial decision tree, itself an extension of the ID3 decision tree. The dataset is divided into two categories: explanatory factors, representing seven land characteristics (drainage, land slope, base saturation, cation exchange capacity, soil texture, soil pH, and soil mineral depth) and two weather variables (rainfall and temperature), and a target class representing soybean land suitability in the two study areas, Bogor and Grobogan Regency. The result is two land suitability models; the best model obtained an accuracy of 98.58% on the training data and 97.17% on the testing data. The best model consists of 69 rules that do not involve three attributes: cation exchange capacity, soil mineral depth, and rainfall.
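Since C5.0 itself is not available in scikit-learn, the sketch below uses an entropy-based decision tree as a rough stand-in to show the shape of such a model on the nine attributes listed above. The data are simulated, not the Bogor/Grobogan observations.

```python
# Entropy-based decision tree as a rough stand-in for C5.0; feature values simulated.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
features = ["drainage", "slope", "base_saturation", "cec", "texture",
            "soil_ph", "mineral_depth", "rainfall", "temperature"]
X = rng.random((500, len(features)))
y = rng.integers(0, 2, size=500)  # suitable / not suitable (simulated)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
tree = DecisionTreeClassifier(criterion="entropy", max_depth=5).fit(X_tr, y_tr)
print("test accuracy:", tree.score(X_te, y_te))
print(export_text(tree, feature_names=features)[:500])  # readable rule listing
```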
Citations: 1
Application of VGG Architecture to Detect Korean Syllables Based on Image Text
Pub Date : 2021-12-26 DOI: 10.15575/join.v6i2.653
Irma Amelia Dewi, Amelia Shaneva
Korean culture has spread widely throughout the world, ranging from lifestyle and music to food and drink, and there are still many exciting things to discover in it. One interesting thing to learn is the Korean alphabet (Hangul), whose letters are non-Latin characters. Once the Hangul letters have been learned, the next thing lay people must master is Korean syllables, which differ from Indonesian syllables. Because learning Korean syllables is difficult, understanding a sentence requires a system that can recognize them. This study therefore designs a system for recognizing Korean syllables using a Convolutional Neural Network with the VGG architecture. The system detects Korean syllables based on models trained on 72 syllable classes. Tests on the 72 Korean syllable classes obtain an average accuracy of 96%, an average precision of 96%, an average recall of 100%, and an average F1 score of 98%.
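A small VGG-style network (stacked 3x3 convolutions with max pooling) for 72 output classes might look like the tf.keras sketch below. The input size, depth, and training setup are assumptions, not the exact architecture reported in the paper.

```python
# VGG-style stack of 3x3 conv blocks + max pooling for 72 syllable classes (sketch).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 72           # syllable classes reported in the abstract
INPUT_SHAPE = (64, 64, 1)  # assumed grayscale input size

def vgg_block(x, filters, convs):
    for _ in range(convs):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return layers.MaxPooling2D(2)(x)

inputs = layers.Input(shape=INPUT_SHAPE)
x = vgg_block(inputs, 32, 2)
x = vgg_block(x, 64, 2)
x = vgg_block(x, 128, 3)
x = layers.Flatten()(x)
x = layers.Dense(256, activation="relu")(x)
outputs = layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```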
Citations: 0
Comparison of Machine Learning Classification Methods in Hepatitis C Virus
Pub Date : 2021-06-17 DOI: 10.15575/JOIN.V6I1.719
L. Syafaah, Z. Zulfatman, I. Pakaya, Merinda Lestandy
The hepatitis C virus (HCV) is considered a major public health problem. Around 120-130 million people, or 3% of the world's population, are infected with HCV. Without treatment, most acute infections evolve into chronic infections, followed by liver diseases such as cirrhosis and liver cancer. The data parameters used in this study include albumin (ALB), bilirubin (BIL), choline esterase (CHE), gamma-glutamyl transferase (GGT), aspartate aminotransferase (AST), alanine aminotransferase (ALT), cholesterol (CHOL), creatinine (CREA), protein (PROT), and alkaline phosphatase (ALP). This research proposes a methodology based on machine learning classification methods, including k-nearest neighbors, naïve Bayes, neural network, and random forest. The aim of this study is to assess and evaluate the accuracy of these machine learning classification algorithms in detecting HCV. The results show that the neural network achieves the highest accuracy, 95.12%, compared with 89.43%, 90.24%, and 94.31% for KNN, naïve Bayes, and RF, respectively.
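The comparison protocol can be sketched as below: the same feature matrix scored with the four classifiers via cross-validation in scikit-learn. The data are simulated; only the count of the ten laboratory attributes named in the abstract is echoed.

```python
# Compare KNN, naive Bayes, a neural network, and a random forest on the same
# (simulated) blood-test features using cross-validated accuracy.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)
X = rng.normal(size=(600, 10))    # 10 lab attributes (simulated)
y = rng.integers(0, 2, size=600)  # HCV-positive / negative (simulated)

models = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "Naive Bayes": GaussianNB(),
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
    "Random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, clf in models.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name:15s} mean CV accuracy = {acc:.3f}")
```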
Citations: 8
Location Selection Query in Google Maps using Voronoi-based Spatial Skyline (VS2) Algorithm
Pub Date : 2021-06-17 DOI: 10.15575/join.v6i1.667
A. Annisa, Leni Angraeni
Google Maps is one of the most popular location selection systems, and one of its popular features is nearby search. For example, someone who wants to find the restaurants closest to his location can use the nearby search feature. However, this feature considers only one specific location when providing the desired place choices. In a real-world situation, more than one location may need to be considered when selecting the desired place. Suppose someone would like to choose a hotel close to the conference hall, the museum, the beach, and a souvenir store. In this situation, the nearby search feature in Google Maps may not be able to suggest a list of hotels that are interesting for him based on the distance from each destination. In this paper, we have successfully developed a web-based application on top of Google Maps search using the Voronoi-based Spatial Skyline (VS2) algorithm, in which users choose several Points Of Interest (POI) from Google Maps as the locations considered when selecting the desired place. We used the Google Maps API to provide POI information for our web-based application. The experiment results show that the execution time increases as the number of considered locations increases.
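The query semantics can be illustrated with a naive spatial skyline, shown below: a hotel survives if no other hotel is at least as close to every query point and strictly closer to at least one. VS2 reaches the same answer faster by pruning with a Voronoi diagram, which this sketch does not implement; all coordinates are made up.

```python
# Naive spatial skyline over distances to several query points (illustration only).
from math import dist

query_points = [(0.0, 0.0), (4.0, 1.0), (2.0, 5.0)]  # e.g., hall, museum, beach
hotels = {"A": (1.0, 1.0), "B": (3.0, 3.0), "C": (6.0, 6.0), "D": (2.0, 2.0)}

def distance_vector(p):
    return [dist(p, q) for q in query_points]

def dominates(u, v):
    # u dominates v if u is <= v in every distance and < in at least one.
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

vectors = {name: distance_vector(p) for name, p in hotels.items()}
skyline = [name for name, vec in vectors.items()
           if not any(dominates(other, vec)
                      for o, other in vectors.items() if o != name)]
print("spatial skyline:", skyline)
```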
Citations: 0
Discovering Computer Science Research Topic Trends using Latent Dirichlet Allocation
Pub Date : 2021-06-17 DOI: 10.15575/join.v6i1.636
Kartika Rizqi Nastiti, A. Hidayatullah, A. R. Pratama
Before conducting a research project, researchers must find the trends and state of the art in their research field. However, that is not necessarily an easy job, partly due to the lack of specific tools for filtering the required information by time range. This study aims to provide a solution to that problem by applying a topic modeling approach to data scraped from Google Scholar between 2010 and 2019. We utilized Latent Dirichlet Allocation (LDA) combined with Term Frequency-Inverse Document Frequency (TF-IDF) to build topic models and employed the coherence score to determine how many different topics there are in each year's data. We also provided a visualization of the topic interpretation and word distribution for each topic, as well as its relevance, using word clouds and pyLDAvis. In the future, we expect to add more features to show the relevance and interconnections between topics, making it even easier for researchers to use this tool in their research projects.
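A minimal gensim sketch of the coherence-based choice of topic count is shown below, with toy documents standing in for the scraped Google Scholar data; the TF-IDF weighting step is omitted, and the u_mass coherence measure is used here only for simplicity (the paper's exact measure is not stated in the abstract).

```python
# Choose the number of LDA topics by coherence score (gensim), on toy documents.
from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel

docs = [
    "deep learning image classification convolutional network".split(),
    "topic modeling latent dirichlet allocation text mining".split(),
    "wireless sensor network energy routing protocol".split(),
    "text classification natural language processing corpus".split(),
    "image segmentation medical deep neural network".split(),
]
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

for k in range(2, 5):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, random_state=0)
    cm = CoherenceModel(model=lda, corpus=corpus, dictionary=dictionary,
                        coherence="u_mass")
    print(f"k={k}: coherence={cm.get_coherence():.3f}")
```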
Citations: 2