
JOIV International Journal on Informatics Visualization: Latest Publications

Optimization of Vehicle Object Detection Based on UAV Dataset: CNN Model and Darknet Algorithm
Q3 Decision Sciences | Pub Date: 2023-05-08 | DOI: 10.30630/joiv.7.1.1159
A. H. Rangkuti, Varyl Hasbi Athala
This study was conducted to identify several types of vehicles captured using drone technology, or Unmanned Aerial Vehicles (UAVs). Recognizing vehicles on a highway from an altitude of more than 300-400 meters is a problem that requires careful investigation so that no errors are made in determining the vehicle type. The study was conducted at mining sites to identify the classes of vehicles that pass along the highway and how many vehicle types use the road. Vehicle recognition was performed with deep learning using several CNN models, such as Yolo V4, Yolo V3, Densenet 201, and CSResNext-Panet 50, supported by the Darknet framework for the training process. Experiments were also carried out with other CNN models, but given the available peripherals and hardware, only four CNN models achieved optimal accuracy. Based on the experimental results, the CSResNext-Panet 50 model has the highest accuracy and can detect 100% of the vehicles in the captured UAV video data, including the detected vehicle volumes, followed by Densenet and Yolo V4, which detect up to 98%-99%. This research should be developed further by covering all vehicle classes reachable by UAV technology, which must be supported by adequate hardware and peripheral technology for the training process.
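As a rough illustration of the detection pipeline described in the abstract, the sketch below loads a Darknet-trained YOLO model through OpenCV's DNN module and counts vehicle detections in one UAV frame. The configuration and weights file names, the class list, and the confidence threshold are illustrative assumptions rather than artifacts released with the paper, and non-maximum suppression is omitted for brevity.

import cv2
import numpy as np

# Hypothetical file and class names; the paper's trained weights are not published with the abstract.
CFG, WEIGHTS = "yolov4-uav.cfg", "yolov4-uav.weights"
VEHICLE_CLASSES = ["car", "truck", "excavator"]  # must match the classes the model was trained on

net = cv2.dnn.readNetFromDarknet(CFG, WEIGHTS)
out_layers = net.getUnconnectedOutLayersNames()

def count_vehicles(frame, conf_threshold=0.5):
    # Darknet YOLO expects a square, normalized blob; 416x416 is a common input size.
    blob = cv2.dnn.blobFromImage(frame, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    counts = {name: 0 for name in VEHICLE_CLASSES}
    for output in net.forward(out_layers):
        for det in output:            # det = [cx, cy, w, h, objectness, class scores...]
            scores = det[5:]
            class_id = int(np.argmax(scores))
            if scores[class_id] > conf_threshold:
                counts[VEHICLE_CLASSES[class_id]] += 1
    # NMS (cv2.dnn.NMSBoxes) is omitted for brevity, so overlapping boxes may be double-counted.
    return counts

frame = cv2.imread("uav_frame.jpg")   # a single frame extracted from the UAV video
print(count_vehicles(frame))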
Citations: 0
Automatic Summarization of Court Decision Documents over Narcotic Cases Using BERT
Q3 Decision Sciences | Pub Date: 2023-05-08 | DOI: 10.30630/joiv.7.2.1811
G. Wicaksono, Sheila Fitria Al asqalani, Yufis Azhar, N. Hidayah, Andreawana Andreawana
Reviewing court decision documents for references when handling similar cases can be time-consuming. From this perspective, we need a system that can summarize court decision documents to enable adequate information extraction. This study used 50 court decision documents taken from the official website of the Supreme Court of the Republic of Indonesia, with the cases raised being narcotics and psychotropics. The dataset was divided into two types: court decision documents that include the defendant's identity and court decision documents without the defendant's identity. We used BERT, specifically the IndoBERT model, to summarize the court decision documents. This study uses four IndoBERT variants: IndoBERT-Base-Phase 1, IndoBERT-Lite-Base-Phase 1, IndoBERT-Large-Phase 1, and IndoBERT-Lite-Large-Phase 1. It also uses three summarization ratios (20%, 30%, and 40%) and ROUGE-N evaluation with ROUGE-1, ROUGE-2, and ROUGE-3. The results show that the IndoBERT pre-trained model performed better in summarizing court decision documents, with or without the defendant's identity, at a 40% summarization ratio. The highest ROUGE score produced by IndoBERT was found in the IndoBERT-Lite-Base-Phase 1 model, with an R-1 ROUGE value of 1.00 for documents with the defendant's identity and 0.970 for documents without the defendant's identity at the 40% ratio. Future research is expected to use other BERT variants, such as IndoBERT Phase-2 and LegalBERT.
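A minimal sketch of how a 40% extractive summary could be produced and scored, assuming the Hugging Face indobenchmark/indobert-base-p1 checkpoint and the rouge-score package; ranking sentences by similarity of mean-pooled IndoBERT embeddings to the document centroid is a simplified stand-in for the paper's pipeline, which the abstract does not detail.

import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from rouge_score import rouge_scorer

MODEL = "indobenchmark/indobert-base-p1"   # assumed IndoBERT-Base-Phase 1 checkpoint name
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

def embed(sentence):
    # Mean-pool the last hidden state into a single vector per sentence.
    inputs = tok(sentence, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state
    return hidden.mean(dim=1).squeeze(0).numpy()

def extractive_summary(sentences, ratio=0.40):
    vecs = np.stack([embed(s) for s in sentences])
    centroid = vecs.mean(axis=0)
    scores = vecs @ centroid / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(centroid))
    k = max(1, int(len(sentences) * ratio))
    keep = sorted(np.argsort(scores)[-k:])   # keep the selected sentences in original order
    return " ".join(sentences[i] for i in keep)

sentences = ["First sentence of a decision document.", "Second sentence.", "Third sentence.",
             "Fourth sentence.", "Fifth sentence."]
summary = extractive_summary(sentences, ratio=0.40)
reference = "A human-written reference summary of the decision."
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2"], use_stemmer=False)
print(scorer.score(reference, summary))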
Citations: 1
Hemp-Alumina Composite Radar Absorption Reflection Loss Classification
Q3 Decision Sciences | Pub Date: 2023-05-08 | DOI: 10.30630/joiv.7.2.1169
Muhlasah Novitasari Mara, Budi Basuki Subagio, Efrilia M. Khusna, Bagus Satrio Utomo
The Radar Absorption Material (RAM) method is a coating that reduces the energy of received electromagnetic waves by converting the electromagnetic waves emitted by radar into heat. Hemp has been reported to have strong and stable tensile characteristics of 5.5 g/den and higher heat resistance than other natural fibers. Combining the characteristics of hemp with alumina powder (Al2O3) and epoxy resin could provide a stealth technology system able to absorb radar waves more effectively, considering that alumina is lightweight, anti-rust, and conductive. The electromagnetic properties of absorbent coatings can be predicted using machine learning. This study classifies the reflection loss of the Hemp-Alumina composite using Random Forest, ANN, KNN, Logistic Regression, and Decision Tree classifiers. These machine learning classifiers generate predictions immediately and can learn critical spectral properties across a wide energy range without the influence of human bias in the data. The frequency range of 2-12 GHz was used for the measurements. The results show that the most effective structure thickness of the Hemp-Alumina composite is 5 mm; used as a RAM, it gives optimum absorption of -15.158 dB in the S-band, -16.398 dB in the C-band, and -23.135 dB in the X-band. The highest and optimum reflection loss value is found in the X-band at a thickness of 5 mm, equal to -23.135 dB, with an absorption bandwidth of 1000 MHz and an efficiency of 93.1%. This proves that the Hemp-Alumina composite is very effective as a RAM at X-band frequencies. Based on the experimental results, the Random Forest classifier has the highest accuracy (0.97) and F1 score (0.98). The F1 score and accuracy of Random Forest are 0.96 and 0.97, respectively, and do not differ significantly from KNN.
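A minimal sketch of the classification step, assuming a tabular dataset whose features are frequency (GHz), sample thickness (mm), and measured reflection loss (dB) and whose label marks an effective absorber; the actual features, labels, and train/test protocol are not specified in the abstract, so the synthetic data here are purely illustrative.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score

# Hypothetical measurements: frequency (GHz), thickness (mm), reflection loss (dB).
rng = np.random.default_rng(0)
X = np.column_stack([rng.uniform(2, 12, 300),
                     rng.choice([3.0, 4.0, 5.0], 300),
                     rng.uniform(-25, -5, 300)])
y = (X[:, 2] < -15).astype(int)  # assumed binary "effective absorber" label, for illustration only

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
for name, clf in [("Random Forest", RandomForestClassifier(random_state=42)),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_train, y_train)
    pred = clf.predict(X_test)
    print(name, "accuracy:", accuracy_score(y_test, pred), "F1:", f1_score(y_test, pred))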
Citations: 0
Using Various Convolutional Neural Network to Detect Pneumonia from Chest X-Ray Images: A Systematic Literature Review
Q3 Decision Sciences | Pub Date: 2023-05-08 | DOI: 10.30630/joiv.7.2.1015
Darnell Kikoo, Bryan Tamin, Stephen Hardjadilaga, -. Anderies, Irene Anindaputri Iswanto
Pneumonia is one of the world's top causes of mortality, especially for children. Chest X-rays play an important part in diagnosing pneumonia due to the cost-effectiveness and quick advancement of the technology. Detecting pneumonia through chest X-rays (CXR) is a challenging and time-consuming process requiring trained professionals. This issue has been addressed by the development of automation technology, namely machine learning. Moreover, Deep Learning (DL), a branch of machine learning that uses algorithms resembling the human brain, can predict more accurately and is now dependable enough to detect pneumonia. A further Deep Learning improvement has produced a method called Transfer Learning, which reuses specific layers from a pre-trained network on other datasets, reducing training time and improving model performance. Although numerous algorithms are already available for pneumonia identification, comprehensive literature evaluations and clinical recommendations are still few in number. This research will assist practitioners in choosing some of the best procedures from recent research, reviewing the available datasets, and comprehending the outcomes gained in this domain. The reviewed papers show that the best score for predicting pneumonia from CXR using DL was 99.4% accuracy. The exceptional techniques and results from the reviewed papers serve as valuable references for future research.
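A minimal sketch of the transfer-learning recipe the review describes, assuming TensorFlow/Keras, an ImageNet-pretrained MobileNetV2 as the frozen feature extractor, and a hypothetical chest_xray/train directory with one sub-folder per class; the reviewed papers use a variety of backbones and training settings.

import tensorflow as tf

IMG_SIZE = (224, 224)
# Hypothetical directory of chest X-rays arranged as one sub-folder per class (NORMAL/, PNEUMONIA/).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "chest_xray/train", image_size=IMG_SIZE, batch_size=32)

base = tf.keras.applications.MobileNetV2(
    input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")
base.trainable = False  # freeze the ImageNet feature extractor; only the new head is trained

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),   # MobileNetV2 expects inputs in [-1, 1]
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),       # pneumonia vs. normal
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=5)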
Citations: 0
Evaluation of the Compatibility of TRMM Satellite Data with Precipitation Observation Data
Q3 Decision Sciences | Pub Date: 2023-05-07 | DOI: 10.30630/joiv.7.2.1578
N. Nurhamidah, Rafika Andari, A. Junaidi, D. Daoed
The availability of hydrological data is one of the challenges associated with developing water infrastructure in different areas. This led to NASA's design of the TRMM (Tropical Rainfall Measuring Mission), which uses satellite weather-monitoring technology to monitor and analyze tropical precipitation in different parts of the world. Therefore, this validation study was conducted to compare TRMM precipitation data with observed precipitation to determine its suitability as an alternative source of hydrological data. The Kuranji watershed was selected as the study site due to the availability of suitable data. The validation analyses applied include the Root Mean Squared Error (RMSE), Nash-Sutcliffe Efficiency (NSE), Correlation Coefficient (R), and Relative Error (RE), calculated in two forms: one for the uncorrected data and another for the corrected data. The results showed that the best-adjusted data validation from the Gunung Nago station in 2016 was recorded as RMSE = 62,298, NSE = 0.044, R = 0.902, and RE = 11,328. The closeness of the R-value to one implies that the corrected TRMM data outperform the uncorrected data. Therefore, it was generally concluded that the TRMM data match the observed precipitation data and can be used for hydrological studies in the Kuranji watershed.
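The four validation metrics named above have standard definitions, sketched in numpy below; the exact relative-error convention used by the authors is not stated in the abstract, so the percentage-bias form shown is an assumption, and the sample values are hypothetical.

import numpy as np

def rmse(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

def nse(obs, sim):
    # Nash-Sutcliffe Efficiency: 1 is a perfect fit, 0 means no better than the observed mean.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(1 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2))

def correlation(obs, sim):
    return float(np.corrcoef(obs, sim)[0, 1])

def relative_error(obs, sim):
    # Assumed percentage-bias form of the relative error.
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return float(100.0 * (sim.sum() - obs.sum()) / obs.sum())

# Hypothetical monthly totals (mm): rain-gauge observations vs. TRMM estimates.
observed = [120.0, 240.5, 310.2, 95.4]
trmm = [110.3, 255.0, 290.8, 120.1]
print(rmse(observed, trmm), nse(observed, trmm), correlation(observed, trmm), relative_error(observed, trmm))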
Citations: 0
The Gamification of E-learning Environments for Learning Programming
Q3 Decision Sciences | Pub Date: 2023-05-06 | DOI: 10.30630/joiv.7.2.1602
Christian Garcia Villegas, Nilson Augusto Lemos Aguero
Gamification is the most active methodology utilized in the E-learning environment for teaching and learning in computing; however, this does not restrict its use in other areas of knowledge. Gamification combines elements of play and game-design techniques in a non-ludic context, providing a motivating factor for students. This systematic study aimed to collect and synthesize scientific evidence from the gamification field for learning programming through the E-learning environment. To do this, a systematic literature review was conducted following the guidelines proposed by Petersen, which cover the definition of questions, search strategies, inclusion/exclusion criteria, and characterization. As a result of this process, eighty-one works were fully reviewed, analyzed, and categorized. The results revealed favorable learning among students, the most used platforms and gamification elements, the most used languages and programming focuses, and the education levels where gamification is most used to learn programming in an E-learning environment. These findings show that gamification is a good active strategy for introducing beginning students to programming through an E-learning environment. Within this context, learning programming through gamification is a topic that is growing and gaining force, and after what occurred during the pandemic, it is projected that more students will focus on understanding its implementation and the impact it has on different levels of education and areas of knowledge.
Citations: 0
A Survey on Forms of Visualization and Tools Used in Topic Modelling
Q3 Decision Sciences | Pub Date: 2023-05-05 | DOI: 10.30630/joiv.7.2.1313
R. Maskat, S. M. Shaharudin, Deden Witarsyah, H. Mahdin
In this paper, we surveyed recent publications on topic modeling and analyzed the forms of visualization and the tools used. This information is expected to help Natural Language Processing (NLP) researchers make better decisions about which types of visualization are appropriate for them and which tools can help them. It could also spark further development of existing visualizations, or the emergence of new ones where a gap is present. Topic modeling is an NLP technique used to identify topics hidden in a collection of documents. Visualizing these topics permits a faster understanding of the underlying subject matter in terms of its domain. This survey covered publications from 2017 to early 2022, reviewed using the PRISMA methodology. One hundred articles were collected, and 42 were found eligible for this study after filtering. Two research questions were formulated. The first asks, "What are the different forms of visualization used to display the result of topic modeling?" and the second asks, "What visualization software or API is used?" From our results, we discovered that different forms of visualization meet different display purposes. We categorized them as maps, networks, evolution-based charts, and others. We also discovered that LDAvis is the most frequently used software/API, followed by the R language packages and D3.js. The primary limitation of this survey is that it is not exhaustive; hence, some eligible publications may not be included.
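Since LDAvis is reported as the most frequently used software/API, a minimal sketch of producing that visualization with gensim and the pyLDAvis Python port is shown below; the toy corpus, topic count, and output file name are illustrative only.

import pyLDAvis
import pyLDAvis.gensim_models as gensimvis
from gensim import corpora
from gensim.models import LdaModel

# Toy corpus: each document is already tokenized.
docs = [["topic", "model", "visualization"],
        ["lda", "topic", "inference"],
        ["chart", "map", "network", "visualization"]]
dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=2, random_state=0, passes=10)

# Build the interactive LDAvis panel and write it to a standalone HTML file.
panel = gensimvis.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(panel, "ldavis.html")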
Citations: 0
Face Recognition Using Convolution Neural Network Method with Discrete Cosine Transform Image for Login System
Q3 Decision Sciences | Pub Date: 2023-05-05 | DOI: 10.30630/joiv.7.2.1546
Ari Setiawan, R. Sigit, Rika Rokhana
These days, the application of image processing in computer vision is becoming more crucial, and some situations require a solution based on computer vision and deep learning. One method continuously developed in deep learning is the Convolutional Neural Network, with MobileNet, EfficientNet, VGG16, and others being widely used architectures. In a CNN-based system, the dataset consists primarily of images; the larger the dataset, the more image storage space is required. Compression via the discrete cosine transform (DCT) technique is one way to address this issue. In the present research, we implement the DCT compression method to work around the system's limited storage space and compare compressed and uncompressed images. All enrolled users were tested five times each, for a total of 150 tests. Based on the testing findings, the size reduction of compressed images relative to uncompressed ones is measured at 25%. The case study presented is face recognition, and the training results indicate that the accuracy for DCT-compressed images ranges from 91.33% to 100%, while the accuracy for uncompressed facial images ranges from 98.17% to 100%. In addition, the accuracy of the proposed CNN architecture has increased to 87.43%, while the accuracy of MobileNet has increased by 16.75%. The accuracy of EfficientNetB1 with noisy-student weights is measured at 74.91%, and the accuracy of EfficientNetB1 with ImageNet weights can reach 100%. Facial biometric authentication using a deep learning algorithm and DCT-compressed images was successfully accomplished with an accuracy of 95.33% and an error of 4.67%.
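A minimal sketch of DCT-based compression of a face image with OpenCV, assuming that only the low-frequency block of DCT coefficients (the top-left quarter of rows and columns) is kept before reconstruction; the paper's exact coefficient selection, quantization, and storage format are not given in the abstract, so this only illustrates the truncation idea.

import cv2
import numpy as np

def dct_compress(gray, keep=0.25):
    # OpenCV's DCT requires even dimensions, so crop a row/column if necessary.
    h, w = gray.shape
    gray = gray[: h - h % 2, : w - w % 2]
    coeffs = cv2.dct(np.float32(gray))
    mask = np.zeros_like(coeffs)
    kh, kw = int(coeffs.shape[0] * keep), int(coeffs.shape[1] * keep)
    mask[:kh, :kw] = 1.0   # low-frequency coefficients carry most of the facial structure
    return np.clip(cv2.idct(coeffs * mask), 0, 255).astype(np.uint8)

img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)   # hypothetical enrollment image
reconstructed = dct_compress(img, keep=0.25)
cv2.imwrite("face_dct.jpg", reconstructed)           # the reconstructed image is what the CNN would see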
Citations: 0
Utilization of Business Analytics by SMEs In Halal Supply Chain Management Transactions
Q3 Decision Sciences | Pub Date: 2023-05-05 | DOI: 10.30630/joiv.7.2.1308
S. Marjudi, Roziyani Setik, R. M. T. Raja Lope Ahmad, W. A. Wan Hassan, A. A. Md Kassim
Halal supply chain management has transformed beyond food and beverage certification. However, the extant literature shows that Halal transaction management still has much to improve in terms of transaction permissibility, with the main gap being that Halal businesses and their transactions are served by systems that keep e-commerce and financial technology data separate within the IT business environment. This study aims to demonstrate the usefulness of managing Halal transactions and their permissibility analysis through a proposed Halal Supply Chain Management Transactions (HSCMT) model and prototype, applying a business analytics approach to integrate e-commerce and financial technology data. The study uses literature analysis to ensure the correct structure of the integrated datasets before modeling transaction permissibility and prototyping the analytics into decision-making analytics. The developed HSCMT prototype uses a payment gateway that can be embedded into a Halal SME owner's e-commerce site. This creates a holistic Halal financial technology (FinTech) transaction permissibility dashboard, increasing the effectiveness of HSCMT for Malaysia Halal SME Owners (MHSO) with an average usability score of 83.67%. Results also indicate that the key basic mechanisms for verifying transactional permissibility are the source of the transaction, the use of the transaction, the transaction flow, and the transaction agreement. Furthermore, these mechanisms must be mapped onto a submodule after transformation and modeling of the transaction dataset. Improvements using multisource data points can be considered in future work, as this research only focuses on local data points from one payment gateway service; this is due to restrictions in data policy when involving overseas supply chains and transaction documentation. This research utilizes available business data through data management, optimization, mining, and visualization to measure performance and drive a company's growth. Business analytics competency can benefit Halal SME players because it provides insights into the permissibility decision-making process.
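The abstract identifies four basic mechanisms for verifying transaction permissibility: source, use, flow, and agreement. The pandas sketch below shows one hedged way such rule checks could be combined into a dashboard-ready flag; the column names, example values, and rules are illustrative assumptions, not the paper's actual model.

import pandas as pd

# Hypothetical transaction export from a payment gateway.
tx = pd.DataFrame({
    "txn_id": [101, 102, 103],
    "source": ["halal_merchant", "halal_merchant", "unverified"],
    "use": ["raw_materials", "interest_payment", "logistics"],
    "flow": ["direct", "direct", "intermediary"],
    "agreement": ["signed", "signed", "missing"],
})

# Illustrative permissibility rules over the four mechanisms named in the abstract.
checks = pd.DataFrame({
    "source_ok": tx["source"].eq("halal_merchant"),
    "use_ok": ~tx["use"].isin(["interest_payment"]),
    "flow_ok": tx["flow"].eq("direct"),
    "agreement_ok": tx["agreement"].eq("signed"),
})
tx["permissible"] = checks.all(axis=1)
print(tx[["txn_id", "permissible"]])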
Citations: 0
Inversed Control Parameter in Whale Optimization Algorithm and Grey Wolf Optimizer for Wrapper-based Feature Selection: A comparative study
Q3 Decision Sciences | Pub Date: 2023-05-05 | DOI: 10.30630/joiv.7.2.1509
Liu Yab, Noorhaniza Wahid, Rahayu A Hamid
The Whale Optimization Algorithm (WOA) and Grey Wolf Optimizer (GWO) are well-performing metaheuristic algorithms used by various researchers to solve feature selection problems. Yet, the slow convergence of the Whale Optimization Algorithm and Grey Wolf Optimizer can degrade feature selection performance and classification accuracy. To overcome this issue, a modified WOA (mWOA) and a modified GWO (mGWO) for wrapper-based feature selection were proposed in this study. The proposed mWOA and mGWO were given a new inversed control parameter, which was expected to give the search agents a larger search area in the early phase of the algorithms and result in faster convergence. The objective of this comparative study is to investigate and compare the effectiveness of the inversed control parameter in the proposed methods against the original algorithms in terms of the number of selected features and the classification accuracy. The proposed methods were implemented in MATLAB, using 12 datasets of different dimensionality from the UCI repository. kNN was chosen as the classifier to evaluate the classification accuracy of the selected features. Based on the experimental results, mGWO did not show significant improvements in feature reduction and maintained accuracy similar to the original GWO. In contrast, mWOA outperformed the original WOA on both criteria, even on high-dimensional datasets. Evaluating the execution time of the proposed methods, utilizing different classifiers, and hybridizing the proposed methods with other metaheuristic algorithms to solve feature selection problems would be future work worth exploring.
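For context, the sketch below shows the standard WOA/GWO control parameter, which decreases linearly from 2 to 0 over the iterations, alongside one possible "inversed" schedule and a common wrapper-based fitness function; the paper's exact inversed formula and fitness weights are not given in the abstract, so those parts are assumptions.

T = 100  # total iterations

def standard_a(t, T=T):
    # Standard WOA/GWO control parameter: decreases linearly from 2 to 0,
    # so exploration (large a) dominates early and exploitation takes over later.
    return 2.0 * (1.0 - t / T)

def inversed_a(t, T=T):
    # One possible "inversed" schedule (assumption; the paper's formula is not in the
    # abstract): keep the parameter large longer to widen the early search area.
    return 2.0 * (1.0 - (t / T) ** 2)

def wrapper_fitness(error_rate, n_selected, n_total, alpha=0.99):
    # Common wrapper-based feature-selection fitness: trade kNN error against subset size.
    return alpha * error_rate + (1.0 - alpha) * (n_selected / n_total)

for t in (0, 25, 50, 75, 100):
    print(t, round(standard_a(t), 3), round(inversed_a(t), 3))
print(wrapper_fitness(error_rate=0.05, n_selected=12, n_total=40))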
Citations: 0