
International Journal of Information and Communication Technology: Latest Publications

Ni máquina, ni humano ni disponible: Do College Admissions Offices Use Chatbots and Can They Speak Spanish?
Q4 Computer Science Pub Date: 2022-08-22 DOI: 10.51548/joctec-2022-009
Z. W. Taylor, Linda Eguiluz, P. Wheeler
Colleges continue to use technology to connect students to information, but a research gap exists regarding how colleges use a technology that is ubiquitous in the business world: chatbots. Moreover, no work has addressed whether chatbots serve Spanish-speaking students seeking higher education, either through automated (AI) chatbot responses in Spanish or through Spanish-programmed chatbots. This study randomly sampled 331 United States institutions of higher education to learn whether these institutions embed chatbots on their undergraduate admissions websites and whether these chatbots have been programmed to speak Spanish. Results suggest that 21% of institutions (n = 71) embed chatbots into their admissions websites and that only 28% of those chatbots (n = 20) were programmed to provide Spanish-language admissions information. Implications for college access and equity for English learners and L1 Spanish speakers are addressed.
Citations: 1
ADAPTIVE INITIAL CONTOUR AND PARTLY-NORMALIZATION ALGORITHM FOR IRIS SEGMENTATION OF BLURRY IRIS IMAGES
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.5
Shahrizan Jamaludin, A. F. Mohamad Ayob, Syamimi Mohd Norzeli, S. Mohamed
Iris segmentation is the process of isolating the accurate iris region from the eye image for iris recognition. Active contour methods can segment non-ideal and noisy iris images accurately. Nevertheless, it is currently unclear how active contours respond to blurry iris images or motion blur, which presents a significant obstacle in iris segmentation. Investigations of blurry iris images, especially of the initial contour position, are rarely published, so this question must be clarified. Moreover, evolution or convergence speed remains a significant challenge for active contours as they segment the precise iris boundary. Therefore, this study carried out experiments to develop an iris segmentation algorithm that is both accurate and fast, addressing the aforementioned concerns. In addition, the initial contour was explored to clarify its position. To accomplish these goals, the Wiener filter and morphological closing were used for preprocessing and reflection removal. Next, the adaptive initial contour (AIC), δ, and a stopping function were integrated to create the adaptive Chan-Vese active contour (ACVAC) algorithm. Finally, the partly-normalization method for normalization and feature extraction was designed by selecting the most prominent iris features. The findings revealed that the algorithm outperformed the other active contour-based approaches in computational time and segmentation accuracy, and proved that an accurate initial contour position can be established in blurry iris images. This algorithm is significant for solving inaccurate segmentation of blurry iris images.
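The preprocessing stage described above (Wiener filtering followed by morphological closing) can be sketched in plain NumPy. The paper does not give window sizes or parameters, so the 3x3 defaults and function names below are assumptions, not the authors' implementation:

```python
import numpy as np

def wiener_filter(img, k=3, noise_var=None):
    """Classic local mean/variance (adaptive) Wiener filter."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    win = np.lib.stride_tricks.sliding_window_view(padded, (k, k))
    local_mean = win.mean(axis=(-1, -2))
    local_var = win.var(axis=(-1, -2))
    if noise_var is None:
        # common heuristic: estimate noise power as the mean local variance
        noise_var = local_var.mean()
    den = np.maximum(np.maximum(local_var, noise_var), 1e-12)
    gain = np.maximum(local_var - noise_var, 0.0) / den
    return local_mean + gain * (img - local_mean)

def grey_closing(img, k=3):
    """Greyscale morphological closing: dilation followed by erosion."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    def dilate(a):
        p = np.pad(a, pad, mode="edge")
        return np.lib.stride_tricks.sliding_window_view(p, (k, k)).max(axis=(-1, -2))
    def erode(a):
        p = np.pad(a, pad, mode="edge")
        return np.lib.stride_tricks.sliding_window_view(p, (k, k)).min(axis=(-1, -2))
    return erode(dilate(img))
```

In this sketch the Wiener step smooths sensor noise while preserving edges with high local variance, and the closing step removes small dark artefacts in a bright neighbourhood, which is one way such preprocessing supports reflection removal.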
Citations: 2
RECENT TRENDS OF MACHINE LEARNING PREDICTIONS USING OPEN DATA: A SYSTEMATIC REVIEW
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.3
N. Ismail, U. K. Yusof
Machine learning (ML) prediction determinants based on open data (OD) are investigated in this work by examining research trends over ten years. Currently, OD is widely regarded as a crucial trend for improving users' ability to make decisions, particularly given the exponential expansion of social networking sites (SNSs) and open government data (OGD). The purpose of this study was to examine whether the usage of OD in ML prediction techniques has increased, by conducting a systematic literature review (SLR) of these trends. Papers published between 2011 and 2020 in major online scientific databases, including ScienceDirect, Scopus, IEEE Xplore, ACM, and Springer, were identified and analysed. After several selection processes, following the SLR's precise inclusion and exclusion criteria, a total of 302 articles were located, of which only 81 were included. The findings were presented and plotted based on the research questions (RQs). In conclusion, this research could benefit organisations, practitioners, and researchers by providing information on current trends in the implementation of ML prediction using OD: it maps studies based on the designed RQs, reports the most recent growth, and identifies the need for future research based on the findings.
Citations: 0
A MACHINE LEARNING CLASSIFICATION APPROACH TO DETECT TLS-BASED MALWARE USING ENTROPY-BASED FLOW SET FEATURES
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.1
Kinan Keshkeh, A. Jantan, Kamal Alieyan
Transport Layer Security (TLS) based malware is one of the most hazardous malware types, as it relies on encryption to conceal connections. Due to the complexity of TLS traffic decryption, several anomaly-based detection studies have been conducted to detect TLS-based malware using different features and machine learning (ML) algorithms. However, most of these studies used flow features with no feature transformation or relied on inefficient flow feature transformations such as frequency-based periodicity analysis and outlier percentages. This paper introduces TLSMalDetect, a TLS-based malware detection approach that integrates periodicity-independent entropy-based flow set (EFS) features generated by a flow feature transformation technique to solve the flow feature utilization issues in related research. The effectiveness of EFS features was evaluated in two ways: (1) by comparing them to the corresponding outlier-percentage and flow features using four feature importance methods, and (2) by analyzing classification performance with and without EFS features. Moreover, new Transmission Control Protocol features not explored in the literature were incorporated into TLSMalDetect, and their contribution was assessed. The results proved that EFS features of the number of packets sent and received were superior to the related outlier-percentage and flow features and could remarkably increase performance, by up to ~42% in the case of Support Vector Machine accuracy. Furthermore, using the basic features, TLSMalDetect achieved the highest accuracy among the applied ML algorithms, 93.69% with Naïve Bayes (NB). Also, in comparison, TLSMalDetect's Random Forest precision of 98.99% and NB recall of 92.91% exceeded the best relevant findings of previous studies. These comparative results demonstrated TLSMalDetect's ability to detect more malware flows out of the total malicious flows than existing works.
It could also generate more actual alerts from overall alerts than earlier research.
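The abstract does not spell out how an EFS feature is computed, but an entropy-based feature over a flow set conventionally means the Shannon entropy of one per-flow attribute's empirical distribution. A minimal sketch under that assumption follows; the `efs_feature` helper and the `pkts_sent` key are hypothetical names, not the paper's:

```python
import math
from collections import Counter

def shannon_entropy(values):
    """Shannon entropy (in bits) of the empirical distribution of `values`."""
    counts = Counter(values)
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def efs_feature(flows, key):
    # Hypothetical EFS-style feature: entropy of one per-flow attribute
    # (e.g. packets sent) taken across all flows in a flow set.
    return shannon_entropy([flow[key] for flow in flows])

# Example: highly regular (low-entropy) values can hint at automated traffic,
# which is the intuition behind periodicity-independent entropy features.
beaconing = [{"pkts_sent": 4}] * 8            # identical flows -> entropy 0
browsing = [{"pkts_sent": p} for p in (3, 9, 14, 2, 7, 21, 5, 11)]
```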
Citations: 0
COMPARATIVE PERFORMANCE EVALUATION OF EFFICIENCY FOR HIGH DIMENSIONAL CLASSIFICATION METHODS
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.6
F. Okwonu, N. Ahad, N. Ogini, I. Okoloko, W. Z. Wan Husin
This paper aimed to determine the efficiency of classifiers for high-dimensional classification methods. It also investigated whether an extremely low misclassification rate translates into robust efficiency. To ensure an acceptable procedure, a benchmark evaluation threshold (BETH) was proposed as a metric for analyzing the comparative performance of high-dimensional classification methods. A simplified performance metric was derived to show the efficiency of different classification methods. To achieve these objectives, the probabilities of correct classification (PCC), or classification accuracies, reported in five different articles were used to generate the BETH value. Then, a comparative analysis was performed between the BETH value and the well-established PCC value, derived from the confusion matrix. The analysis indicated that the BETH procedure had a minimal misclassification rate, unlike the Optimal method. The results also revealed that as the PCC approached unity, the difference in misclassification rate between the two methods (BETH and PCC) became negligible. The study revealed that the BETH method was invariant to the performance established by the classifiers using the PCC criterion but demonstrated greater robustness and a lower misclassification rate than the PCC method. In addition, the comparative analysis affirmed that the BETH method exhibited more robust efficiency than the Optimal method. The study concluded that a minimum misclassification rate yields robust performance efficiency.
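The BETH formula itself is not given in the abstract, but the well-established PCC value it is compared against is the standard accuracy computed from a confusion matrix. A minimal sketch of that baseline quantity (function names are ours):

```python
import numpy as np

def pcc(confusion):
    """Probability of correct classification (accuracy): the sum of the
    confusion matrix diagonal over the total number of samples."""
    cm = np.asarray(confusion, dtype=float)
    return np.trace(cm) / cm.sum()

def misclassification_rate(confusion):
    """Complement of the PCC, the quantity the paper seeks to minimize."""
    return 1.0 - pcc(confusion)
```

For a binary confusion matrix `[[40, 10], [5, 45]]`, the PCC is (40 + 45) / 100 = 0.85 and the misclassification rate is 0.15.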
Citations: 1
IMPROVING E-COMMERCE APPLICATION THROUGH SENSE OF AGENCY OF A CALIBRATED INTERACTIVE VR APPLICATION
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.2
Nurul Aiman Abdul Rahim, M. A. Norasikin, Z. Maksom
Virtual Reality (VR) technologies create and control a virtual world distinct from the actual environment, and this contributes to the feeling of control known as the sense of agency (SoA). The SoA arises from the comparison between the sensory consequences of one's action predicted from the efference copy and the real sensory effects. However, the size representation of objects differs between the physical and virtual worlds due to certain technical limitations; for example, a VR application's virtual hand may not reflect the user's actual hand size. This limitation lowers the quality of perception and the SoA in digital applications. Here, we propose a proof-of-concept interactive e-commerce application that incorporates VR capability and a size calibration mechanism. The mechanism uses a calibration method based on the reciprocal scale factor from the virtual object to its real counterpart. A study of the SoA focusing on user perception and interaction was conducted. The proposed method was tested on twenty-two participants who are also online shopping users. Nearly half of the participants (45%) buy products online frequently, making at least one transaction per day. The outcome indicates that our proposed method improves user perception and interaction by 47% compared to a conventional e-commerce application with static texts and images. Our proposed method is rudimentary yet effective and can be easily implemented in any digital field.
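A minimal sketch of the calibration idea, under one plausible reading of the "reciprocal scale factor": if the rendered virtual hand measures v units where the user's real hand measures r, the virtual model is rescaled by r / v so the two match. The function names and interface are ours, not the paper's:

```python
def calibration_factor(virtual_size, real_size):
    """Factor to apply to the virtual model so its rendered size matches the
    real counterpart (assumed reading of the reciprocal scale factor)."""
    if virtual_size <= 0 or real_size <= 0:
        raise ValueError("measurements must be positive")
    return real_size / virtual_size

def calibrate(virtual_size, real_size):
    """Apply the factor: the calibrated virtual size equals the real size."""
    return virtual_size * calibration_factor(virtual_size, real_size)
```

For example, a virtual hand rendered at 22 cm for a user whose real hand spans 18 cm would be scaled by 18 / 22 ≈ 0.818 before interaction begins.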
Citations: 0
IMAGE-BASED OIL PALM LEAVES DISEASE DETECTION USING CONVOLUTIONAL NEURAL NETWORK
Q4 Computer Science Pub Date: 2022-07-17 DOI: 10.32890/jict2022.21.3.4
Jia Heng Ong, P. Ong, Kiow Lee Woon
Over the years, numerous studies have been conducted on the integration of computer vision and machine learning in plant disease detection. However, conventional machine learning methods often require the contour segmentation of the infected region from the entire leaf region and the manual extraction of different discriminative features before classification models can be developed. In this study, deep learning models, specifically the AlexNet convolutional neural network (CNN) and the combination of AlexNet and a support vector machine (AlexNet-SVM), which overcome the limitation of handcrafted feature representation, were implemented for oil palm leaf disease identification. Images of healthy and infected leaf samples were collected, resized, and renamed before model training. These images were used directly to fit the classification models, without the segmentation and feature extraction required by conventional machine learning methods. The optimal architectures of the AlexNet CNN and AlexNet-SVM models were then determined and subsequently applied to oil palm leaf disease identification. Comparative studies showed that the overall performance of the AlexNet CNN model exceeded that of the AlexNet-SVM-based classifier.
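The collect-resize-rename preparation described above can be sketched in plain NumPy. The paper does not state its target resolution or naming scheme, so the 227x227 size (the standard AlexNet input) and the filename format below are assumptions:

```python
import numpy as np

def resize_nearest(img, out_h=227, out_w=227):
    """Nearest-neighbour resize of an HxW(xC) image array to the assumed
    AlexNet input resolution."""
    h, w = img.shape[:2]
    rows = np.arange(out_h) * h // out_h   # source row for each output row
    cols = np.arange(out_w) * w // out_w   # source column for each output column
    return img[rows][:, cols]

def renamed(label, index, ext="jpg"):
    # Hypothetical naming scheme for the collected leaf images:
    # <class label>_<zero-padded index>.<extension>
    return f"{label}_{index:04d}.{ext}"
```

A 640x480 leaf photo passed through `resize_nearest` comes out as a 227x227 array ready for the network input, and `renamed("healthy", 7)` yields a consistent class-labelled filename.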
多年来,人们对计算机视觉和机器学习在植物病害检测中的结合进行了大量的研究。然而,这些传统的机器学习方法往往需要从整个叶片区域中对感染区域进行轮廓分割,并人工提取不同的判别特征,然后才能开发分类模型。本研究利用深度学习模型,即AlexNet卷积神经网络(CNN)和AlexNet与支持向量机(AlexNet- svm)的结合,克服了特征表示手工制作的局限性,实现了油棕叶病的识别。在模型训练之前,收集健康和感染叶片样本的图像,调整大小并重新命名。这些图像直接用于拟合分类模型,不需要像模型那样进行分割和特征提取,也不需要像传统的机器学习方法那样进行分割和特征提取。然后确定AlexNet CNN和AlexNet- svm模型的最优架构,并将其应用于油棕叶病的识别。对比研究表明,AlexNet CNN模型的整体性能优于基于AlexNet- svm的分类器。
{"title":"IMAGE-BASED OIL PALM LEAVES DISEASE DETECTION USING CONVOLUTIONAL NEURAL NETWORK","authors":"Jia Heng Ong, P. Ong, Kiow Lee Woon","doi":"10.32890/jict2022.21.3.4","DOIUrl":"https://doi.org/10.32890/jict2022.21.3.4","url":null,"abstract":"Over the years, numerous studies have been conducted on the integration of computer vision and machine learning in plant disease detection. However, these conventional machine learning methods often require the contour segmentation of the infected region from the entire leaf region and the manual extraction of different discriminative features before the classification models can be developed. In this study, deep learning models, specifically, the AlexNet convolutional neural network (CNN) and the combination of AlexNet and support vector machine (AlexNet-SVM), which overcome the limitation of handcrafting of feature representation were implemented for oil palm leaf disease identification. The images of healthy and infected leaf samples were collected, resized, and renamed before the model training. These images were directly used to fit the classification models, without the need for segmentation and feature extraction as in models, without the need for segmentation and feature extraction as in the conventional machine learning methods. 
The optimal architecture of AlexNet CNN and AlexNet-SVM models were then determined and subsequently applied for the oil palm leaf disease identification.Comparative studies showed that the overall performance of the AlexNet CNN model outperformed AlexNet-SVM-based classifier.","PeriodicalId":39396,"journal":{"name":"International Journal of Information and Communication Technology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-07-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90681580","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
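The two-stage AlexNet-SVM idea — a convolutional feature extractor feeding a linear SVM instead of handcrafted descriptors — can be sketched in miniature. The following is a hedged illustration, not the paper's implementation: a single fixed 3×3 kernel with global pooling stands in for AlexNet's convolutional stack, a hand-rolled Pegasos sub-gradient method stands in for a library SVM, and the "healthy"/"infected" leaf images are synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv_features(img, kernel):
    """Valid 2-D cross-correlation, ReLU, then crude global pooling.

    One fixed kernel stands in for a learned convolutional stack: the point
    is only that features are produced without hand-crafted descriptors.
    """
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    out = np.maximum(out, 0.0)                    # ReLU
    return np.array([out.mean(), out.max()])      # 2-D pooled feature vector

def pegasos_svm(X, y, lam=0.01, epochs=200):
    """Train a linear SVM with the Pegasos sub-gradient method; y in {-1, +1}."""
    w, b, t = np.zeros(X.shape[1]), 0.0, 0
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w + b) < 1:         # margin violated -> hinge step
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:                                  # only shrink (regularisation)
                w = (1 - eta * lam) * w
    return w, b

def make_leaf(infected):
    """Synthetic stand-in: dark background, infected leaves get a bright lesion."""
    img = rng.normal(0.2, 0.05, (16, 16))
    if infected:
        img[4:8, 4:8] += 1.0
    return img

kernel = np.ones((3, 3)) / 9.0                     # fixed blur filter
X = np.array([conv_features(make_leaf(i % 2 == 1), kernel) for i in range(40)])
y = np.array([1 if i % 2 == 1 else -1 for i in range(40)])

w, b = pegasos_svm(X, y)
accuracy = float((np.sign(X @ w + b) == y).mean())
```

The lesion raises both the pooled mean and, much more strongly, the pooled max, so the two classes are linearly separable in this toy feature space.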
Citations: 1
A Hybrid K-Means Hierarchical Algorithm for Natural Disaster Mitigation Clustering
Q4 Computer Science Pub Date : 2022-04-07 DOI: 10.32890/jict2022.21.2.2
Abdurrakhman Prasetyadi, Budi Nugroho, A. Tohari
Cluster methods such as k-means have been widely used to group areas with a relatively equal number of disasters in order to determine areas prone to natural disasters. Nevertheless, it is difficult to obtain a homogeneous clustering result with the k-means method because it is sensitive to the random selection of cluster centers. This paper presents the results of a study that applied a proposed hybrid approach, combining the k-means algorithm with hierarchical clustering, to datasets on the anticipation level of natural disaster mitigation in Indonesia. The study also added keyword and disaster-type fields to provide additional information for a better clustering process. The clustering process produced three clusters for the anticipation level of natural disaster mitigation. Based on validation by experts, 67 districts/cities (82.7%) fell into Cluster 1 (low anticipation), nine districts/cities (11.1%) were classified into Cluster 2 (medium), and the remaining five districts/cities (6.2%) were categorized in Cluster 3 (high anticipation). Analysis of the silhouette coefficient showed that the hybrid algorithm provided relatively homogeneous clustering results. Furthermore, applying the hybrid algorithm to the keyword segment and the disaster type produced homogeneous clustering, as indicated by the calculated purity coefficient and total purity values. Therefore, the proposed hybrid algorithm can provide relatively homogeneous clustering results in natural disaster mitigation.
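The hybrid idea — letting an agglomerative hierarchy choose the initial centers so that k-means no longer depends on a random selection — can be sketched as follows. This is a minimal illustration on synthetic 2-D data, not the paper's implementation: centroid-linkage agglomeration, the k-means loop, and the silhouette coefficient are all hand-rolled in NumPy.

```python
import numpy as np

def hierarchical_init(X, k):
    """Agglomerative (centroid-linkage) merging down to k clusters.

    The resulting centroids seed k-means, removing its sensitivity to a
    random choice of initial centers.
    """
    clusters = [[i] for i in range(len(X))]
    while len(clusters) > k:
        cents = [X[c].mean(axis=0) for c in clusters]
        best = None
        for a in range(len(clusters)):          # find the closest pair of clusters
            for b in range(a + 1, len(clusters)):
                d = np.linalg.norm(cents[a] - cents[b])
                if best is None or d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a] += clusters[b]
        del clusters[b]
    return np.array([X[c].mean(axis=0) for c in clusters])

def kmeans(X, centers, iters=50):
    """Standard Lloyd iterations from the given (hierarchy-chosen) centers."""
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(len(centers))])
    return labels, centers

def silhouette(X, labels):
    """Mean silhouette coefficient; values near 1 indicate homogeneous clusters."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    s = []
    for i in range(len(X)):
        same = labels == labels[i]
        a = D[i, same].sum() / max(same.sum() - 1, 1)          # mean intra-cluster
        b = min(D[i, labels == c].mean()                        # nearest other cluster
                for c in set(labels.tolist()) if c != labels[i])
        s.append((b - a) / max(a, b))
    return float(np.mean(s))

rng = np.random.default_rng(1)
# Three well-separated blobs standing in for low/medium/high anticipation groups.
X = np.vstack([rng.normal(c, 0.3, (20, 2)) for c in ((0, 0), (5, 0), (0, 5))])
labels, centers = kmeans(X, hierarchical_init(X, 3))
score = silhouette(X, labels)
```

On well-separated blobs the deterministic seeding recovers all three groups and the silhouette coefficient is close to 1.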
Citations: 0
Opinion Triplet Extraction for Aspect-Based Sentiment Analysis Using Co-Extraction Approach
Q4 Computer Science Pub Date : 2022-04-07 DOI: 10.32890/jict2022.21.2.5
Rifo Ahmad Genadi, M. L. Khodra
In aspect-based sentiment analysis, tasks are diverse and consist of aspect term extraction, aspect categorization, opinion term extraction, sentiment polarity classification, and relation extraction between aspect and opinion terms. These tasks are generally carried out sequentially using more than one model. However, this approach is inefficient and likely to reduce the model's performance due to errors accumulated in previous processes. The co-extraction approach with the Dual crOss-sharEd RNN (DOER) and span-based multitask learning achieved better performance than pipelined approaches on English review data. Therefore, this research focuses on adapting the co-extraction approach, in which aspect terms, opinion terms, and sentiment polarity are extracted simultaneously from review texts. The co-extraction approach was adapted by modifying the original frameworks to perform the previously unhandled subtask of obtaining the opinion triplet. Furthermore, the output layer of these frameworks was modified and trained using a collection of Indonesian-language hotel reviews. The adaptation was conducted by testing the output layer topology for aspect and opinion term extraction, as well as variations in the type of recurrent neural network cells and the model hyperparameters used, and then analysing the results to reach a conclusion. The two proposed frameworks were able to carry out opinion triplet extraction and achieved decent performance. The DOER framework achieved better performance than the baselines on aspect and opinion term extraction tasks.
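The final assembly of opinion triplets from per-token predictions can be illustrated without a trained model. The sketch below is an assumption-laden toy, not the paper's method: it takes BIO tag sequences for aspects and opinions plus one sentence-level polarity, and pairs every aspect span with every opinion span, whereas real co-extraction models such as DOER predict which spans actually pair up.

```python
def bio_spans(tags, kind):
    """Decode (start, end) spans (end-exclusive) for one entity kind from BIO tags."""
    spans, start = [], None
    for i, t in enumerate(tags + ["O"]):          # sentinel closes a trailing span
        if t == "B-" + kind:                      # a new span starts (closing any open one)
            if start is not None:
                spans.append((start, i))
            start = i
        elif t != "I-" + kind or start is None:   # anything that ends the open span
            if start is not None:
                spans.append((start, i))
            start = None
    return spans

def make_triplets(tokens, aspect_tags, opinion_tags, sentiment):
    """Assemble (aspect, opinion, polarity) triplets from decoded spans.

    Toy pairing: cross product of all aspect and opinion spans under one
    sentence-level polarity.
    """
    text = lambda s: " ".join(tokens[s[0]:s[1]])
    return [(text(a), text(o), sentiment)
            for a in bio_spans(aspect_tags, "ASP")
            for o in bio_spans(opinion_tags, "OPI")]

# Hypothetical hotel-review sentence with gold-standard tags.
tokens       = ["the", "hotel", "staff", "were", "very", "friendly"]
aspect_tags  = ["O", "B-ASP", "I-ASP", "O", "O", "O"]
opinion_tags = ["O", "O", "O", "O", "B-OPI", "I-OPI"]
triplets = make_triplets(tokens, aspect_tags, opinion_tags, "positive")
# triplets == [("hotel staff", "very friendly", "positive")]
```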
Citations: 1
Selective Segmentation Model for Vector-Valued Images
Q4 Computer Science Pub Date : 2022-04-07 DOI: 10.32890/jict2022.21.2.1
Noor Ain Syazwani Mohd Ghani, A. K. Jumaat
One of the most important steps in image processing and computer vision for image analysis is segmentation, which can be classified into global and selective segmentation. Global segmentation models can segment whole objects in an image. Unfortunately, these models are unable to segment a specific object that is required for extraction. To overcome this limitation, the selective segmentation model, which is capable of extracting a particular object or region in an image, must be prioritised. Recent selective segmentation models have been shown to be effective in segmenting greyscale images. Nevertheless, if the input is vector-valued, i.e. identified as a colour image, these models simply discard the colour information by converting the image into a greyscale format. Colour plays an important role in the interpretation of object boundaries within an image, as it helps to provide a more detailed description of the scene's objects. Therefore, in this research, a model for the selective segmentation of vector-valued images is proposed by combining concepts from existing models. The finite difference method was used to solve the resulting Euler-Lagrange (EL) partial differential equation of the proposed model. The accuracy of the proposed model's segmentation output was then assessed using visual observation as well as two similarity indices, namely the Jaccard (JSC) and Dice (DSC) similarity coefficients. Experimental results demonstrated that the proposed model is capable of successfully segmenting a specific object in vector-valued images. Future research in this area can be extended to three-dimensional modelling.
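The two similarity indices used for evaluation, JSC and DSC, are straightforward to compute on binary segmentation masks. A minimal NumPy version follows; the 8×8 example masks are hypothetical, not taken from the paper's data.

```python
import numpy as np

def jaccard(pred, truth):
    """JSC = |A ∩ B| / |A ∪ B| for binary masks; 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    union = np.logical_or(pred, truth).sum()
    return float(np.logical_and(pred, truth).sum() / union) if union else 1.0

def dice(pred, truth):
    """DSC = 2|A ∩ B| / (|A| + |B|); related to JSC by DSC = 2J / (1 + J)."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    total = pred.sum() + truth.sum()
    return float(2 * np.logical_and(pred, truth).sum() / total) if total else 1.0

# Hypothetical masks: a 16-pixel ground-truth square and a 12-pixel prediction
# lying entirely inside it.
truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1
pred  = np.zeros((8, 8), dtype=int); pred[3:6, 2:6] = 1
j, d = jaccard(pred, truth), dice(pred, truth)
# intersection = 12, union = 16  ->  j = 0.75,  d = 24/28 = 6/7
```

Note that DSC is always at least as large as JSC, so the two indices rank segmentations identically; reporting both is a convention rather than an independent check.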
Citations: 3