Can EEG-devices differentiate attention values between incorrect and correct solutions for problem-solving tasks?
Pub Date: 2021-08-04 | DOI: 10.1080/24751839.2021.1950319
R. Bitner, N. Le
ABSTRACT The affective state of an individual can be determined using physiological parameters; an important metric that can then be extracted is attention. Compact EEG devices implement algorithms that claim to measure the attention and other affective states of the user, yet no information about these algorithms is publicly available: are these feature-classification algorithms accurate? An experiment was conducted with 23 subjects who used a pedagogical agent to learn the syntax of the programming language Java while their attention was measured by the NeuroSky MindWave Mobile 2. Using a concurrent-validity approach, the measured attention values were compared to band powers as well as to measures of task performance. The results were in part successful and support the claim that the EEG device's attention algorithm does in fact represent a user's attention accurately: the analysis based on raw data captured from the device was consistent with previous literature, while the results relating task performance to attention were inconclusive.
{"title":"Can EEG-devices differentiate attention values between incorrect and correct solutions for problem-solving tasks?","authors":"R. Bitner, N. Le","doi":"10.1080/24751839.2021.1950319","DOIUrl":"https://doi.org/10.1080/24751839.2021.1950319","url":null,"abstract":"ABSTRACT The affective state of an individual can be determined using physiological parameters; an important metric that can then be extracted is attention. Looking more closely at compact EEGs, algorithms have been implemented in such devices that can measure the attention and other affective states of the user. No information about these algorithms is available; are these feature classification algorithms accurate? An experiment was conducted with 23 subjects who utilized a pedagogical agent to learn the syntax of the programming language Java while having their attention measured by the NeuroSky MindWave Mobile 2. Using a concurrent validity approach, the attention values measured were compared to band powers, as well as measures of task performance. The results of the experiment were in part successful and supportive of the claim that the EEG device’s attention algorithm does in fact represent a user’s attention accurately. The results of the analysis based on raw data captured from the device were consistent with previous literature. Inconclusive results were obtained relating to task performance and attention.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"6 1","pages":"121 - 140"},"PeriodicalIF":2.7,"publicationDate":"2021-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1950319","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44879824","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Overview of the challenges and solutions for 5G channel coding schemes
Pub Date: 2021-07-21 | DOI: 10.1080/24751839.2021.1954752
M. Indoonundon, T. P. Fowdur
ABSTRACT 5G is the next generation of mobile communications networks, which will use cutting-edge network technologies to deliver enhanced mobile connectivity. 5G introduces new requirements for channel coding in its three service classes: enhanced mobile broadband (eMBB), ultra-reliable low-latency communications (URLLC) and massive machine-type communications (mMTC). eMBB is expected to keep up with consumers' insatiable demand for high mobile data rates to support data-intensive applications. mMTC will provide connectivity to a massive number of connected devices sending short data packets simultaneously, to support applications such as the Internet of Things (IoT). Finally, URLLC will ensure reliable low-latency connectivity for mission-critical, latency-sensitive applications such as telesurgery. To address these new requirements, several new channel coding schemes are being developed. This review article provides a detailed analysis of the channel coding challenges set out by 5G and a detailed review of existing and emerging solutions. Moreover, simulations are performed to assess the performance of the Low-Density Parity-Check (LDPC) codes and Polar codes used in 5G's eMBB. Directions for future work and new solutions for 5G channel coding are also discussed.
{"title":"Overview of the challenges and solutions for 5G channel coding schemes","authors":"M. Indoonundon, T. P. Fowdur","doi":"10.1080/24751839.2021.1954752","DOIUrl":"https://doi.org/10.1080/24751839.2021.1954752","url":null,"abstract":"ABSTRACT 5G is the next generation of mobile communications networks which will use cutting-edge network technologies to deliver enhanced mobile connectivity. 5G has introduced new requirements for channel coding in its three different service classes which are enhanced mobile broadband (eMBB), ultra-reliable low latency communications (URLLC) and massive machine-type communications (mMTC). eMBB is expected to keep up with consumer’s insatiable demand for high mobile data rates to support data-extensive applications. mMTC will provide connectivity to a massive number of connected devices sending short data packets simultaneously to support applications such as Internet of Things (IoT). Finally, URLLC will ensure reliable low latency connectivity to support mission-critical latency-sensitive applications such as telesurgery. To address these new requirements, several new channel coding schemes are being developed. This review article provides a detailed analysis of the new channel coding challenges set out by 5G. A detailed review of existing and emerging solutions is provided. Moreover, simulation are performed to assess the performances of Low-Density Parity-Check (LDPC) codes and Polar codes used in 5G’s eMBB. Directions for future works and new solutions for 5G channel coding are also discussed.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"460 - 483"},"PeriodicalIF":2.7,"publicationDate":"2021-07-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1954752","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42899842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Basis exchange and learning algorithms for extracting collinear patterns
Pub Date: 2021-07-03 | DOI: 10.1080/24751839.2020.1866335
L. Bobrowski, Paweł Zabielski
ABSTRACT Understanding large data sets is one of the most important and challenging problems of our time. The exploration of genetic data sets composed of high-dimensional feature vectors is a leading example in this context. A better understanding of large, multivariate data sets can be achieved through the exploration and extraction of their structure, and collinear patterns can be an important part of that structure. A collinear (flat) pattern exists in a given set of feature vectors when many of these vectors are located on (or near) some plane in the feature space. Discovered flat patterns can reflect various types of interaction in the explored data set. This paper compares basis exchange algorithms with learning algorithms on the task of flat pattern extraction.
{"title":"Basis exchange and learning algorithms for extracting collinear patterns","authors":"L. Bobrowski, Paweł Zabielski","doi":"10.1080/24751839.2020.1866335","DOIUrl":"https://doi.org/10.1080/24751839.2020.1866335","url":null,"abstract":"ABSTRACT Understanding large data sets is one of the most important and challenging problems in the modern days. Exploration of genetic data sets composed of high dimensional feature vectors can be treated as a leading example in this context. A better understanding of large, multivariate data sets can be achieved through exploration and extraction of their structure. Collinear patterns can be an important part of a given data set structure. Collinear (flat) pattern exists in a given set of feature vectors when many of these vectors are located on (or near) some plane in the feature space. Discovered flat patterns can reflect various types of interaction in an explored data set. The presented paper compares basis exchange algorithms with learning algorithms in the task of flat patterns extraction.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"334 - 349"},"PeriodicalIF":2.7,"publicationDate":"2021-07-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2020.1866335","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42452620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Speeding up Composite Differential Evolution for structural optimization using neural networks
Pub Date: 2021-06-30 | DOI: 10.1080/24751839.2021.1946740
Tran-Hieu Nguyen, Anh-Tuan Vu
ABSTRACT Composite Differential Evolution (CoDE) is categorized as a (µ + λ)-evolutionary algorithm in which each parent produces three trial vectors. Thanks to this, the CoDE algorithm has a strong search capacity; however, producing many offspring increases the computational cost of fitness evaluation. To overcome this problem, neural networks, a powerful machine learning method, are used as surrogate models to rapidly evaluate the fitness of candidates, thereby speeding up the CoDE algorithm. More specifically, in the first phase the CoDE algorithm is implemented as usual, but the fitness values of the produced candidates are saved to a database. Once a sufficient amount of data has been collected, a neural network is trained to predict the constraint-violation degree of candidates. Offspring produced later are evaluated using the trained neural network, and only the best among them is compared with its parent by exact fitness evaluation. In this way, the number of exact fitness evaluations is significantly reduced. The proposed method is applied to three benchmark problems: the 10-bar, 25-bar and 72-bar trusses. The results show that it reduces the computational cost by approximately 60%.
{"title":"Speeding up Composite Differential Evolution for structural optimization using neural networks","authors":"Tran-Hieu Nguyen, Anh-Tuan Vu","doi":"10.1080/24751839.2021.1946740","DOIUrl":"https://doi.org/10.1080/24751839.2021.1946740","url":null,"abstract":"ABSTRACT Composite Differential Evolution (CoDE) is categorized as a (µ + λ)-Evolutionary Algorithm where each parent produces three trials. Thanks to that, the CoDE algorithm has a strong search capacity. However, the production of many offspring increases the computation cost of fitness evaluation. To overcome this problem, neural networks, a powerful machine learning algorithm, are used as surrogate models for rapidly evaluating the fitness of candidates, thereby speeding up the CoDE algorithm. More specifically, in the first phase, the CoDE algorithm is implemented as usual, but the fitnesses of produced candidates are saved to the database. Once a sufficient amount of data has been collected, a neural network is developed to predict the constraint violation degree of candidates. Offspring produced later will be evaluated using the trained neural network and only the best among them is compared with its parent by exact fitness evaluation. In this way, the number of exact fitness evaluations is significantly reduced. The proposed method is applied for three benchmark problems of 10-bar truss, 25-bar truss, and 72-bar truss. The results show that the proposed method reduces the computation cost by approximately 60%.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"6 1","pages":"101 - 120"},"PeriodicalIF":2.7,"publicationDate":"2021-06-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1946740","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45362587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sharing secured data on peer-to-peer applications using attribute-based encryption
Pub Date: 2021-06-25 | DOI: 10.1080/24751839.2021.1941574
Nhan Tam Dang, V. Nguyen, Hai-Duong Le, Marcin Maleszka, Manh Ha Tran
ABSTRACT The strong growth of communication and storage has given rise to a significantly increasing demand for collecting, storing and sharing large amounts of data on networks. This is further amplified by the data-driven market, in which every party wants access to other parties' data. Data is the bedrock of today's technologies and research, especially machine learning and deep learning, and the business value of organizations is massively data-dependent: recent studies and industrial applications apply analytic techniques to exploit data, and Internet users exchange data on social networks or peer-to-peer (P2P) networks. However, sharing secured data is a challenging problem that attracts much attention from researchers. Sharing secured data with a group of users in P2P applications faces the unavailability of peer nodes, which can prevent users from certifying and downloading the protected data; this affects the class of P2P-based online sharing and storage services and customer-to-customer e-commerce applications. This article proposes a solution for sharing secured data in P2P-based applications using blockchain and attribute-based encryption: attribute-based encryption guarantees the sharing of keys among a group of users, while the blockchain guarantees key distribution. We have simulated the proposed solution on a mobile peer-to-peer network providing services for sharing and storing data securely.
{"title":"Sharing secured data on peer-to-peer applications using attribute-based encryption","authors":"Nhan Tam Dang, V. Nguyen, Hai-Duong Le, Marcin Maleszka, Manh Ha Tran","doi":"10.1080/24751839.2021.1941574","DOIUrl":"https://doi.org/10.1080/24751839.2021.1941574","url":null,"abstract":"ABSTRACT The strong growth of communication and storage gives rise to the significantly increasing demand for collecting, storing, and sharing a large amount of data on networks. This is further enhanced by the data-driven market, where everyone wants to access other parties' data. Data is the bedrock of today's technologies and researchers, especially in machine learning and deep learning. The business value of organizations is also massively data-dependent. Recent studies and industry applications can apply analytic techniques for exploiting data, or Internet users can exchange data on social networks or peer-to-peer (P2P) networks. However, sharing secured data is a challenging problem that attracts much attention from researchers. Sharing secure data with a group of users using P2P applications faces the unavailability problem of peer nodes. Thus users cannot certify and download the protected data. This affects a P2P-based application class of sharing and storing online services or customer-to-customer e-commerce applications. This article proposes a solution for sharing secured data on P2P-based applications using blockchain and attribute-based encryption. The attribute-based encryption guarantees sharing keys among a group of users, while blockchain guarantees keys distribution. We have simulated the proposed solution on the mobile peer-to-peer network that provides services for sharing and storing data securely.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"440 - 459"},"PeriodicalIF":2.7,"publicationDate":"2021-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1941574","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41688527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A General Method for mining high-Utility itemsets with correlated measures
Pub Date: 2021-06-14 | DOI: 10.1080/24751839.2021.1937465
N. M. Hung, Tung Nt, Bay Vo
ABSTRACT Discovering high-utility itemsets (HUIs) from a transaction database is one of the important tasks in High-Utility Itemset Mining (HUIM). The discovered HUIs must meet a user-defined minimum utility threshold. Several methods have been proposed to solve the problem efficiently; however, they focus only on exploring and discovering the set of HUIs. This research proposes a more general approach that mines HUIs under any user-specified correlation measure, named the General Method for Correlated High-utility itemset Mining (GMCHM). The proposed approach can discover HUIs that are highly correlated according to the all_confidence and bond measures (and 38 other correlation measures). Evaluations were carried out on standard HUIM datasets such as Accidents, BMS_utility and Connect; the results demonstrate the high effectiveness of GMCHM in terms of running time, memory usage and the number of scanned candidates.
{"title":"A General Method for mining high-Utility itemsets with correlated measures","authors":"N. M. Hung, Tung Nt, Bay Vo","doi":"10.1080/24751839.2021.1937465","DOIUrl":"https://doi.org/10.1080/24751839.2021.1937465","url":null,"abstract":"ABSTRACT Discovering high-utility itemsets from a transaction database is one of the important tasks in High-Utility Itemset Mining (HUIM). The discovered high-utility itemsets (HUIs) must meet a user-defined given minimum utility threshold. Several methods have been proposed to solve the problem efficiently. However, they focused on exploring and discovering the set of HUIs. This research proposes a more generalized approach to mine HUIs using any user-specified correlated measure, named the General Method for Correlated High-utility itemset Mining (GMCHM). This proposed approach has the ability to discover HUIs that are highly correlated, based on the all_confidence and bond measures (and 38 other correlated measures). Evaluations were carried out on the standard datasets for HUIM, such as Accidents, BMS_utility and Connect. The results proved the high effectiveness of GMCHM in terms of running time, memory usage and the number of scanned candidates.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"536 - 549"},"PeriodicalIF":2.7,"publicationDate":"2021-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1937465","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46775201","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An SHO-based approach to timetable scheduling: a case study
Pub Date: 2021-06-08 | DOI: 10.1080/24751839.2021.1935644
Van Du Nguyen, Tram Nguyen
ABSTRACT University timetable scheduling, a problem that all universities around the world face every semester, is NP-hard. It is the task of allocating suitable timeslots and classrooms to various courses while respecting predefined constraints. In the literature, many approaches have been proposed to find feasible timetables; among them, swarm-based algorithms are promising candidates because of their effectiveness and flexibility. This paper proposes an approach to university timetable scheduling based on the Spotted Hyena Optimizer (SHO), a recent swarm-based algorithm inspired by the hunting behaviour of spotted hyenas. A combination of Simulated Annealing (SA) and SHO is also investigated to improve the overall performance of the proposed method. We illustrate the method on a real-world university timetabling problem in Vietnam. Experimental results indicate its efficiency in finding feasible timetables in comparison to other competitive metaheuristics such as Particle Swarm Optimization (PSO).
{"title":"An SHO-based approach to timetable scheduling: a case study","authors":"Van Du Nguyen, Tram Nguyen","doi":"10.1080/24751839.2021.1935644","DOIUrl":"https://doi.org/10.1080/24751839.2021.1935644","url":null,"abstract":"ABSTRACT University timetable scheduling, which is a typical problem that all universities around the world have to face every semester, is an NP-hard problem. It is the task of allocating the right timeslots and classrooms for various courses by taking into account predefined constraints. In the current literature, many approaches have been proposed to find feasible timetables. Among others, swarm-based algorithms are promising candidates because of their effectiveness and flexibility. This paper investigates proposing an approach to university timetable scheduling using a recent novel swarm-based algorithm named Spotted Hyena Optimizer (SHO) which is inspired by the hunting behaviour of spotted hyenas. Then, a combination of SA and SHO algorithms also investigated to improve the overall performance of the proposed method. We also illustrate the proposed method on a real-world university timetabling problem in Vietnam. Experimental results have indicated the efficiency of the proposed method in comparison to other competitive metaheuristic algorithm such as PSO algorithm in finding feasible timetables.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"421 - 439"},"PeriodicalIF":2.7,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1935644","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46772205","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Document similarity for error prediction
Pub Date: 2021-03-10 | DOI: 10.1080/24751839.2021.1893496
Péter Marjai, P. Lehotay-Kéry, A. Kiss
ABSTRACT In today's fast-paced world, the use of networking equipment is ever increasing. These devices log their operations; however, errors can occur that result in the restart of a device, and different errors may be preceded by different patterns in the logs. Our main goal is to predict the upcoming error based on the log lines of the current file. To achieve this, we use document similarity, one of the key concepts of information retrieval and an indicator of how analogous (or different) documents are. In this paper, we study the effectiveness of prediction based on the cosine similarity, Jaccard similarity and Euclidean distance of the rows preceding restarts, using features such as TF-IDF, Doc2Vec and LSH in conjunction with these distance measures. Since networking devices produce large numbers of log files, we use Spark for big-data computing.
{"title":"Document similarity for error prediction","authors":"Péter Marjai, P. Lehotay-Kéry, A. Kiss","doi":"10.1080/24751839.2021.1893496","DOIUrl":"https://doi.org/10.1080/24751839.2021.1893496","url":null,"abstract":"ABSTRACT In today's rushing world, there's an ever-increasing usage of networking equipment. These devices log their operations; however, there could be errors that result in the restart of the given device. There could be different patterns before different errors. Our main goal is to predict the upcoming error based on the log lines of the actual file. To achieve this, we use document similarity. One of the key concepts of information retrieval is document similarity which is an indicator of how analogous (or different) documents are. In this paper, we are studying the effectiveness of prediction based on cosine similarity, Jaccard similarity, and Euclidean distance of rows before restarts. We use different features like TFIDF, Doc2Vec, LSH, and others in conjunction with these distance measures. Since networking devices produce lots of log files, we use Spark for Big data computing.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"407 - 420"},"PeriodicalIF":2.7,"publicationDate":"2021-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1893496","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49431722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A modified version of GoogLeNet for melanoma diagnosis
Pub Date: 2021-03-10 | DOI: 10.1080/24751839.2021.1893495
E. Yılmaz, M. Trocan
ABSTRACT Differential diagnosis of malignant melanoma, which causes more than 75% of deaths from skin lesions, is vital for patients. Artificial-intelligence-based decision-support systems developed for the analysis of medical images contribute to the solution of such problems, and in recent years various deep learning algorithms have been developed for this purpose. In our previous study, we compared the performance of AlexNet, GoogLeNet and ResNet-50 for the differential diagnosis of benign and malignant melanoma on the International Skin Imaging Collaboration: Melanoma Project (ISIC) dataset. In this study, we propose a CNN model obtained by modifying the GoogLeNet architecture and compare its performance with our previous results. For the experiments, we used 19,373 benign and 2197 malignant dermoscopy images obtained from this public archive. We compared results on eight performance metrics: polygon area metric (PAM), classification accuracy (CA), sensitivity (SE), specificity (SP), area under the curve (AUC), kappa (K), F-measure (FM) and time complexity (TC). The proposed CNN achieved the best classification accuracy, 0.9309, and decreased the time complexity of GoogLeNet from 283 min 50 s to 256 min 26 s.
{"title":"A modified version of GoogLeNet for melanoma diagnosis","authors":"E. Yılmaz, M. Trocan","doi":"10.1080/24751839.2021.1893495","DOIUrl":"https://doi.org/10.1080/24751839.2021.1893495","url":null,"abstract":"ABSTRACT Differential diagnosis of malignant melanoma, which is the cause of more than 75% of deaths amongst skin lesions, is vital for patients. Artificial intelligence-based decision support systems developed for the analysis of medical images are in the solution of such problems. In recent years, various deep learning algorithms have been developed to be used for this purpose. In our previous study, we compared the performances of AlexNet, GoogLeNet and ResNet-50 for the differential diagnosis of benign and malignant melanoma on International Skin Imaging Collaboration: Melanoma Project (ISIC) dataset. In this study, we proposed a CNN model by modifying the GoogLeNet algorithm and we compared the performance of this model with the previous results. For the experiments, we used 19,373 benign and 2197 malignant diagnosed dermoscopy images obtained from this public archive. We compared the performance results according to the eight different performance metrics including polygon area metric (PAM), classification accuracy (CA), sensitivity (SE), specificity (SP), area under curve (AUC), kappa (K), F measure metric (FM) and time complexity (TC) measures. According to the results, our proposed CNN achieved the best classification accuracy with 0.9309 and decreased the time complexity of GoogLeNet from 283 min 50 to 256 min 26 s.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"395 - 405"},"PeriodicalIF":2.7,"publicationDate":"2021-03-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1893495","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43177130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of stock values changes using sentiment analysis of stock news headlines
Pub Date: 2021-02-01 | DOI: 10.1080/24751839.2021.1874252
L. Nemes, A. Kiss
ABSTRACT Predicting the values of the stock market, especially the values of worldwide companies, is an interesting and attractive topic. In this article, we cover stock value changes and the prediction of stock values using freshly scraped economic news about the companies, focussing on the headlines of that news. We apply several different tools to the sentiment analysis of the headlines: we consider BERT as the baseline, compare its results with three other tools, VADER, TextBlob and a Recurrent Neural Network (RNN), and compare the sentiment results to the stock changes of the same period. BERT and the RNN were much more accurate and, in contrast to the other two tools, were able to determine emotional values without neutral sections. Comparing these results with the movement of stock market values over the same time periods, we can establish, via sentiment analysis of economic news headlines, the moment at which a change in stock values occurred. We also discovered, via correlation matrices, a significant difference between the models in terms of the effect of emotional values on changes in the value of the stock market.
{"title":"Prediction of stock values changes using sentiment analysis of stock news headlines","authors":"L. Nemes, A. Kiss","doi":"10.1080/24751839.2021.1874252","DOIUrl":"https://doi.org/10.1080/24751839.2021.1874252","url":null,"abstract":"ABSTRACT The prediction and speculation about the values of the stock market especially the values of the worldwide companies are a really interesting and attractive topic. In this article, we cover the topic of the stock value changes and predictions of the stock values using fresh scraped economic news about the companies. We are focussing on the headlines of economic news. We use numerous different tools to the sentiment analysis of the headlines. We consider BERT as the baseline and compare the results with three other tools, VADER, TextBlob, and a Recurrent Neural Network, and compare the sentiment results to the stock changes of the same period. The BERT and RNN were much more accurate, these tools were able to determine the emotional values without neutral sections, in contrast to the other two tools. Comparing these results with the movement of stock market values in the same time periods, we can establish the moment of the change occurred in the stock values with sentiment analysis of economic news headlines. Also we discovered a significant difference between the different models in terms of the effect of emotional values on the change in the value of the stock market by the correlation matrices.","PeriodicalId":32180,"journal":{"name":"Journal of Information and Telecommunication","volume":"5 1","pages":"375 - 394"},"PeriodicalIF":2.7,"publicationDate":"2021-02-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1080/24751839.2021.1874252","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41788841","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}