The basic tool in the analytic hierarchy process (AHP) is the complete judgment matrix. To address the weakness of the AHP in determining weights in a comprehensive evaluation system, this paper proposes a PSO-AHP model based on particle swarm optimization (PSO), a meta-heuristic algorithm. The model was used to solve for the indicator weights in the evaluation system for AI education in primary and secondary schools in Fujian Province and was compared with the genetic algorithm and the war strategy optimization algorithm. The comparison shows that PSO-AHP is the most effective of the three algorithms, improving indicator consistency by about 30%. All of the algorithms are effective in addressing the problem that, once the judgment matrix is given in the AHP, the weights and indicator consistency cannot be improved further. Finally, the results were tested with Friedman statistics to demonstrate the viability of the proposed algorithm.
{"title":"The Construction and Optimization of an AI Education Evaluation Indicator Based on Intelligent Algorithms","authors":"Yuansheng Zeng, Xing Xu","doi":"10.4018/ijcini.315275","DOIUrl":"https://doi.org/10.4018/ijcini.315275","url":null,"abstract":"The basic tool in the analytic hierarchy process (AHP) is the complete judgment matrix. To address the weakness of the AHP in determining weight in the comprehensive evaluation system, the particle swarm optimization (PSO)-AHP model proposed in this paper is based on the PSO in the meta-heuristic algorithm. The model was used to solve the indicator weights in the evaluation system of AI education in primary and secondary schools in Fujian Province and was compared with the genetic algorithm and war strategy optimization algorithm. From the comparison results, the PSO-AHP optimization is more effective among the three algorithms, and the indicator consistency can be improved by about 30%. They are both effective in solving the problem that once the judgment matrix is given in the AHP, the weights and indicator consistency cannot be improved. Finally, the results were tested by Friedman statistics to prove the viability of the proposed algorithm.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42515587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
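As an illustrative sketch of the general idea (not the paper's implementation), a plain PSO can search for a normalized weight vector that minimizes the inconsistency of a fixed AHP judgment matrix. The matrix values, swarm settings, and error measure below are all hypothetical:

```python
import random

# Hypothetical 3x3 AHP judgment matrix (pairwise importance ratios).
A = [[1.0, 3.0, 5.0],
     [1.0 / 3.0, 1.0, 2.0],
     [1.0 / 5.0, 1.0 / 2.0, 1.0]]
n = len(A)

def consistency_error(w):
    # Mean deviation of A @ w from n * w; zero for a perfectly consistent matrix.
    return sum(abs(sum(A[i][j] * w[j] for j in range(n)) - n * w[i])
               for i in range(n)) / n

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

def pso_weights(swarm=30, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [normalize([rng.random() + 1e-6 for _ in range(n)]) for _ in range(swarm)]
    vel = [[0.0] * n for _ in range(swarm)]
    pbest = [p[:] for p in pos]
    pbest_f = [consistency_error(p) for p in pos]
    gi = min(range(swarm), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[gi][:], pbest_f[gi]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(n):
                r1, r2 = rng.random(), rng.random()
                # Standard PSO velocity update: inertia + cognitive + social terms.
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(max(pos[i][d] + vel[i][d], 1e-6), 1.0)
            pos[i] = normalize(pos[i])  # keep weights on the simplex
            f = consistency_error(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest

weights = pso_weights()
```

Unlike the classical eigenvector method, the weights here remain adjustable after the judgment matrix is fixed, which is the property the PSO-AHP model exploits.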
The yacht industry is one of the leading industries used to guide growth in residents' consumption. This study analyzes the evolving spatial pattern of yacht clubs in the United States from 1900 to 2017, aiming to trace their developmental trajectory. The study finds that: 1) Yacht clubs in the United States are clustered and unevenly distributed, with their concentration spreading from the northeastern part of the country to the western and southern regions. 2) The factors driving the development of yacht clubs in the United States changed over time. The state ship- and boat-building industry was the main driver in phase I (before 1900), and the state steel industry in phase II (1900-1950). In phase III (1950-2000), state tourism GDP became the main driver, and in phase IV (2000-2017), state GDP and state ocean tourism and recreation GDP became the main factors. This study enriches the yacht-tourism literature by clarifying the temporal-spatial pattern of yacht clubs.
{"title":"Developmental Trajectory of the American Yacht Clubs: Using Temporal-Spatial Analysis and Regression Model","authors":"Wanxin Chen, Xiao Chen","doi":"10.4018/ijcini.301205","DOIUrl":"https://doi.org/10.4018/ijcini.301205","url":null,"abstract":"The yacht industry is one of the leading industries used to guide residents’ increase in consumption. This study analyzes the evolving spatial pattern of yacht clubs in the United States from 1900-2017, aiming to explore the developmental trajectory of yacht clubs in the United States. This study finds that: 1) Yacht clubs in the United States clustered aggregately and unevenly. The concentration of yacht clubs ranges from the northeastern part of the United States to the western and southern regions. 2) The driving factors influencing the development of yacht clubs in the United States changed along with time. The state ship and boat building industry was the main driving factors in phase I (before 1900). The state steel industry was the main driver in phase II (1900-1950). In phase III (1950-2000), state tourism GDP became the main driver, and in phase IV (2000-2017), state GDP and state ocean tourism and recreation GDP became the main factors. This study enriches the literature in the area of yacht tourism in terms of understanding the temporal-spatial pattern of yacht clubs.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80279950","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To improve the accuracy of image classification, an improved model is proposed. A shortcut is added to GoogLeNet Inception v1, and several shortcut variants are given: GRSN1_2, GRSN1_3, and GRSN1_4. In these variants, the information of the input layer is passed directly to each subsequent layer through the shortcut. The improved model combines the advantage of multi-size, small convolution kernels within the same layer with the ability of shortcuts to reduce information loss. Meanwhile, as the number of inception blocks increases, the number of channels is increased to deepen feature extraction. The GRSN, GRSN1_2, GRSN1_3, GRSN1_4, GoogLeNet, and ResNet models were compared on the CIFAR-10, CIFAR-100, and MNIST datasets. The experimental results show that the proposed model improves on ResNet by 3.07% on CIFAR-10 and 2.08% on CIFAR-100, and on GoogLeNet by 17.69% on CIFAR-10 and 28.47% on CIFAR-100.
{"title":"Improved Model Based on GoogLeNet and Residual Neural Network ResNet","authors":"Xuehua Huang","doi":"10.4018/ijcini.313442","DOIUrl":"https://doi.org/10.4018/ijcini.313442","url":null,"abstract":"To improve the accuracy of image classification, a kind of improved model is proposed. The shortcut is added to GoogLeNet inception v1 and several other ways of shortcut are given, and they are GRSN1_2, GRSN1_3, GRSN1_4. Among them, the information of the input layer is directly output to each subsequent layer in the form of shortcut. The new improved model has the advantages of multi-size and small convolution kernel in the same layer in the network and the advantages of shortcut to reduce information loss. Meanwhile, as the number of inception blocks increases, the number of channels is increased to deepen the extraction of information. The GRSN, GRSN1_2, GRSN1_3, GRSN1_4, GoogLeNet, and ResNet models were compared on cifar10, cifar100, and mnist datasets. The experimental results show that the proposed model has 3.07% improved to ResNet on data set cifar10, 2.08% on data set cifar100, 17.69% improved to GoogLeNet on data set cifar10, 28.47% on data set cifar100.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"70451802","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
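The identity-shortcut idea the model builds on can be shown in a few lines of plain Python (a hedged toy, not the actual GRSN architecture): with a degenerate all-zero transform, the plain block loses the input entirely, while the residual block passes it through unchanged.

```python
def relu_vec(v):
    return [max(x, 0.0) for x in v]

def matvec(W, v):
    # Dense layer as a matrix-vector product.
    return [sum(w * x for w, x in zip(row, v)) for row in W]

def plain_block(x, W1, W2):
    # Two-layer transform without a shortcut.
    return relu_vec(matvec(W2, relu_vec(matvec(W1, x))))

def residual_block(x, W1, W2):
    # Identity shortcut: the input is added back to the transform's output,
    # so information from earlier layers reaches later layers directly.
    return [a + b for a, b in zip(plain_block(x, W1, W2), x)]

d = 4
x = [0.5, -1.0, 2.0, 0.25]
Wz = [[0.0] * d for _ in range(d)]  # degenerate weights that destroy the signal

lost = plain_block(x, Wz, Wz)      # all zeros: the input is gone
kept = residual_block(x, Wz, Wz)   # equals x: the shortcut preserved it
```

This information-preservation property is what the abstract refers to as the shortcut "reducing information loss".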
Meetings are one of the most common collaboration formats for complex problem-solving (CPS). This research aims to formulate cognitive-oriented guidelines for productive synchronous CPS discussions. The study proposes a method to analyze the cognitive process and identifies the cognitive processes associated with better CPS discussions. A conversation-analysis method was developed, and two indicators, the source–outcome retrieval ratio and the count of overlapped solution utterances, were proposed to evaluate the efficiency and effectiveness of CPS discussions. Sixteen experimental CPS discussions were analyzed using this method. Correlation analysis was used to ascertain the cognitive features of CPS discussions at different levels of effectiveness and confirmed the applicability and reliability of the proposed methods. The results revealed that a good CPS discussion includes regular progress summaries, a discussion conclusion, and high utilization of cognitive sources.
{"title":"An Empirical Investigation of the Underlying Cognitive Process in Complex Problem Solving: A Proposal of Problem-Solving Discussion Performance Evaluation Methods","authors":"Yingting Chen, T. Kanno, K. Furuta","doi":"10.4018/ijcini.301204","DOIUrl":"https://doi.org/10.4018/ijcini.301204","url":null,"abstract":"Meetings are one of the most common collaboration formats for complex problem-solving (CPS). This research aims to formulate cognitive-oriented guidelines for productive synchronous CPS discussions. The study proposes a method to analyze the cognitive process and identifies the cognitive process associated with better CPS discussions. A conversation-analysis method was developed. Two indicators—source–outcome retrieval ratio and count of overlapped solution utterances—were proposed to evaluate the CPS discussion’s efficiency and effectiveness. Sixteen experimental CPS discussions were analyzed using this method. Correlation coefficients were applied to ascertain the cognitive features in CPS discussions with different levels of effectiveness and confirmed the applicability and reliability of the proposed methods. The results revealed that a good CPS discussion includes a regular progress summary, discussion conclusion, and high utilization of cognitive sources.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88881520","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational models of emotion (CMEs) are software systems designed to emulate specific aspects of the human emotion process. The underlying components of CMEs interact with cognitive components of cognitive agent architectures to produce realistic behaviors in intelligent agents. However, in contemporary CMEs, the interaction between affective and cognitive components occurs in an ad hoc manner, which leads to difficulties when new affective or cognitive components must be added to a CME. This paper presents a framework that helps CMEs take into account the cognitive information generated by the cognitive components implemented in cognitive agent architectures. The framework is designed to let researchers define how cognitive information biases the internal workings of affective components. It is inspired by software interoperability practices: it enables the communication and interpretation of cognitive information and standardizes the cognitive-affective communication process by providing semantic communication channels used to modulate the affective mechanisms of CMEs.
{"title":"An Interoperable Framework for Computational Models of Emotion","authors":"Enrique Osuna, Sergio Castellanos, Jonathan-Hernando Rosales, Luis-Felipe Rodríguez","doi":"10.4018/ijcini.296257","DOIUrl":"https://doi.org/10.4018/ijcini.296257","url":null,"abstract":"Computational models of emotion (CMEs) are software systems designed to emulate specific aspects of the human emotions process. The underlying components of CMEs interact with cognitive components of cognitive agent architectures to produce realistic behaviors in intelligent agents. However, in contemporary CMEs, the interaction between affective and cognitive components occurs in ad-hoc manner, which leads to difficulties when new affective or cognitive components should be added in the CME. This paper presents a framework that facilitates taking into account in CMEs the cognitive information generated by cognitive components implemented in cognitive agent architectures. The framework is designed to allow researchers define how cognitive information biases the internal workings of affective components. This framework is inspired in software interoperability practices to enable communication and interpretation of cognitive information and standardize the cognitive-affective communication process by ensuring semantic communication channels used to modulate affective mechanisms of CMEs","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78523279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
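One way to picture such a standardized cognitive-affective channel is a small publish-subscribe sketch. Everything below is hypothetical (the message schema, class names, and clamping rule are illustrative, not the framework's actual design):

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical message schema: a shared "semantic channel" through which
# cognitive components publish information that affective components consume.
@dataclass
class CognitiveMessage:
    source: str          # e.g. "perception", "memory"
    concept: str         # what the information is about
    valence_bias: float  # how it should bias appraisal, in [-1, 1]

class SemanticChannel:
    def __init__(self) -> None:
        self._subscribers: List[Callable[[CognitiveMessage], None]] = []

    def subscribe(self, handler: Callable[[CognitiveMessage], None]) -> None:
        self._subscribers.append(handler)

    def publish(self, msg: CognitiveMessage) -> None:
        for handler in self._subscribers:
            handler(msg)

class AppraisalComponent:
    # Toy affective component whose internal state is modulated by
    # cognitive information arriving on the channel.
    def __init__(self, channel: SemanticChannel) -> None:
        self.valence = 0.0
        channel.subscribe(self.on_cognitive_info)

    def on_cognitive_info(self, msg: CognitiveMessage) -> None:
        self.valence = max(-1.0, min(1.0, self.valence + msg.valence_bias))

channel = SemanticChannel()
appraisal = AppraisalComponent(channel)
channel.publish(CognitiveMessage("perception", "threat", -0.6))
channel.publish(CognitiveMessage("memory", "safe_place", 0.2))
```

The point of the indirection is that new cognitive or affective components only need to speak the shared message schema, rather than knowing each other's internals.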
Artificial intelligence is becoming increasingly attractive for solving nontrivial problems, including the well-known real-time scheduling (RTS) problem for embedded systems (ES). The latter is considered a hard multi-objective optimization problem because it must simultaneously optimize three conflicting objectives: guaranteeing task deadlines, reducing energy consumption, and enhancing reliability. In this paper, we first present the background needed to understand the RTS problem in the context of ES, then present our enriched taxonomies of real-time, energy-aware, and fault-tolerance-aware scheduling algorithms for ES. After that, we survey the most pertinent works in the literature that apply AI methods to the RTS problem for ES, notably constraint programming, game theory, machine learning, fuzzy logic, artificial immune systems, cellular automata, evolutionary algorithms, multi-agent systems, and swarm intelligence. We end this survey with a discussion highlighting the main challenges and future directions.
{"title":"AI-Based Methods to Resolve Real-Time Scheduling for Embedded Systems: A Review","authors":"Fateh Boutekkouk","doi":"10.4018/ijcini.290308","DOIUrl":"https://doi.org/10.4018/ijcini.290308","url":null,"abstract":"Artificial Intelligence is becoming more attractive to resolve nontrivial problems including the well known real time scheduling (RTS) problem for Embedded Systems (ES). The latter is considered as a hard multi-objective optimization problem because it must optimize at the same time three key conflictual objectives that are tasks deadlines guarantee, energy consumption reduction and reliability enhancement. In this paper, we firstly present the necessary background to well understand the problematic of RTS in the context of ES, then we present our enriched taxonomies for real time, energy and faults tolerance aware scheduling algorithms for ES. After that, we survey the most pertinent existing works of literature targeting the application of AI methods to resolve the RTS problem for ES notably Constraint Programming, Game theory, Machine learning, Fuzzy logic, Artificial Immune Systems, Cellular Automata, Evolutionary algorithms, Multi-agent Systems and Swarm Intelligence. We end this survey by a discussion putting the light on the main challenges and the future directions.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74135758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
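As background for the deadline-guarantee objective the survey discusses, the classical single-processor schedulability tests are easy to sketch. The task parameters below are hypothetical:

```python
# Periodic task set: (worst-case execution time C, period T).
tasks = [(1, 4), (2, 6), (1, 8)]

def utilization(tasks):
    return sum(c / t for c, t in tasks)

def edf_schedulable(tasks):
    # Earliest-deadline-first on one processor: feasible iff total
    # utilization <= 1 (implicit-deadline periodic tasks, Liu & Layland 1973).
    return utilization(tasks) <= 1.0

def rm_sufficient(tasks):
    # Rate-monotonic sufficient bound: U <= n * (2^(1/n) - 1).
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)

U = utilization(tasks)  # 1/4 + 2/6 + 1/8
```

AI-based approaches enter the picture precisely where such closed-form tests stop working, e.g. when energy and reliability objectives must be traded off against deadlines.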
Pub Date: 2021-10-01; DOI: 10.4018/IJCINI.20211001.OA36
Baranidharan Balakrishnan, C. Kumar
Cardiovascular diseases (CVD) are the leading cause of death worldwide, and earlier diagnosis reduces the mortality rate. Machine learning (ML) algorithms are giving promising results in disease diagnosis and are now widely accepted by medical experts as clinical decision support systems. In this work, the most popular ML models are investigated and compared with one another for heart disease prediction based on various metrics. The base classifiers support vector machine (SVM), logistic regression, naïve Bayes, decision tree, and k-nearest neighbours are used to predict heart disease, and bagging and boosting techniques are applied over these individual classifiers to improve the performance of the system. On the Cleveland and Statlog datasets, naïve Bayes as an individual classifier gives the maximum accuracy of 85.13% and 84.81%, respectively. The results show that naïve Bayes, SVM, and logistic regression are strong classifiers, with more than 80% accuracy, while decision tree and k-nearest neighbours are weak classifiers. Bagging and boosting improve the performance of the weak classifiers: bagging improves the accuracy of the decision tree by about 7% (up to 7.77% on the Statlog dataset), a significant improvement in identifying CVD. In future work, feature selection will be applied to find the most relevant features of the dataset, and applying the ensemble models over them should give further improved accuracy.
{"title":"A Comprehensive Performance Analysis of Various Classifier Models for Coronary Artery Disease Prediction","authors":"Baranidharan Balakrishnan, C. Kumar","doi":"10.4018/IJCINI.20211001.OA36","DOIUrl":"https://doi.org/10.4018/IJCINI.20211001.OA36","url":null,"abstract":"Cardio vascular diseases (CVD) are the major reason for the death of the majority of the people in the world. Earlier diagnosis of disease will reduce the mortality rate. Machine learning (ML) algorithms are giving promising results in the disease diagnosis, and they are now widely accepted by medical experts as their clinical decision support system. In this work, the most popular ML models are investigated and compared with one other for heart disease prediction based on various metrics. The base classifiers such as support vector machine (SVM), logistic regression, naïve Bayes, decision tree, k-nearest neighbour are used for predicting heart disease. In this paper, bagging and boosting techniques are applied over these individual classifiers to improve the performance of the system. With the Cleveland and Statlog datasets, naive Bayes as the individual classifier gives the maximum accuracy of 85.13%and 84.81%, respectively. Bagging technique improves the accuracy of the decision tree, which is identified as a weak classifier by 7%, and it is a significant improvement in identifying CVD. that Bayes, Support Vector Machine and Logistic are strong classifiers more than 80% accuracy and Decision Tree and K Nearest Neighbours as weak classifiers. Bagging and boosting techniques the performance of weak classifiers Decision Tree and K Nearest Neighbours. Bagging technique improved the accuracy of the decision tree algorithm 7.77% maximum for Statlog dataset. In future, feature selection is to be applied to find out the most relevant features of the data set and applying over the ensemble models over it will give better-improved accuracy.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80090038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
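The bagging idea applied in the paper can be sketched on a toy 1-D dataset. This is a hedged illustration with a hypothetical threshold-stump base learner, not the paper's experimental setup:

```python
import random
import statistics

# Toy 1-D dataset: feature value and binary label (stands in for
# tabular heart-disease features).
X = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]

def train_stump(xs, ys):
    # Pick the threshold (and orientation) that minimizes training error.
    best_err, best = float("inf"), (xs[0], 1)
    for t in xs:
        for label_right in (0, 1):
            preds = [label_right if x >= t else 1 - label_right for x in xs]
            err = sum(p != yy for p, yy in zip(preds, ys))
            if err < best_err:
                best_err, best = err, (t, label_right)
    t, label_right = best
    return lambda x: label_right if x >= t else 1 - label_right

def bagged(xs, ys, n_models=15, seed=7):
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        # Bootstrap sample: draw with replacement, same size as the data.
        idx = [rng.randrange(len(xs)) for _ in range(len(xs))]
        models.append(train_stump([xs[i] for i in idx], [ys[i] for i in idx]))
    # Majority vote over the ensemble (odd n_models avoids ties).
    return lambda x: statistics.mode(m(x) for m in models)

clf = bagged(X, y)
preds = [clf(x) for x in X]
```

Bagging helps high-variance learners like decision trees because each bootstrap replica sees a slightly different dataset, and the vote averages out their individual mistakes, which matches the paper's finding that the weak tree classifier gains the most.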
Pub Date: 2021-10-01; DOI: 10.4018/IJCINI.20211001.OA33
H Hadjadj, H. Sayoud
Dealing with imbalanced data represents a great challenge in data mining as well as in machine learning tasks. In this investigation, the authors address the problem of class imbalance in the authorship attribution (AA) task, with a specific application to Arabic text data. The article proposes a new hybrid approach based on principal component analysis (PCA) and the synthetic minority over-sampling technique (SMOTE), which considerably improves the performance of authorship attribution on imbalanced data. The dataset contains seven Arabic books written by seven different scholars, segmented into text segments of the same size, with an average length of 2,900 words per text. The experimental results show that the proposed approach with the SMO-SVM classifier achieves high authorship attribution accuracy (100%), especially with starting character bigrams. In addition, the proposed method improves AA performance on imbalanced datasets, mainly with function words.
{"title":"Arabic Authorship Attribution Using Synthetic Minority Over-Sampling Technique and Principal Components Analysis for Imbalanced Documents","authors":"H Hadjadj, H. Sayoud","doi":"10.4018/IJCINI.20211001.OA33","DOIUrl":"https://doi.org/10.4018/IJCINI.20211001.OA33","url":null,"abstract":"Dealing with imbalanced data represents a great challenge in data mining as well as in machine learning task. In this investigation, the authors are interested in the problem of class imbalance in authorship attribution (AA) task, with specific application on Arabic text data. This article proposes a new hybrid approach based on principal components analysis (PCA) and synthetic minority over-sampling technique (SMOTE), which considerably improve the performances of authorship attribution on imbalanced data. The used dataset contains seven Arabic books written by seven different scholars, which are segmented into text segments of the same size, with an average length of 2,900 words per text. The obtained results of the experiments show that the proposed approach using the SMO-SVM classifier presents high performance in terms of authorship attribution accuracy (100%), especially with starting character-bigrams. In addition, the proposed method appears quite interesting by improving the AA performances in imbalanced datasets, mainly with function words.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77193876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
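The SMOTE step of such an approach can be sketched as follows. This is a minimal illustration with hypothetical 2-D samples; in the paper the technique is applied to high-dimensional text features:

```python
import math
import random

# Minority-class samples in 2-D (hypothetical feature vectors).
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.3), (1.1, 1.1)]

def nearest_neighbor(p, points):
    others = [q for q in points if q != p]
    return min(others, key=lambda q: math.dist(p, q))

def smote(minority, n_new, seed=3):
    # For each synthetic point: pick a minority sample and interpolate a
    # random fraction of the way toward its nearest minority neighbor.
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        p = rng.choice(minority)
        q = nearest_neighbor(p, minority)
        gap = rng.random()  # in [0, 1): where on the segment p->q to land
        synthetic.append(tuple(pi + gap * (qi - pi) for pi, qi in zip(p, q)))
    return synthetic

new_points = smote(minority, n_new=6)
```

Because the synthetic points lie on segments between real minority samples, they enlarge the minority class without simply duplicating it, which is what lets the downstream classifier see a balanced training set.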
Pub Date: 2021-10-01; DOI: 10.4018/IJCINI.20211001.OA24
Jun Peng, Shangzhu Jin, Shaoning Pang, Du Zhang, Lixiao Feng, Zuojin Li, Yingxu Wang
For a security system built on symmetric-key cryptography algorithms, the substitution box (S-box) plays a crucial role in resisting cryptanalysis. This article incorporates quantum chaos and the PWLCM chaotic map into a new method of S-box design. The secret key is transformed to generate a sextuple of system parameters, which is involved in generating the chaotic sequences of two chaotic systems. The output of one chaotic system perturbs the parameters of the other in order to increase the complexity of the encryption sequence. The S-box is obtained by an XOR operation on the outputs of the two chaotic systems. Over the 500 key-dependent S-boxes obtained, the authors test the cryptographic properties of the S-box: bijection, nonlinearity, SAC, BIC, and differential approximation probability. The performance of the proposed S-box is compared with that of chaos-based S-boxes in the literature. The results show that the cryptographic characteristics of the proposed S-box meet the design objectives, and the S-box can be applied to data encryption, user authentication, and system access control.
{"title":"S-Box Construction Method Based on the Combination of Quantum Chaos and PWLCM Chaotic Map","authors":"Jun Peng, Shangzhu Jin, Shaoning Pang, Du Zhang, Lixiao Feng, Zuojin Li, Yingxu Wang","doi":"10.4018/IJCINI.20211001.OA24","DOIUrl":"https://doi.org/10.4018/IJCINI.20211001.OA24","url":null,"abstract":"For a security system built on symmetric-key cryptography algorithms, the substitution box (S-box) plays a crucial role to resist cryptanalysis. This article incorporates quantum chaos and PWLCM chaotic map into a new method of S-box design. The secret key is transformed to generate a sextuple system parameter, which is involved in the generation process of chaotic sequences of two chaotic systems. The output of one chaotic system will disturb the parameters of another chaotic system in order to improve the complexity of encryption sequence. S-box is obtained by XOR operation of the output of two chaotic systems. Over the obtained 500 key-dependent S-boxes, the authors test the S-box cryptographical properties on bijection, nonlinearity, SAC, BIC, differential approximation probability, respectively. Performance comparison of proposed S-box with those chaos-based one in the literature has been made. The results show that the cryptographic characteristics of proposed S-box has met the design objectives and can be applied to data encryption, user authentication and system access control.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86833717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
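One common chaos-based construction in this spirit (a sketch, not the paper's exact algorithm) XORs the byte streams of a logistic map and a PWLCM, then uses the result to drive a Fisher-Yates shuffle, which guarantees the resulting S-box is a bijection on 0..255. The seeds and parameters below are hypothetical stand-ins for the key-derived sextuple:

```python
def logistic(x, r=3.99):
    # Logistic map x -> r*x*(1-x); chaotic for r near 4.
    return r * x * (1 - x)

def pwlcm(x, p=0.3):
    # Piecewise linear chaotic map on [0, 1).
    if x < p:
        return x / p
    if x <= 0.5:
        return (x - p) / (0.5 - p)
    return pwlcm(1.0 - x, p)  # mirror symmetry for the upper half

def chaotic_bytes(n, x1=0.345, x2=0.567):
    out = []
    for _ in range(n):
        x1, x2 = logistic(x1), pwlcm(x2)
        b1 = int(x1 * 256) & 0xFF
        b2 = int(x2 * 256) & 0xFF
        out.append(b1 ^ b2)  # XOR the two systems' outputs
    return out

def make_sbox():
    # Fisher-Yates shuffle driven by the XORed chaotic stream; a shuffle of
    # 0..255 is a permutation by construction, so bijectivity is guaranteed.
    sbox = list(range(256))
    stream = chaotic_bytes(1024)
    k = 0
    for i in range(255, 0, -1):
        j = stream[k % len(stream)] % (i + 1)
        k += 1
        sbox[i], sbox[j] = sbox[j], sbox[i]
    return sbox

sbox = make_sbox()
```

The remaining properties the authors test (nonlinearity, SAC, BIC, differential probability) are statistical and must be measured per generated S-box; only bijectivity comes for free from this construction.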
Pub Date: 2021-10-01; DOI: 10.4018/ijcini.20211001.oa16
Wanzhi Wen, Shiqiang Wang, Bingqing Ye, XingYu Zhu, Yitao Hu, Xiaohong Lu, Bin Zhang
Improving software development efficiency based on existing APIs is a hot research topic in software engineering. Understanding and learning the many APIs in large software libraries is not easy, and software developers prefer to provide only a requirements description to obtain the right API. To solve this problem, this paper proposes an API recommendation method based on WII-WMD, an improved similarity calculation algorithm. The method first structures the text, then fully mines the semantic information in it, and finally calculates the similarity between the user's query and the information described in the API documentation. The experimental results show that API recommendation based on WII-WMD can improve the efficiency of the API recommendation system.
{"title":"API Recommendation Based on WII-WMD","authors":"Wanzhi Wen, Shiqiang Wang, Bingqing Ye, XingYu Zhu, Yitao Hu, Xiaohong Lu, Bin Zhang","doi":"10.4018/ijcini.20211001.oa16","DOIUrl":"https://doi.org/10.4018/ijcini.20211001.oa16","url":null,"abstract":"Improving software development efficiency based on existing APIs is one of the hot researches in software engineering. Understanding and learning so many APIs in large software libraries is not easy and software developers prefer to provide only requirements descriptions to get the right API. In order to solve this problem, this paper proposes an API recommendation method based on WII-WMD, an improved similarity calculation algorithm. This method firstly structures the text, and then fully mines the semantic information in the text. Finally, it calculates the similarity between the user's query problem and the information described in the API document. The experiment results show that the API recommendation based on WII-WMD can improve the efficiency of the API recommendation system.","PeriodicalId":43637,"journal":{"name":"International Journal of Cognitive Informatics and Natural Intelligence","volume":null,"pages":null},"PeriodicalIF":0.9,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87037836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
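Since WII-WMD builds on word mover's distance, a toy relaxed WMD (each word travels to its nearest neighbor in the other document) illustrates the underlying similarity computation. The 2-D embeddings below are hypothetical; a real system would use trained word vectors:

```python
import math

# Toy word embeddings (hypothetical 2-D vectors).
emb = {
    "sort":  (1.0, 0.1),
    "order": (0.9, 0.2),
    "list":  (0.2, 1.0),
    "array": (0.3, 0.9),
    "parse": (-0.8, 0.4),
}

def dist(u, v):
    return math.dist(emb[u], emb[v])

def rwmd(doc_a, doc_b):
    # Relaxed word mover's distance: every word in one document "travels"
    # to its nearest word in the other; symmetrize with a max.
    def one_way(a, b):
        return sum(min(dist(w, v) for v in b) for w in a) / len(a)
    return max(one_way(doc_a, doc_b), one_way(doc_b, doc_a))

query     = ["sort", "list"]
similar   = ["order", "array"]   # synonyms of the query words
unrelated = ["parse"]

# The distance to the paraphrase is much smaller than to the unrelated doc,
# even though no word matches the query exactly.
```

Full WMD solves an optimal-transport problem over word frequencies; this nearest-neighbor relaxation is a cheap lower bound that already captures why embedding-based similarity beats exact keyword matching for API queries.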