To address the shortcomings of traditional full-text retrieval models in handling mathematical expressions, which are special objects that differ from ordinary text, a multimodal retrieval and ranking method for scientific documents based on hesitant fuzzy sets (HFS) and XLNet is proposed. The method integrates multimodal information, such as mathematical expression images and context text, as keywords to retrieve scientific documents. In the image modality, the images of mathematical expressions are recognized, and hesitant fuzzy set theory is introduced to calculate the hesitant fuzzy similarity between query expressions and the mathematical expressions in candidate scientific documents. Meanwhile, in the text modality, XLNet is used to generate word vectors for the context of each mathematical expression, yielding the similarity between the query text and the expression contexts of the candidate documents. Finally, the multimodal evaluations are integrated by constructing a hesitant fuzzy set at the document level to obtain the final scores of the scientific documents and the corresponding ranked output. Experimental results show that the recall and precision of this method are 0.774 and 0.663, respectively, on the NTCIR dataset, and the average normalized discounted cumulative gain (NDCG) of the top-10 ranking results is 0.880 on the Chinese scientific document (CSD) dataset.
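The hesitant fuzzy similarity step can be illustrated with a small sketch. It assumes the common generalized-distance definition for hesitant fuzzy elements from the HFS literature (membership sets sorted and length-aligned before element-wise comparison); the authors' exact formulation may differ in detail.

```python
def hfe_similarity(h1, h2):
    """Similarity between two hesitant fuzzy elements (sets of
    membership degrees in [0, 1]): 1 minus the mean absolute
    difference of their sorted, length-aligned values."""
    a, b = sorted(h1), sorted(h2)
    # Extend the shorter element by repeating its maximum value
    # (the "optimistic" extension commonly used in HFS literature).
    n = max(len(a), len(b))
    a += [a[-1]] * (n - len(a))
    b += [b[-1]] * (n - len(b))
    return 1.0 - sum(abs(x - y) for x, y in zip(a, b)) / n

print(hfe_similarity([0.6, 0.8], [0.6, 0.8]))  # identical elements -> 1.0
print(hfe_similarity([0.2, 0.4], [0.6, 0.8]))
```

In the paper's setting, `h1` and `h2` would hold the membership degrees obtained by matching a query expression against an expression in a candidate document.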
Meichao Yan, Yuzhuo Wen, Qingxuan Shi, and Xuedong Tian, "A Multimodal Retrieval and Ranking Method for Scientific Documents Based on HFS and XLNet," Sci. Program., pp. 5373531:1-5373531:11, Jan. 4, 2022. DOI: 10.1155/2022/5373531.
In safety-critical fields, architectural languages such as AADL (Architecture Analysis and Design Language) play an important role, and analyzing these languages and the systems designed with them is a challenging research topic. Formal methods have become one of the main practices for rigorous analysis in software engineering and have been applied in tools for formalization and analysis. By describing a system with precise semantics and validating the system model, formal methods can find and resolve problems early. This article studies a comprehensive formal specification and verification of AADL with its Behavior Annex using formal methods. The presentation of this specification and its semantics is the aim of this article, and the work is illustrated with an ARINC653 model case study in Isabelle/HOL.
Yuen-Lin Tan, Yongwang Zhao, Dian-fu Ma, and Xuejun Zhang, "A Comprehensive Formalization of AADL with Behavior Annex," Sci. Program., pp. 2079880:1-2079880:26, Jan. 4, 2022. DOI: 10.1155/2022/2079880.
M. Ghaderzadeh, Azamossadat Hosseini, F. Asadi, H. Abolghasemi, D. Bashash, Arash Roshanpoor
Introduction. Acute lymphoblastic leukemia (ALL) is the most common type of leukemia, a deadly white blood cell disease that affects the human bone marrow. Detecting ALL in its early stages has always been complex and difficult. Peripheral blood smear (PBS) examination, a common method applied at the outset of ALL diagnosis, is a time-consuming and tedious process that depends largely on the specialist's experience. Materials and Methods. A fast, efficient, and comprehensive model based on deep learning (DL) was proposed, implementing eight well-known convolutional neural network (CNN) models for feature extraction on all images and for classification of B-ALL lymphoblasts versus normal cells. After evaluating their performance, the four best-performing CNN models were selected to compose an ensemble classifier combining each pretrained model's capabilities. Results. Owing to the close similarity between the nuclei of cancerous and normal cells, individual CNN models had low sensitivity and poor performance in distinguishing these two classes. The proposed model combined the CNN models using the majority voting technique and achieved a sensitivity of 99.4%, a specificity of 96.7%, an AUC of 98.3%, and an accuracy of 98.5%. Conclusion. In classifying cancerous blood cells from normal cells, the proposed method achieves high accuracy without operator intervention in determining cell features. It can thus be recommended as a tool for the analysis of blood samples in digital laboratory equipment to assist laboratory specialists.
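The majority-voting combination described above can be sketched as follows. This is a generic illustration, not the authors' exact pipeline: the four CNN classifiers are stood in for by their per-sample label predictions.

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label predictions by majority vote.
    `predictions` is a list of equal-length lists, one per classifier;
    ties are broken in favor of the label seen first."""
    combined = []
    for votes in zip(*predictions):
        combined.append(Counter(votes).most_common(1)[0][0])
    return combined

# Four hypothetical CNN classifiers voting on three samples
# (labels: 1 = lymphoblast, 0 = normal cell).
cnn_outputs = [
    [1, 0, 1],
    [1, 0, 0],
    [0, 0, 1],
    [1, 1, 1],
]
print(majority_vote(cnn_outputs))  # -> [1, 0, 1]
```

Because each sample's label is decided by the majority of the four models, a single model's misclassification is outvoted, which is why the ensemble's sensitivity exceeds that of any individual CNN.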
M. Ghaderzadeh, Azamossadat Hosseini, F. Asadi, H. Abolghasemi, D. Bashash, and Arash Roshanpoor, "Automated Detection Model in Classification of B-Lymphoblast Cells from Normal B-Lymphoid Precursors in Blood Smear Microscopic Images Based on the Majority Voting Technique," Sci. Program., pp. 4801671:1-4801671:8, Jan. 4, 2022. DOI: 10.1155/2022/4801671.
During sports, athletes' emotional fluctuations often trigger aggressive behavior, and such violent sports behavior has caused many serious negative effects. To reduce and resolve this kind of public incident, this paper builds a swarm intelligence model for predicting athletes' aggressive behavior, with a swarm intelligence algorithm as the core technology for model optimization, and uses Internet of Things and related technologies to recognize emotions from physiological signals and to predict and intervene in aggressive behavior. The results show the following: (1) Under 50-fold cross-validation, emotion recognition performs well, with high accuracy; compared with the other physiological electrical signals, EDA has the worst classification performance. (2) Multimodal fusion improves the recognition accuracy of both methods substantially, clearly outperforming any single modality. (3) Anxiety, anger, surprise, and sadness are the emotions most frequently detected by the model, each recognized with accuracy above 80%; sports intervention should be carried out in time to calm athletes' emotions. In our experiments the model runs successfully and performs well, and it can be optimized and tested further in the next step.
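The multimodal-fusion idea can be sketched at the decision level: each modality's classifier outputs class scores, and a weighted average combines them, with the weakest signal (here EDA, per the results above) given the lowest weight. The scores, modality names, and weights below are hypothetical illustrations, not the paper's data.

```python
def fuse_scores(modality_scores, weights):
    """Decision-level fusion: weighted average of per-modality
    class-probability vectors."""
    names = list(modality_scores)
    total = sum(weights[m] for m in names)
    n = len(modality_scores[names[0]])
    return [sum(weights[m] * modality_scores[m][i] for m in names) / total
            for i in range(n)]

# Hypothetical scores over four emotions: anxiety, anger, surprise, sadness.
scores = {
    "ecg": [0.50, 0.20, 0.20, 0.10],
    "eda": [0.30, 0.30, 0.20, 0.20],  # weakest modality, given lowest weight
    "emg": [0.60, 0.10, 0.20, 0.10],
}
weights = {"ecg": 0.4, "eda": 0.2, "emg": 0.4}
fused = fuse_scores(scores, weights)
emotions = ["anxiety", "anger", "surprise", "sadness"]
print(emotions[fused.index(max(fused))])  # -> anxiety
```

Down-weighting the noisier modality lets the fused decision remain dominated by the reliable signals while still using all available evidence.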
Huijian Deng, Shijian Cao, and Jingen Tang, "Prediction of Sports Aggression Behavior and Analysis of Sports Intervention Based on Swarm Intelligence Model," Sci. Program., pp. 2479939:1-2479939:11, Jan. 4, 2022. DOI: 10.1155/2022/2479939.
To explore how Internet of Things (IoT) technology affects economic market fluctuations and how well it analyzes them, this paper uses an IoT algorithm to improve the economic fluctuation model. The IoT algorithm is used to locate economic transactions, and data processing optimizes the intelligent network system to improve the operation of the economic system. In addition, this paper improves the sensor node algorithm and proposes using a weighted value of network node density to balance the positioning errors caused by the uneven distribution of network nodes in the detection area. Finally, the paper analyzes the market economy volatility model through IoT technology and uses simulation experiments to explore the application of IoT technology in the economic market volatility model. The experimental research shows that economic market fluctuation models based on IoT technology can play an important role in market economic analysis.
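The density-weighted positioning idea can be sketched as a weighted-centroid estimate, where each anchor node's contribution is scaled by the local node density around it so that sparse regions of the detection area are not dominated by dense ones. This is a generic sketch; the paper's actual weighting scheme is not reproduced here.

```python
def weighted_centroid(anchors, densities):
    """Estimate an unknown node's position as the centroid of anchor
    positions weighted by local node density."""
    total = sum(densities)
    x = sum(d * ax for d, (ax, _) in zip(densities, anchors)) / total
    y = sum(d * ay for d, (_, ay) in zip(densities, anchors)) / total
    return x, y

anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
densities = [1.0, 1.0, 1.0, 1.0]              # uniform density -> plain centroid
print(weighted_centroid(anchors, densities))  # -> (5.0, 5.0)
```

With non-uniform densities the estimate is pulled toward the denser anchors, which is the balancing effect the weighted node-density value is meant to provide.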
Lu Zhai, "Economic Market Fluctuation Model Based on Internet of Things Technology," Sci. Program., pp. 2296823:1-2296823:11, Jan. 4, 2022. DOI: 10.1155/2022/2296823.
This work studied the application value of dynamic electrocardiogram (ECG) feature data in evaluating the curative effect of percutaneous coronary intervention for acute ST-segment elevation myocardial infarction with hypertension, so as to facilitate early diagnosis and treatment of the disease. In this study, 90 patients with acute ST-segment elevation myocardial infarction accompanied by hypertension were selected as the study subjects and randomly divided into group A (oral aspirin antiplatelet therapy), group B (thrombolytic therapy with streptokinase (SK)), and group C (percutaneous coronary intervention), with 30 cases in each group. A P-wave detection algorithm was introduced for automatic detection and analysis of electrocardiograms, and treatment efficacy was assessed using Holter feature data based on the P-wave detection algorithm. The results showed that the diagnostic error rate, sensitivity, and predictive accuracy of the P-wave detection algorithm for ST-segment elevation myocardial infarction caused by acute occlusion of the left main coronary artery (LMCA) were 0.24%, 95.41%, and 92.33%, respectively; for non-LMCA (nLMCA) ST-segment elevation myocardial infarction they were 0.28%, 95.32%, and 96.07%, respectively; and the proportion of patients with a symptom-to-blood-flow-patency time of <3 h in group C (55.3%) was significantly higher than in groups A and B (22.1% and 22.6%) (P < 0.05). Compared with group A, the content of B-type natriuretic peptide precursor (pre-proBNP) at 1, 2, and 3 weeks after treatment was significantly lower in groups B and C, and significantly lower in group C than in group B (P < 0.05). In summary, the P-wave detection algorithm has high application value in the diagnosis and early prediction of acute ST-segment elevation myocardial infarction.
Percutaneous coronary intervention in the treatment of acute ST-segment elevation myocardial infarction with hypertension can shorten the time to restore blood flow in the infarcted vessel, thereby effectively protecting patients' cardiac function.
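The error rate, sensitivity, and predictive accuracy reported above can all be derived from confusion-matrix counts; a minimal sketch with hypothetical counts (not the study's raw data):

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Standard diagnostic measures from confusion-matrix counts."""
    total = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)       # true-positive rate
    error_rate = (fp + fn) / total     # fraction of misclassified cases
    accuracy = (tp + tn) / total
    return sensitivity, error_rate, accuracy

# Hypothetical counts for illustration only.
sens, err, acc = diagnostic_metrics(tp=95, fn=5, fp=2, tn=98)
print(f"sensitivity={sens:.2%} error={err:.2%} accuracy={acc:.2%}")
```

The study's "predictive accuracy" likely corresponds to positive predictive value rather than overall accuracy; the abstract does not say, so the sketch uses the overall-accuracy definition.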
Guoqiang Wang, Yu Wang, and Ru Zhao, "Evaluation of Percutaneous Coronary Intervention for Acute ST-Segment Elevation Myocardial Infarction with Hypertension by Dynamic Electrocardiogram Feature Data," Sci. Program., pp. 8350079:1-8350079:8, Jan. 4, 2022. DOI: 10.1155/2022/8350079.
In digital marketing, technologies such as artificial intelligence and big data analysis are increasingly showing their core advantages and attracting attention. This paper studies the precision of digital marketing and proposes an intelligent algorithm based on data analysis that improves the effect of marketing communication. Combining intelligent algorithms with big data analysis makes the results convincing. By comparing and improving logistic regression and XGBoost, the paper puts forward an improved XGBoost algorithm based on Bayesian parameter optimization, which can improve the efficiency of digital marketing communication and enhance the social influence of digital marketing.
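Bayesian parameter optimization chooses the next hyperparameter trial using the scores of past trials. As a minimal stand-in, the loop below performs a seeded random search over typical XGBoost hyperparameters against a toy objective; in real use, `evaluate` would run cross-validated XGBoost training, and a Bayesian optimizer (e.g. a tree-structured Parzen estimator) would replace the random sampling. Everything here, including the parameter ranges, is an illustrative assumption.

```python
import random

def evaluate(params):
    """Toy stand-in for cross-validated model quality: peaks near
    learning_rate = 0.1 and max_depth = 6 (higher is better)."""
    return -((params["learning_rate"] - 0.1) ** 2
             + 0.001 * (params["max_depth"] - 6) ** 2)

def random_search(n_trials, seed=0):
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {
            "learning_rate": rng.uniform(0.01, 0.3),
            "max_depth": rng.randint(3, 10),
            "subsample": rng.uniform(0.5, 1.0),
        }
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

best, score = random_search(200)
print(best["max_depth"], round(best["learning_rate"], 3))
```

A Bayesian optimizer would typically reach the same region of the search space in far fewer trials, which is the efficiency gain the paper attributes to Bayesian optimization of the XGBoost parameters.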
Zhuojun Li, "Accurate Digital Marketing Communication Based on Intelligent Data Analysis," Sci. Program., pp. 8294891:1-8294891:10, Jan. 4, 2022. DOI: 10.1155/2022/8294891.
The ideological and political course is a key course for implementing the fundamental task of building morality and cultivating people, and teaching evaluation is an important part of its construction. Constructing a sound teaching evaluation index system is urgently needed to further deepen the teaching reform and improve the teaching quality of ideological and political courses. To improve the practical effect of the mixed teaching mode, an online and offline mixed teaching effect evaluation method based on big data analysis is proposed. First, the big data generated during mixed teaching are collected using big data technology, and the evaluation index system is constructed from three dimensions. The required data are extracted according to the indices, association rules among the index-related data are established, and the phase-space distribution of the data is obtained. Finally, the constraint parameter analysis method is used to fuse the control variables and explanatory variables of the index-related data to evaluate the online and offline mixed teaching effect. The application analysis shows that the proposed method obtains ideal evaluation results for online and offline mixed teaching effects, which is conducive to improving teaching quality.
Siyuan Hu and Jingsheng Wang, "Evaluation Algorithm of Ideological and Political Assistant Teaching Effect in Colleges and Universities under Network Information Dissemination," Sci. Program., pp. 3589456:1-3589456:7, Jan. 4, 2022. DOI: 10.1155/2022/3589456.
Sandhya Sharma, Sheifali Gupta, D. Gupta, Sapna Juneja, Gaurav Singal, G. Dhiman, S. Kautish
The challenges of traditional cloud computing paradigms have prompted the development of architectures for next-generation cloud computing. These new architectures can generate and handle huge amounts of data, which was not possible with traditional architectures. Deep learning algorithms can process this volume of data and can thus help solve the problems of next-generation computing. Deep learning has therefore become the state-of-the-art approach for various tasks, most importantly in the field of recognition. This work proposes the recognition of handwritten city names, a potential research application in postal automation, using a segmentation-free (holistic) approach. It demystifies the role of the convolutional neural network (CNN), one of the principal deep learning techniques. The proposed CNN model is trained, validated, and analyzed using the Adam and stochastic gradient descent (SGD) optimizers with batch sizes of 2, 4, and 8 and learning rates (LR) of 0.001, 0.01, and 0.1. The model is trained and validated on 10 classes of handwritten city names written in Gurmukhi script, with 400 samples per class. Our analysis shows that the CNN model using the Adam optimizer, a batch size of 4, and an LR of 0.001 achieved the best average validation accuracy of 99.13%.
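The Adam-versus-SGD comparison in the grid above can be illustrated on a toy problem: minimizing f(w) = w² with plain SGD and with Adam at the paper's smallest learning rate. This is a didactic sketch of the two update rules, not the CNN training itself.

```python
def sgd_step(w, grad, lr):
    # Plain stochastic gradient descent update.
    return w - lr * grad

def adam_step(w, grad, state, lr, b1=0.9, b2=0.999, eps=1e-8):
    # Adam update with bias-corrected first and second moments.
    state["t"] += 1
    state["m"] = b1 * state["m"] + (1 - b1) * grad
    state["v"] = b2 * state["v"] + (1 - b2) * grad * grad
    m_hat = state["m"] / (1 - b1 ** state["t"])
    v_hat = state["v"] / (1 - b2 ** state["t"])
    return w - lr * m_hat / (v_hat ** 0.5 + eps)

lr = 0.001                  # the best LR found in the paper's grid
w_sgd = w_adam = 5.0
adam_state = {"t": 0, "m": 0.0, "v": 0.0}
for _ in range(1000):
    w_sgd = sgd_step(w_sgd, 2 * w_sgd, lr)            # gradient of w^2 is 2w
    w_adam = adam_step(w_adam, 2 * w_adam, adam_state, lr)
print(abs(w_sgd), abs(w_adam))  # both shrink toward the minimum at 0
```

Adam's per-parameter step normalization is what typically makes it less sensitive to the learning-rate choice than SGD on real CNN training, consistent with Adam winning the grid search reported above.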
Sandhya Sharma, Sheifali Gupta, D. Gupta, Sapna Juneja, Gaurav Singal, G. Dhiman, and S. Kautish, "Recognition of Gurmukhi Handwritten City Names Using Deep Learning and Cloud Computing," Sci. Program., pp. 5945117:1-5945117:16, Jan. 4, 2022. DOI: 10.1155/2022/5945117.
Xuekui Ye, Li Zhang, Rongxia Liu, Yongjuan Liu, Guowei Jiang
Objective. This work aims to analyze the surgical timing and clinical efficacy of transvaginal cervical ring ligation based on ultrasound image focus detection in patients with cervical insufficiency (CIC) under the ultrasound image theme generation model. Methods. A total of 134 CIC patients who came to the hospital for ultrasound imaging diagnosis were enrolled. The observation group was treated with cervical cerclage (CC), and pregnancy outcomes were followed up; the control group was treated conservatively. Results. In the control group, the average gestational age was 21.12 ± 2.18 weeks, the average cervical length (CL) was 15.54 ± 0.42 mm, and the average uterine opening width was 3.06 ± 0.63 mm. In the observation group, the average gestational age was 24.45 ± 4.12 weeks, the average CL was 17.32 ± 4.09 mm, and the average uterine opening width was 0.21 mm. There were significant differences between the two groups (P <