Pub Date: 2021-12-09. DOI: 10.1177/1063293X211050438
Mouna Fradi, R. Gaha, F. Mhenni, A. Mlika, J. Choley
In mechatronic collaborative design, there is a synergistic integration of several expert domains, in which heterogeneous knowledge needs to be shared. To address this challenge, ontology-based approaches have been proposed as a solution to overcome this heterogeneity. However, dynamic exchange between design teams is overlooked. Consequently, parametric-based approaches have been developed to use constraints and parameters consistently during collaborative design. The most valuable knowledge that needs to be capitalized, which we call crucial knowledge, is currently identified with informal solutions; thus, a formal identification and extraction is required. In this paper, we propose a new methodology, based on the mathematical framework of Category Theory (CT), to formalize the interconnection between stakeholders and facilitate the extraction and capitalization of crucial knowledge during collaboration. Firstly, we present an overview of the most widely used methods for crucial knowledge identification in the context of collaborative design, as well as a brief review of basic CT concepts. Secondly, we propose a methodology to formally extract crucial knowledge based on some fundamental concepts of category theory. Finally, a case study is considered to validate the proposed methodology.
{"title":"Knowledge capitalization in mechatronic collaborative design","authors":"Mouna Fradi, R. Gaha, F. Mhenni, A. Mlika, J. Choley","doi":"10.1177/1063293X211050438","DOIUrl":"https://doi.org/10.1177/1063293X211050438","url":null,"abstract":"In mechatronic collaborative design, there is a synergic integration of several expert domains, where heterogeneous knowledge needs to be shared. To address this challenge, ontology-based approaches are proposed as a solution to overtake this heterogeneity. However, dynamic exchange between design teams is overlooked. Consequently, parametric-based approaches are developed to use constraints and parameters consistently during collaborative design. The most valuable knowledge that needs to be capitalized, which we call crucial knowledge, is identified with informal solutions. Thus, a formal identification and extraction is required. In this paper, we propose a new methodology to formalize the interconnection between stakeholders and facilitate the extraction and capitalization of crucial knowledge during the collaboration, based on the mathematical theory ‘Category Theory’ (CT). Firstly, we present an overview of most used methods for crucial knowledge identification in the context of collaborative design as well as a brief review of CT basic concepts. Secondly, we propose a methodology to formally extract crucial knowledge based on some fundamental concepts of category theory. Finally, a case study is considered to validate the proposed methodology.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"30 1","pages":"32 - 45"},"PeriodicalIF":0.0,"publicationDate":"2021-12-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82376722","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-12-08. DOI: 10.1177/1063293X211050930
Hwai-En Tseng, Chien-Cheng Chang, Shih-Chen Lee, Cih-Chi Chen
Under the trend of concurrent engineering, the correspondence between functions and physical structures in product design is gaining importance. Between functions and parts, connectors are the basic unit for engineers to consider. Moreover, the connector-liaison-part relationship helps accomplish the integration of information, and such efforts support the development of the Knowledge Intensive CAD (KICAD) system. Therefore, we propose a connector-liaison-part-based disassembly sequence planning (DSP) method in this study. First, the authors construct a release diagram from the interference relationships to express the disassembly priority between parts; the release diagram allows designers to review the rationality of product disassembly planning. Then, the cost calculation method and disassembly time matrix are established. Finally, a greedy algorithm is used to find an appropriate disassembly sequence and derive suggestions for design improvement. Using this reference information, the function and corresponding modules are improved, so that the disassembly value of a product can be reviewed from a functional perspective. In this study, a fixed support holder is used as an example to validate the proposed method. The discussion of the connector-liaison-part relationship supports the integration of DSP with the functional connector approach.
{"title":"Connector-link-part-based disassembly sequence planning","authors":"Hwai-En Tseng, Chien-Cheng Chang, Shih-Chen Lee, Cih-Chi Chen","doi":"10.1177/1063293X211050930","DOIUrl":"https://doi.org/10.1177/1063293X211050930","url":null,"abstract":"Under the trend of concurrent engineering, the correspondence between functions and physical structures in product design is gaining importance. Between the functions and parts, connectors are the basic unit for engineers to consider. Moreover, the relationship between connector-liaison-part will help accomplish the integration of information. Such efforts will help the development of the Knowledge Intensive CAD (KICAD) system. Therefore, we proposed a Connector-liaison-part-based disassembly sequence planning (DSP) in this study. First, the authors construct a release diagram through an interference relationship to express the priority of disassembly between parts. The release diagram will allow designers to review the rationality of product disassembly planning. Then, the cost calculation method and disassembly time matrix are established. Last, the greedy algorithm is used to find an appropriate disassembly sequence and seek suggestions for design improvement. Through the reference information, the function and corresponding modules are improved, from which the disassembly value of a product can be reviewed from a functional perspective. In this study, a fixed support holder is used as an example to validate the proposed method. The discussion of the connector-liaison-part will help the integration of the DSP and the functional connector approach.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"22 1","pages":"67 - 79"},"PeriodicalIF":0.0,"publicationDate":"2021-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84335349","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-11-05. DOI: 10.1177/1063293X211058450
V. D, L. Venkataramana, S. S, Sarah Mathew, S. V
Deep neural networks, that is, neural networks composed of many hidden layers, can be used to perform nonlinear operations at multiple levels. Although deep learning approaches show good results, they have a drawback called catastrophic forgetting, a reduction in performance when a new class is added. Incremental learning is a learning method in which existing knowledge is retained even when new data is acquired. It involves learning with multiple batches of training data, where newer learning sessions do not require the data used in previous iterations. The Bayesian approach to incremental learning uses the concept of a probability distribution over weights; the key idea is to use Bayes' theorem to find an updated distribution of the weights and biases. In the Bayesian framework, beliefs can be updated iteratively, in real time, as new data comes in. The Bayesian model for incremental learning showed an accuracy of 82%. The execution time of the Bayesian model was lower on GPU (670 s) than on CPU (1165 s).
{"title":"Bayesian approach to incremental batch learning on forest cover sensor data for multiclass classification","authors":"V. D, L. Venkataramana, S. S, Sarah Mathew, S. V","doi":"10.1177/1063293X211058450","DOIUrl":"https://doi.org/10.1177/1063293X211058450","url":null,"abstract":"Deep neural networks can be used to perform nonlinear operations at multiple levels, such as a neural network that is composed of many hidden layers. Although deep learning approaches show good results, they have a drawback called catastrophic forgetting, which is a reduction in performance when a new class is added. Incremental learning is a learning method where existing knowledge should be retained even when new data is acquired. It involves learning with multiple batches of training data and the newer learning sessions do not require the data used in the previous iterations. The Bayesian approach to incremental learning uses the concept of the probability distribution of weights. The key idea of Bayes theorem is to find an updated distribution of weights and biases. In the Bayesian framework, the beliefs can be updated iteratively as the new data comes in. Bayesian framework allows to update the beliefs iteratively in real-time as data comes in. The Bayesian model for incremental learning showed an accuracy of 82%. The execution time for the Bayesian model was lesser on GPU (670 s) when compared to CPU (1165 s).","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"6 1","pages":"405 - 414"},"PeriodicalIF":0.0,"publicationDate":"2021-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72817337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-29. DOI: 10.1177/1063293X211073714
Jayasudha Jc, L. S
In the recent past, Non-Destructive Testing (NDT) has become a popular technique because it can examine external and internal welding defects efficiently and accurately without destroying the object or altering its original structure. However, the NDT environment is typically harsh, characterized by strong and volatile electromagnetic fields, unstable elevated radiation emission, and high heat, so a suitable NDT approach must be identified and applied. In this paper, a novel algorithm based on the Phased Array Ultrasonic Test (PAUT) is proposed for NDT to attain the proper test attributes. In the proposed methodology, carbon steel welding sections are synthetically produced with various defects and tested using the PAUT method. The signals acquired from the PAUT device contain noise, so an Adaptive Least Mean Square (ALMS) filter is proposed to remove random and Gaussian noise from the PAUT signal. The ALMS filter combines a low-pass filter (LPF), a high-pass filter (HPF), and a band-pass filter (BPF). The time-domain PAUT signal is converted into a frequency-domain signal using the Empirical Wavelet Transform (EWT) algorithm so that more features can be extracted. First-order and second-order feature extraction techniques are then applied to the frequency-domain signal to obtain features for classification. A deep learning methodology is proposed for the classification of PAUT signals: based on the extracted features, a Deep Convolutional Neural Network (DCNN) classifies each welding signal as defective or non-defective. A confusion matrix is used to measure classification performance in terms of accuracy, sensitivity, and specificity. The experiments, reported through numerical and graphical results, show that the proposed PAUT-based welding defect classification is more accurate and efficient than existing methodologies.
{"title":"Phased array ultrasonic test signal enhancement and classification using Empirical Wavelet Transform and Deep Convolution Neural Network","authors":"Jayasudha Jc, L. S","doi":"10.1177/1063293X211073714","DOIUrl":"https://doi.org/10.1177/1063293X211073714","url":null,"abstract":"In the recent past, Non-Destructive Testing (NDT) has become the most popular technique due to its efficiency and accuracy without destroying the object and maintaining its original structure and gathering while examining external and internal welding defects. Generally, the NDT environment is harmful which is distinguished by huge volatile fields of electromagnetic, elevated radiation emission instability, and elevated heat. Therefore, a suitable NDT approach could be recognized and practiced. In this paper, a novel algorithm is proposed based on a Phased array ultrasonic test (PAUT) for NDT to attain the proper test attributes. In the proposed methodology, the carbon steel welding section is synthetically produced with various defects and tested using the PAUT method. The signals which are acquired from the PAUT device are having noise. The Adaptive Least Mean Square (ALMS) filter is proposed to filter PAUT signal to eliminate random noise and Gaussian noise. The ALMS filter is the combination of low pass filter (LPF), high pass filter (HPF), and bandpass filter (BPF). The time-domain PAUT signal is converted into a frequency-domain signal to extract more features by applying the Empirical Wavelet Transform (EWT) algorithm. In the frequency domain signal, first order and second order features extraction techniques are applied to extract various features for further classification. The Deep Learning methodology is proposed for the classification of PAUT signals. Based on the PAUT signal features, the Deep Convolution Neural Network (DCNN) is applied for further classification. The DCNN will classify the welding signal as to whether it is defective or non-defective. The Confusion Matrix (CM) is used for the estimation of measurement of performance of classification as calculating accuracy, sensitivity, and specificity. The experiments prove that the proposed methodology for PAUT testing for welding defect classification is obtained more accurately and efficiently across existing methodologies by providing numerical and graphical results.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"60 1","pages":"229 - 236"},"PeriodicalIF":0.0,"publicationDate":"2021-09-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91234909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01. DOI: 10.1177/1063293X211025105
K. Vijayakumar, V. J. Kadam, S. Sharma
A Deep Neural Network (DNN) is a multilayered Neural Network (NN) that can progressively learn more abstract and composite representations of the raw input features, with no need for any feature engineering. DNNs are advanced NNs with a stack of hidden layers between the input and the final layer. The working principle of such a standard deep classifier is based on a hierarchy formed by the composition of linear functions and a chosen nonlinear Activation Function (AF). It remains unclear exactly why the DNN classifier functions so well, but many studies show that the choice of AF within a DNN has a notable impact on training dynamics and task success. In the past few years, different AFs have been formulated, and the choice of AF is still an area of active study. Hence, in this study, a novel deep feed-forward NN model with four AFs is proposed for breast cancer classification: hidden layer 1: Swish, hidden layer 2: LeakyReLU, hidden layer 3: ReLU, and output layer: Sigmoid. The purpose of the study is twofold. Firstly, it is a step toward a more profound understanding of DNNs with layer-wise different AFs. Secondly, the research aims to explore better DNN-based systems for building predictive models for breast cancer data with improved accuracy. The benchmark UCI WDBC dataset was used to validate the framework, evaluated using ten-fold cross-validation and various performance indicators. Multiple simulations and experimental outcomes show that the proposed solution performs better than DNNs that use only Sigmoid, ReLU, LeakyReLU, or Swish activations across several metrics. This analysis contributes an expert and precise clinical classification method for breast cancer data. Furthermore, the model also achieved improved performance compared to many established state-of-the-art algorithms/models.
{"title":"Breast cancer diagnosis using multiple activation deep neural network","authors":"K. Vijayakumar, V. J. Kadam, S. Sharma","doi":"10.1177/1063293X211025105","DOIUrl":"https://doi.org/10.1177/1063293X211025105","url":null,"abstract":"Deep Neural Network (DNN) stands for multilayered Neural Network (NN) that is capable of progressively learn the more abstract and composite representations of the raw features of the input data received, with no need for any feature engineering. They are advanced NNs having repetitious hidden layers between the initial input and the final layer. The working principle of such a standard deep classifier is based on a hierarchy formed by the composition of linear functions and a defined nonlinear Activation Function (AF). It remains uncertain (not clear) how the DNN classifier can function so well. But it is clear from many studies that within DNN, the AF choice has a notable impact on the kinetics of training and the success of tasks. In the past few years, different AFs have been formulated. The choice of AF is still an area of active study. Hence, in this study, a novel deep Feed forward NN model with four AFs has been proposed for breast cancer classification: hidden layer 1: Swish, hidden layer, 2:-LeakyReLU, hidden layer 3: ReLU, and final output layer: naturally Sigmoidal. The purpose of the study is twofold. Firstly, this study is a step toward a more profound understanding of DNN with layer-wise different AFs. Secondly, research is also aimed to explore better DNN-based systems to build predictive models for breast cancer data with improved accuracy. Therefore, the benchmark UCI dataset WDBC was used for the validation of the framework and evaluated using a ten-fold CV method and various performance indicators. Multiple simulations and outcomes of the experimentations have shown that the proposed solution performs in a better way than the Sigmoid, ReLU, and LeakyReLU and Swish activation DNN in terms of different parameters. This analysis contributes to producing an expert and precise clinical dataset classification method for breast cancer. Furthermore, the model also achieved improved performance compared to many established state-of-the-art algorithms/models.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"36 1","pages":"275 - 284"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87783986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01. DOI: 10.1177/1063293X211026275
Sheldon Williamson, K. Vijayakumar
Artificial intelligence (AI) has navigated away from public skepticism and back into the limelight in an impactful way. From an application perspective, it is largely accepted that the industrial implications of AI will be significant, even if the broader societal implications are still under question. AI has the power to drive competitiveness in the industrial sphere in a manner that has not been seen in the past. According to a Goldman Sachs report about the foreseeable impact of this formidable technology, businesses which do not learn to leverage AI technologies are at risk of being left behind in the competitive market of enterprises. A key role that AI techniques will play in industrial environments is undoubtedly that of automation. Streamlining industrial processes by reducing redundant human intervention is an important strategy for businesses, both to increase revenue and to spend more time on product innovation. The world is entering a new phase of industrialization, commonly termed Industry 4.0. The application of cutting-edge technologies like AI is paramount in building smart systems that allow industries to gain a competitive edge. The industrial transformation is aided in part by smart manufacturing and data exchange, which contribute to high-level industrial automation. The Industrial Internet of Things (IIoT) forms an internetwork of a vast number of machines, tools, and other devices which amalgamate into a smart system that ultimately allows for greater efficiency and productivity in high-stakes industrial situations. Intelligent devices that form a smart system have the ability to use embedded automation software to perform repetitive tasks and solve complex problems autonomously. For this reason, it is generally agreed that industrial applications of smart systems using AI would significantly improve reliability, production, and customer satisfaction by improving accuracy and reducing errors at rates beyond human capacity. A Globe Newswire report from 2019 found that ‘‘AI in industrial machines will reach $415 million globally by 2024 with collaborative robot growth at a compound annual growth rate of 42.5%.’’ Inevitably, the integration of AI algorithms and techniques enhances the ability of enterprises to leverage the power of IIoT and big data analytics to provide value to their market segments. However, some functional challenges hinder the process of integrating industrial activities into the smart machine ecosystem. A particularly persistent problem is that of securely storing, efficiently processing, and profitably analyzing the enormous volume of data that is generated from sensors in the smart systems. Businesses often find it difficult to integrate new technologies into seemingly sturdy existing systems. AI algorithms must be functionally supported by data analytics, and smart systems must employ robust security frameworks, in order for automation systems to truly help businesses meet their…
{"title":"Artificial intelligence techniques for industrial automation and smart systems","authors":"Sheldon Williamson, K. Vijayakumar","doi":"10.1177/1063293X211026275","DOIUrl":"https://doi.org/10.1177/1063293X211026275","url":null,"abstract":"Artificial intelligence (AI) has navigated away from public skepticism, back into the limelight in an impactful way. From an application perspective, it is largely accepted that the industrial implications of AI will be significant, even if the broader societal implications are still under question. AI has the power to drive competitiveness in the industrial sphere in a manner that has not been seen in the past. According to a Goldman Sachs report about the foreseeable impact of this formidable technology, businesses which do not learn to leverage AI technologies are at the risk of being left behind in the competitive market of enterprises. A key role that AI techniques will play in industrial environments would undoubtedly be that of automation. Streamlining industrial processes by reducing the redundancy of human intervention is a strategy of importance for businesses to both increase revenue and spend more time on product innovation. The world is entering a new phase of industrialization, commonly termed as Industry 4.0. The application of cutting edge technologies like AI is paramount in building smart systems that allow industries to gain a competitive edge. The industrial transformation is aided in part by smart manufacturing and data exchange which contribute to high-level industrial automation. The Industrial Internet of Things (IIoT) forms an internetwork of a vast number of machinery, tools, and other devices which amalgamate into a smart system that ultimately allow for greater efficiency and productivity in high-stakes situations in industries. Intelligent devices that form a smart system have the ability to use embedded automation software to perform repetitive tasks and solve complex problems autonomously. For this reason, it is generally agreed upon that industrial applications of smart systems using AI would significantly improve reliability, production, and customer satisfaction by improving accuracy and reducing errors at rates beyond human capacity. A Globe Newswire report from 2019 has found that ‘‘AI in industrial machines will reach $415 million globally by 2024 with collaborative robot growth at a compound annual growth rate of 42.5%.’’ Inevitably, the integration of AI algorithms and techniques enhances the ability of enterprises to leverage the power of IIoT and big data analytics to provide value to their market segments. However, some functional challenges hinder the process of integrating industrial activities into the smart machine ecosystem. A particularly persistent problem is that of securely storing, efficiently processing, and profitably analyzing the enormous volume of data that is generated from sensors in the smart systems. Businesses often find it difficult to integrate new technologies into seemingly sturdy existing systems. 
AI algorithms must be functionally supported by data analytics and smart systems must employ robust security frameworks in order for automation systems to truly help businesses meet thei","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"35 1","pages":"291 - 292"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82650789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-09-01. DOI: 10.1177/1063293X211027869
K. T. Sreelatha, V. K. Krishna Reddy
A cloud environment greatly requires two key factors, namely integrity and memory consumption. In the proposed work, an efficient integrity check system (EICS) is presented for electronic health record (EHR) classification. Existing systems do not address storage concerns such as storing and retrieving files in the cloud and memory storage overheads. De-duplication is one solution, but it risks loss of original information. This is mitigated by the proposed Integrity and Memory Consumption aware De-duplication Method (IMCDM), in which health-care files are stored in a secure and reliable manner. A file index table is created for all files to improve de-duplication performance before uploading to the server. The existence of duplicates can be determined from the index table, which comprises file features and hash values. A support vector machine (SVM) classifier is used in index-table construction for file feature learning, and the labels assigned by the SVM classifier are taken as index values. Two-level encryption is applied, followed by index construction, and the files are stored on cloud servers. To avoid redundant data, a decrypted hash index is compared with previously stored contents. Security keys are generated for each individual user to ensure security, and an XOR operation is performed on the received encrypted file. The evaluation is performed using a Java simulation tool, which validates the proposed methodology against existing research.
{"title":"Integrity and memory consumption aware electronic health record handling in cloud","authors":"K. T. Sreelatha, V. K. Krishna Reddy","doi":"10.1177/1063293X211027869","DOIUrl":"https://doi.org/10.1177/1063293X211027869","url":null,"abstract":"Cloud environment greatly necessitates two key factors namely integrity and memory consumption. In the proposed work, an efficient integrity check system (EICS) is presented for electronic health record (EHR) classification. The existing system does not concentrate on storage concerns such as storing and retrieving files in cloud and memory storage overheads. De-duplication is one of the solution, however original information loss might take place. This is mitigated by the suggested research work namely Integrity and Memory Consumption aware De-duplication Method (IMCDM), where health care files are stored in secured and reliable manner. File Indexed table are created for all the files for enhancing de-duplication performance before uploading it into server. Duplication existence can be obtained from the indexing table which comprises of file features and hash values. Support vector machine (SVM) classifier is used in indexing table construction for file feature learning. Labels allotted through SVM classifier is considered as index values. Two level encryption is used followed by indexing construction, and stored in cloud severs. For avoiding redundant data, a decrypted hash index comparison is performed with previously stored contents. Various security key based on individual user’s generation is carried for ensuring security and XOR operation is performed with received encrypted file. The evaluation is performed using the Java simulation tool, which aids in validating the proposed methodology against existing research.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"23 1","pages":"258 - 265"},"PeriodicalIF":0.0,"publicationDate":"2021-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85427026","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-12. DOI: 10.1177/1063293X211032622
K. Valarmathi, S. Kanaga Suba Raja
Forecasting future cloud data-center resource usage is a challenging task due to dynamic and business-critical workloads. Accurate prediction of cloud resource utilization from historical observations facilitates aligning tasks with resources effectively, estimating the capacity of a cloud server, applying intensive auto-scaling, and controlling resource usage, whereas imprecise prediction leads to either under- or over-provisioning of cloud resources. This paper focuses on solving this problem in a more proactive way. Most existing prediction models are based on a single workload pattern, which is not suitable for handling irregular workloads. The researchers address this problem by using a contemporary model to dynamically analyze CPU utilization, so as to precisely estimate data-center CPU utilization. The proposed design uses an ensemble Random Forest–Long Short-Term Memory (LSTM) deep architecture for resource estimation; it preprocesses and trains on data from historical observations. The approach is analyzed using a real cloud data set. The empirical results show that the proposed design outperforms previous approaches, achieving 30%–60% higher accuracy in resource-utilization prediction.
{"title":"Resource utilization prediction technique in cloud using knowledge based ensemble random forest with LSTM model","authors":"K. Valarmathi, S. Kanaga Suba Raja","doi":"10.1177/1063293X211032622","DOIUrl":"https://doi.org/10.1177/1063293X211032622","url":null,"abstract":"Future computation of cloud datacenter resource usage is a provoking task due to dynamic and Business Critic workloads. Accurate prediction of cloud resource utilization through historical observation facilitates, effectively aligning the task with resources, estimating the capacity of a cloud server, applying intensive auto-scaling and controlling resource usage. As imprecise prediction of resources leads to either low or high provisioning of resources in the cloud. This paper focuses on solving this problem in a more proactive way. Most of the existing prediction models are based on a mono pattern of workload which is not suitable for handling peculiar workloads. The researchers address this problem by making use of a contemporary model to dynamically analyze the CPU utilization, so as to precisely estimate data center CPU utilization. The proposed design makes use of an Ensemble Random Forest-Long Short Term Memory based deep architectural models for resource estimation. This design preprocesses and trains data based on historical observation. The approach is analyzed by using a real cloud data set. The empirical interpretation depicts that the proposed design outperforms the previous approaches as it bears 30%–60% enhanced accuracy in resource utilization.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"1 1","pages":"396 - 404"},"PeriodicalIF":0.0,"publicationDate":"2021-08-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89924510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-08-06. DOI: 10.1177/1063293X211032343
Jie Gao, X. Yan, Hong Guo
Manufacturing service composition and optimal selection (SCOS) is a key technology that improves resource utilization and reduces cost in discrete manufacturing. However, the lack of evaluation of the service-composition function, together with its failure to match the vague characteristics of actual compositions, results in an incomplete evaluation of the service composition. Additionally, various optimization and selection algorithms suffer from premature convergence and low efficiency. At the same time, the fitness-value distribution of the service composition is non-linear. In this article, a framework called discrete manufacturing SCOS (DMSCOS) is proposed to overcome these issues. DMSCOS uses a functional interval parameter and fuzzy QoS attribute aware evaluation model (FIPFQA) to evaluate compositions and introduces a moving window flower pollination algorithm (MWFPA) to perform optimization and selection over the non-linearly distributed population. Experiments show that DMSCOS performs well for optimization and selection, and that the FIPFQA is effective for service-composition evaluation. Furthermore, compared with two other extended algorithms, the proposed MWFPA performs better on the optimization and selection problem.
{"title":"A discrete manufacturing SCOS framework based on functional interval parameters and fuzzy QoS attributes using moving window FPA","authors":"Jie Gao, X. Yan, Hong Guo","doi":"10.1177/1063293X211032343","DOIUrl":"https://doi.org/10.1177/1063293X211032343","url":null,"abstract":"Manufacturing service composition and optimal selection (SCOS) is a key technology that improves resource utilization and reduces the cost in discrete manufacturing. However, the lack of evaluation of the service composition function and the unconformity of the actual composition vague characteristics, resulting in the incomplete evaluation of the service composition. Additionally, various optimization and selection algorithms have defects of premature convergence and low efficiency. At the same time, the fitness value distribution of the service composition has a non-linear characteristic. In this article, a framework called discrete manufacturing SCOS (DMSCOS) is proposed to overcome these issues. DMSCOS uses the functional interval parameter and fuzzy QoS attribute aware evaluation model (FIPFQA) to achieve composition evaluation and introduces a moving window flower pollination algorithm (MWFPA) to achieve optimization and selection for the non-linear characteristic population. Experiments show that DMSCOS has good performance for optimization and selection. The FIPFQA has a good effect on service composition evaluation. Furthermore, compared with two other extended algorithms, the proposed MWFPA performs better when addressing the optimal and selection problem.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"22 1","pages":"46 - 66"},"PeriodicalIF":0.0,"publicationDate":"2021-08-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83655035","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-23. DOI: 10.1177/1063293X211031936
Han Yang, Chongzhong Jia, Jifeng Xie, Kun Wang, Xiaoling Hao
In view of the problems in traditional 3D scene simulation, such as poor simulation quality and the inability to make the scene feel real, this paper proposes research on nano-particle-system scene construction based on virtual technology. By analyzing the advantages of virtual reality technology, the role of virtual reality in three-dimensional scenes is determined; a three-dimensional geometry transformation method is used to define the scene-building algorithm of the virtual technology; and the concept of a nano-particle-system hierarchy is introduced to build the nano-particle subsystem with an object-oriented design. The system's functions are divided into a system control module, a user interaction module, a scene management module, and a nano-particle management module. Based on this analysis of virtual technology and the construction of the nano particle system, scene construction for the nano particle system based on virtual technology is realized. The experimental results show that the nano-particle-system scenes constructed with the virtual technology are more realistic, the scene construction time is less than 6 min, and the work efficiency is higher, demonstrating the feasibility of the approach.
{"title":"Scene construction of nano particle system based on virtual technology","authors":"Han Yang, Chongzhong Jia, Jifeng Xie, Kun Wang, Xiaoling Hao","doi":"10.1177/1063293X211031936","DOIUrl":"https://doi.org/10.1177/1063293X211031936","url":null,"abstract":"In view of the problems in traditional 3D scene simulation, such as the poor simulation effect and the inability to really feel the scene, this paper proposes the research of nano particle system scene construction based on virtual technology. By analyzing the advantages of virtual reality technology, the role of virtual reality in three-dimensional scene is determined; the method of three-dimensional geometry transformation is used to determine the scene building algorithm of virtual technology; the concept of nano particle system hierarchy is introduced to build nano particle subsystem with object-oriented concept. The functions of the system are mainly divided into system control module, user interaction module, scene management module, and nanoparticles management module. Based on the analysis of virtual technology and the construction of nano particle system, the construction of nano particle system scene based on virtual technology is realized. The experimental results show that: Based on the virtual technology, the nano particle system scene construction effect is better, and the scene construction time is less than 6 min, the work efficiency is higher, the scene is more realistic, and has a certain feasibility.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"9 1","pages":"135 - 147"},"PeriodicalIF":0.0,"publicationDate":"2021-07-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89118322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}