Pub Date: 2022-08-04 | DOI: 10.1177/1063293X221117291
Xinhui Kang, Qi Zhu
With the prevalence of sustainable ideas, automobile makers are increasingly pursuing environmental protection strategies for green design, and non-traditional hybrid electric vehicles (HEVs) are being promoted continuously. If a company can add emotional value to the styling of an HEV, this will support its sustainable design and sales promotion. Therefore, an innovative model combining fuzzy linguistic preference relations (FLPR) and fuzzy quality function deployment (QFD) is proposed here to explore the connection between customer sentiment and the front view of the HEV. Compared with previous methods, FLPR has the advantages of fewer pairwise comparisons and high consistency. First, the customers' emotional expectations and attribute weight ranking for the HEV are identified through FLPR and imported as customer requirements (CRs) on the left side of the fuzzy QFD matrix. Second, a grey prediction model is used to screen out the key engineering characteristics (ECs) of the HEV and their initial weights. Finally, based on imprecise, subjective natural-language judgments, fuzzy QFD establishes a matrix association between the CRs and the key ECs, yielding the final EC weights and the optimal combination of morphological design elements. The results can help designers shorten product development cycles and improve customers' emotional satisfaction, providing a theoretical reference for the sustainable design and marketing of environmentally friendly cars in the future.
Title: Integrated fuzzy linguistic preference relations approach and fuzzy Quality Function Deployment to the sustainable design of hybrid electric vehicles (Concurrent Engineering, pp. 367-381)
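As a rough, hypothetical illustration of the FLPR-plus-fuzzy-QFD workflow summarized above (not the authors' implementation), the Python sketch below completes a preference relation from n-1 adjacent pairwise judgments via additive transitivity, derives customer-requirement weights from the row sums, and propagates them through an assumed 9-3-1 CR-to-EC relationship matrix; all values and the crisp (non-fuzzy) simplification are assumptions.

```python
# Minimal sketch (assumed workflow, not the authors' code): complete a fuzzy
# linguistic preference relation from n-1 adjacent comparisons via additive
# transitivity, derive CR weights from row sums, and push them through a QFD
# relationship matrix to get EC weights. All values are hypothetical.
import numpy as np

def flpr_weights(adjacent_prefs):
    """adjacent_prefs[i] = degree (0..1) to which criterion i is preferred to i+1."""
    n = len(adjacent_prefs) + 1
    P = np.full((n, n), 0.5)
    for i, p in enumerate(adjacent_prefs):
        P[i, i + 1], P[i + 1, i] = p, 1.0 - p
    for d in range(2, n):                      # fill longer-range entries by
        for i in range(n - d):                 # additive transitivity:
            k = i + d                          # p_ik = p_i,i+1 + p_i+1,k - 0.5
            P[i, k] = P[i, i + 1] + P[i + 1, k] - 0.5
            P[k, i] = 1.0 - P[i, k]
    lo, hi = P.min(), P.max()
    if lo < 0 or hi > 1:                       # rescale back into [0, 1] if needed
        P = (P - lo) / (hi - lo)
    w = P.sum(axis=1)
    return w / w.sum()

# Hypothetical example: 4 customer requirements (CRs), 3 engineering characteristics (ECs).
cr_weights = flpr_weights([0.7, 0.55, 0.6])     # only n-1 = 3 pairwise judgments needed
relationship = np.array([[9, 3, 1],             # assumed 9-3-1 CR-to-EC relationship matrix
                         [3, 9, 1],
                         [1, 3, 9],
                         [1, 1, 3]], dtype=float)
ec_weights = cr_weights @ relationship
print("CR weights:", cr_weights.round(3))
print("EC weights:", (ec_weights / ec_weights.sum()).round(3))
```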
Incorporating consumer choice behavior into a product line design optimization model enhances the understanding of consumer choices and improves the opportunities to increase profit. Most product line optimization problems assume that the parameters of the consumer choice model are precisely known. However, the decision maker does not know the model parameters precisely because of insufficient sample data, measurement problems, and other factors. We investigate the problem of robust product line pricing under a multinomial logit model to account for uncertainty in the valuation parameter. First, we present a nominal product line model that maximizes profit. We then establish a robust product line model that maximizes the worst-case expected profit, where the valuation parameter lies in an uncertainty set. We consider both single-product and multiple-product development and derive closed-form expressions for the optimal prices. Through numerical experiments, we illustrate the benefit of robust product line pricing in addressing parameter uncertainty. We demonstrate that the gap between the expected nominal profit and the worst-case profit grows as the interval of the uncertainty set widens, while the robust profit improves relative to the worst-case nominal profit. Robust product line design can therefore ensure steadier, and even higher, profit.
Title: Robust product line pricing under the multinomial logit choice model, by Wei Qi, Xinggang Luo, Xuwang Liu, Zhongliang Zhang (Concurrent Engineering, pp. 273-282; Pub Date: 2022-06-16, DOI: 10.1177/1063293X221102205)
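To make the worst-case idea concrete, here is a small numeric sketch under an assumed single-product multinomial logit form with an interval uncertainty set for the valuation parameter; the functional form, cost, and interval width are illustrative assumptions, not the paper's model or its closed-form solution.

```python
# Numeric sketch under an assumed single-product MNL model: the product is bought
# with probability exp(v - p) / (1 + exp(v - p)), and the valuation v lies in the
# interval [v_hat - delta, v_hat + delta]. We compare the price maximizing nominal
# profit with the price maximizing worst-case profit. All numbers are hypothetical.
import numpy as np

def mnl_profit(price, v, cost):
    share = np.exp(v - price) / (1.0 + np.exp(v - price))   # MNL purchase probability
    return (price - cost) * share

v_hat, delta, cost = 6.0, 1.5, 2.0
prices = np.linspace(cost, 12.0, 1000)
v_grid = np.linspace(v_hat - delta, v_hat + delta, 41)       # interval uncertainty set

nominal = mnl_profit(prices, v_hat, cost)
worst = np.array([min(mnl_profit(p, v, cost) for v in v_grid) for p in prices])

p_nominal = prices[nominal.argmax()]
p_robust = prices[worst.argmax()]
print(f"nominal-optimal price {p_nominal:.2f} -> worst-case profit {worst[nominal.argmax()]:.3f}")
print(f"robust price          {p_robust:.2f} -> worst-case profit {worst.max():.3f}")
```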
Pub Date: 2022-06-07 | DOI: 10.1177/1063293X221104527
Yu-ling Jiao, Xue Deng, Lin Li, Xinran Liu, Nan Cao
To improve assembly line efficiency and optimize the layout, this paper presents a collaborative optimization model for a two-sided U-type assembly line, together with a novel p-l partition layout designed to minimize the number of workstations without increasing the length of the assembly line. Considering task orientation and time sequencing across workstations, the mathematical model of the two-sided U-type assembly line balancing problem is derived. A multi-level priority rule heuristic algorithm is developed to drive the optimization process. The multi-level priority rule heuristic, a modified particle swarm optimization algorithm, and a bi-objective integer programming method are applied to 20 classic examples. The calculation results show that the proposed method obtains the optimal result in 90% of the examples, which verifies the rationality of the collaborative optimization model and algorithm and provides a useful reference for modeling and solving two-sided U-type assembly line balancing problems.
Title: Modeling and solving the two-sided U-type assembly line balance based on a heuristic algorithm of a multi-priority rule (Concurrent Engineering, pp. 262-272)
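The following sketch illustrates only the core idea of a priority-rule balancing heuristic (rank tasks, then fill each workstation up to the cycle time) on a simple straight line with hypothetical task data; the paper's multi-level rules, two-sided constraints, and U-type layout are not reproduced.

```python
# Simplified priority-rule balancing sketch (ranked positional weight on a straight
# line with hypothetical tasks). The paper's method handles two-sided U-type lines
# with multi-level priority rules; this only shows the core "rank tasks, then fill
# each workstation up to the cycle time" idea.
tasks = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 4, "F": 3}        # task -> duration
prec = {"A": [], "B": ["A"], "C": ["A"], "D": ["B"], "E": ["C"], "F": ["D", "E"]}
cycle_time = 8

def successors(t):
    """All tasks that transitively require t to be finished first."""
    out, stack = set(), [t]
    while stack:
        cur = stack.pop()
        for s, preds in prec.items():
            if cur in preds and s not in out:
                out.add(s)
                stack.append(s)
    return out

def positional_weight(t):
    return tasks[t] + sum(tasks[s] for s in successors(t))

ranked = sorted(tasks, key=positional_weight, reverse=True)

stations, assigned, time_left = [[]], set(), cycle_time
while len(assigned) < len(tasks):
    fit = [t for t in ranked
           if t not in assigned
           and all(p in assigned for p in prec[t])
           and tasks[t] <= time_left]
    if fit:
        t = fit[0]                          # highest-priority task that fits
        stations[-1].append(t)
        assigned.add(t)
        time_left -= tasks[t]
    else:
        stations.append([])                 # open a new workstation
        time_left = cycle_time

print("workstations:", stations)            # 3 stations for this data set
```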
Pub Date: 2022-06-05 | DOI: 10.1177/1063293X221106501
S. M., Varalakshmi Perumal, Gowtham Yuvaraj, Sakthi Jaya Sundar Rajasekar
The survival rate of lung patients can be improved if pneumonia is detected early. Chest X-ray (CXR) images are the most common means of identifying and diagnosing pneumonia, yet detecting pneumonia from CXR images is a difficult problem even for a competent radiologist. Many people are at risk of contracting pneumonia, especially in developing countries where billions of people live in energy poverty and rely on polluting energy sources. Although effective tools exist to prevent, diagnose, and treat pneumonia, pneumonia-related deaths remain prevalent in most countries, and only a small share of health budgets is allocated to eradicating the disease. If the diagnosis can be made in a more reliable and cost-effective way, tackling the disease will not be a herculean task. Machine learning algorithms offer a way to identify, diagnose, and predict the disease in minimal time. This paper presents the identification of pneumonia from chest X-rays by implementing traditional machine learning algorithms with an ensemble, using an optimal number of image features selected with the help of the correlation coefficient; a deep learning approach is also implemented. The proposed traditional machine learning approach and the deep learning approach achieve accuracy rates of 93.57% and 93.59%, with pneumonia detection times of approximately 157,452 s and 240,253 s, respectively.
Title: Detection of Pneumonia from Chest X-Ray images using Machine Learning (Concurrent Engineering, pp. 325-334)
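A generic sketch of the described pipeline is given below: features are ranked by their absolute correlation coefficient with the label and a soft-voting ensemble of traditional classifiers is trained. The feature matrix, labels, and classifier choices are placeholders, not the authors' code, features, or dataset split.

```python
# Generic sketch of the described pipeline (not the authors' code): select image
# features by absolute Pearson correlation with the label, then train a soft-voting
# ensemble of traditional classifiers. X stands in for precomputed CXR image
# features; here it is random placeholder data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 40))                 # 200 images x 40 extracted features (placeholder)
y = rng.integers(0, 2, size=200)               # 0 = Normal, 1 = Pneumonia (placeholder labels)

# Keep the k features most correlated with the label (correlation-coefficient selection).
k = 10
corr = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])])
X_sel = X[:, np.argsort(corr)[::-1][:k]]

X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, test_size=0.2, random_state=0)
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(probability=True, random_state=0))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
print("held-out accuracy:", ensemble.score(X_te, y_te))
```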
Pub Date: 2022-06-01 | DOI: 10.1177/1063293X221089086
A. Vidhya, P. M. Kumar
Every organization in this digital age is expected to see its digital data grow exponentially because of machine-generated content. The advanced computations of Big Data now offer many opportunities for researchers working on security enhancements to ensure efficient access to data stores. Our research work derives a Fusion-based Advanced Encryption Algorithm (FAEA) for a cost-optimized, satisfiable security model for the use of Big Data in the cloud. The FAEA method is evaluated for efficiency, scalability, and security, and is shown to be 98% ahead of the existing Security Hadoop Distributed File System (Sec HDFS) and Map Reduce Encryption Scheme (MRE) methods. This work also addresses the problems of using Big Data in the cloud with respect to a sole solution, cost-effective solutioning, and proof of ownership, and the outcome analysis of FAEA revolves around these three major problems. The work should be very helpful to IT industries in managing Big Data in the cloud with appropriate security for the coming decade.
Title: Fusion-based advanced encryption algorithm for enhancing the security of Big Data in Cloud (Concurrent Engineering, pp. 171-180)
Pub Date: 2022-06-01 | DOI: 10.1177/1063293X221108831
K. Vijayakumar
In the past few years, science has played an impressive role in providing solutions to various real-life problems. The current growth in science, technology, and computing has helped the human community live with a better ambience, giving people access to a wide variety of modern facilities that further enhance their lifestyle and work atmosphere. One of the major contributors to this enhancement is Concurrent Engineering (CE), which focuses on time optimization while maintaining the quality of the product being developed, thereby providing optimal solutions to challenges faced in our day-to-day life. Concurrent Engineering is implemented through CAD, resource management, digital simulation, and process planning, with improved efficiency and flexibility. Likewise, Machine Learning (ML) is another domain that plays a crucial role in improving the lifestyle of the human community. ML algorithms and methodologies allow systems to build models that learn and train from input datasets and generate results based on the provided inputs, improving efficiency, productivity, and decision-making capabilities. When ML methodologies support CE, the overall capability and accuracy of the system are strengthened, helping humankind improve current facilities and technologies.
Title: Machine Learning and Automation in Concurrent Engineering (Concurrent Engineering, pp. 133-134)
Pub Date: 2022-05-26 | DOI: 10.1177/1063293X221101358
P. Shamili, B. Muruganantham
Federation Payment Tree (FP-Tree), a new off-chain construction with a zero-knowledge hash time lock commitment setup, is proposed in this paper. The security of a blockchain rests on consensus protocols, which introduce delays when many concurrent transactions must be processed within a given throughput; the scalability of a blockchain is its ability to support an increasing transaction workload. The FP-Tree connects a zero-knowledge hash lock commitment with off-chain protocols through a payment channel, enabling execution of off-chain protocols in which the parties interact without involving the consensus protocol and allowing payments to be made across an authorization path of payment channels. Such a payment tree requires two commitment schemes, Timelock and Fundlock, with each party locking funds for a time period. The main challenges faced in this work are computational power, storage, and cryptography. Furthermore, we discuss several attacks on off-chain payment channels that allow a malicious adversary to cause loss of funds. The FP-Tree supports multi-party computation (MPC), merging transactions into a single hash value in the payment tree; the parties can generate this single hash value using less than O(log2 N) time and less than O(log2 N) space to combine elements over the length of a single hash. The results are discussed in this paper, and the efficiency of the FP-Tree makes it well suited to blockchain technology. We achieve an accuracy of 60.2% for the federation payment tree when compared with proof of work and proof of authority.
Title: Federation payment tree: An improved payment channel for scaling and efficient ZK-hash time lock commitment framework in blockchain technology (Concurrent Engineering, pp. 317-324)
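The sketch below illustrates only the basic hash time lock commitment primitive that the FP-Tree builds on (claim with the correct preimage before the timeout, refund afterwards); it is an in-memory toy, not the federation payment tree protocol, and the party names and amounts are hypothetical.

```python
# Minimal sketch of a hash time lock commitment (HTLC): funds can be claimed only
# with the hash preimage before the timeout, otherwise they can be refunded to the
# sender. This is an in-memory illustration, not the paper's FP-Tree protocol.
import hashlib
import time

class HashTimeLock:
    def __init__(self, sender, receiver, amount, hash_lock, timeout_s):
        self.sender, self.receiver, self.amount = sender, receiver, amount
        self.hash_lock = hash_lock                  # sha256 digest of the secret preimage
        self.deadline = time.time() + timeout_s     # Timelock: end of the claim window
        self.settled = False

    def claim(self, preimage: bytes) -> str:
        if self.settled:
            return "already settled"
        if time.time() > self.deadline:
            return "timeout: claim rejected"
        if hashlib.sha256(preimage).hexdigest() != self.hash_lock:
            return "wrong preimage: claim rejected"
        self.settled = True
        return f"{self.amount} paid to {self.receiver}"

    def refund(self) -> str:
        if self.settled:
            return "already settled"
        if time.time() <= self.deadline:
            return "too early: funds still locked (Fundlock)"
        self.settled = True
        return f"{self.amount} refunded to {self.sender}"

secret = b"off-chain-secret"
lock = HashTimeLock("Alice", "Bob", 10, hashlib.sha256(secret).hexdigest(), timeout_s=60)
print(lock.claim(b"guess"))       # wrong preimage: claim rejected
print(lock.claim(secret))         # 10 paid to Bob
print(lock.refund())              # already settled
```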
Pub Date: 2022-05-19 | DOI: 10.1177/1063293X221094345
J. Indumathi, V. Kaliraj
An Intelligent Transport System (ITS) model that builds on the demands and capabilities of a traffic prediction system in the contemporary urban context is proposed in this paper. Deep Learning (DL) is becoming computationally comfortable to train, and researchers and practitioners aim to set as many hyperparameters automatically as possible. To be a great enabler, ITS has to find suitable solutions to issues such as alerting interested parties with real-time traffic information, providing on-demand retrieval of long-term statistical data, reducing the average waiting time of commuters, offering protected, consistent, value-added services, and dynamically controlling signal timing based on traffic flow. All of these limitations call for immediate attention, and among them are problems such as the sharp nonlinearities caused by transitions between free flow, breakdown, recovery, and congestion. The contributions of this paper are as follows: (i) adopt a scalable approach to handle the sparse information formed; (ii) exploit the attention mechanism to overcome the disadvantages of Long Short-Term Memory (LSTM) methods for traffic prediction; (iii) propose a new fusion smoothing model; (iv) investigate, develop, and utilize Bayesian contextual bandits; and (v) recommend a linear model based on LSTM in combination with Bayesian contextual bandits. Travel speed prediction is performed by the LSTM. The results confirm that the proposed model can adeptly achieve the goal of the system and offers the best solution to overcome these issues.
Title: Petite term traffic flow prediction using deep learning for augmented flow of vehicles (Concurrent Engineering, pp. 214-224)
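As a minimal illustration of the LSTM travel-speed prediction component mentioned in the abstract, the PyTorch sketch below trains a one-layer LSTM regressor on a synthetic speed series; the attention mechanism, fusion smoothing model, and Bayesian contextual bandits are not shown, and all shapes and hyperparameters are assumptions.

```python
# Minimal PyTorch sketch of an LSTM travel-speed predictor on synthetic data.
# Only the LSTM component is shown; the attention mechanism, fusion smoothing
# model, and Bayesian contextual bandits from the paper are not reproduced.
import math
import torch
import torch.nn as nn

class SpeedLSTM(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                    # x: (batch, window, 1)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])      # predict the next speed value

torch.manual_seed(0)
t = torch.arange(0, 500, dtype=torch.float32)
speed = 50 + 10 * torch.sin(2 * math.pi * t / 24) + torch.randn(500)   # daily-like pattern + noise
window = 24
X = torch.stack([speed[i:i + window] for i in range(len(speed) - window)]).unsqueeze(-1)
y = speed[window:].unsqueeze(-1)

model = SpeedLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()
for epoch in range(50):                      # short training loop for illustration
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
print("final training MSE:", round(loss.item(), 3))
```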
Pub Date: 2022-05-09 | DOI: 10.1177/1063293X221097447
V. Nanammal, Venu Gopala Krishnan Jayagopalan
Nowadays, the medical industry is growing rapidly with the adoption of the latest technologies, and logical evaluation together with security norms provides a robust platform to enhance the effectiveness of the industry dramatically. In this paper, a digital biomedical image processing based pneumonia identification system is introduced with enhanced security features. To improve the reliability of the application, a well-known watermarking-based security constraint is included to protect both the hospital environment and the patients, providing a strong level of protection to the images to be tested. The main intention of this paper is to introduce a novel security-enabled digital image processing scheme that identifies pneumonia at an early stage using proper classification principles. A novel deep learning algorithm called the enhanced Dynamic Learning Neural Network (DLNN) is introduced; it is a hybrid of the conventional DLNN algorithm and the Support Vector Classification algorithm. The proposed approach effectively identifies pneumonia at an early stage, but a security inspection at the testing stage is essential before the disease is analyzed: the test image must be properly watermarked with the logo of the corresponding hospital, in which case the image is processed; otherwise the proposed approach skips the image. Such security features strengthen the medical industry and allow patients to receive appropriate, error-free care with the help of this technology. A chest X-ray Kaggle dataset containing 5856 chest X-ray images in two categories, Pneumonia and Normal, is used to evaluate the system. By processing these images and identifying pneumonia effectively, the proposed watermarking-enabled security features have a positive impact on protection in the medical field. The results section provides proof of the effectiveness of the proposed approach and its prediction efficiency.
Title: A secured biomedical image processing scheme to detect pneumonia disease using dynamic learning principles (Concurrent Engineering, pp. 245-252)
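The abstract does not specify the watermarking scheme, so the sketch below assumes a simple least-significant-bit (LSB) embedding of a small binary hospital logo and uses a placeholder classifier; it only illustrates the described gate of verifying the watermark before an image is allowed to proceed to classification.

```python
# Hedged illustration of the "verify watermark before classifying" gate described
# in the abstract. The watermarking scheme and classifier are assumptions: a small
# binary logo embedded in the least significant bits of the top-left 8x8 block,
# and a trivial placeholder classifier.
import numpy as np

LOGO = (np.arange(64).reshape(8, 8) % 2).astype(np.uint8)   # hypothetical 8x8 binary logo

def embed_logo(image: np.ndarray) -> np.ndarray:
    """Embed the logo into the LSBs of the top-left 8x8 block of a grayscale image."""
    marked = image.copy()
    marked[:8, :8] = (marked[:8, :8] & 0xFE) | LOGO
    return marked

def has_valid_watermark(image: np.ndarray, min_match=0.95) -> bool:
    extracted = image[:8, :8] & 1
    return (extracted == LOGO).mean() >= min_match

def classify(image: np.ndarray) -> str:
    return "Pneumonia" if image.mean() > 127 else "Normal"   # placeholder classifier

def process(image: np.ndarray) -> str:
    if not has_valid_watermark(image):
        return "skipped: missing or invalid hospital watermark"
    return classify(image)

rng = np.random.default_rng(1)
xray = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)  # placeholder image
print(process(xray))                 # skipped (no watermark)
print(process(embed_logo(xray)))     # classified
```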
Pub Date: 2022-04-23 | DOI: 10.1177/1063293X221081543
N. Umasankari, B. Muthukumar
Intelligent computing areas such as automatic biometric authentication are emerging, high-priority research fields in which researchers have invented several biometric applications, leading to revolutionary developments in recent years. In this approach, a novel algorithm known as Modified AntLion Optimization (MALO) combined with a Multi Kernel Support Vector Machine (MKSVM) is used to classify and recognize fingerprint and retina images efficiently. In the early stage of this research, the biometric images are pre-processed for contrast enhancement using the histogram equalization technique. Next, features are extracted by Gray Level Co-occurrence Matrix (GLCM), minutiae, Gray Level Run Length Matrix (GLRLM), and autocorrelation methods. The extracted features are then reduced by the Probabilistic Principal Component Analysis (PPCA) method, and feature selection is performed by applying the MALO technique to obtain the optimal features. Finally, the machine learning classification technique, the MKSVM, is executed to categorize the biometric recognition. The performance of the proposed algorithm is analyzed in terms of accuracy, sensitivity, and specificity. Results indicate that the MKSVM yields the best accuracy of 91.60% and 90.30% for fingerprint and retina image recognition respectively, sensitivity of 84.70% and 89.41% respectively, and specificity of 91.30% and 92.70% respectively.
Title: Optimal feature reduction for biometric authentication using intelligent computing techniques (Concurrent Engineering, pp. 237-244)
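The sketch below illustrates two pieces of the described pipeline: GLCM texture features (scikit-image >= 0.19 API) and a multi-kernel SVM built as a weighted sum of RBF and linear kernels passed to a precomputed-kernel SVC. The MALO feature selection and PPCA reduction steps are omitted, and the images, labels, and kernel weights are placeholders.

```python
# Sketch of two pipeline pieces (not the authors' code): GLCM texture features via
# scikit-image (>= 0.19 function names) and a multi-kernel SVM built as a weighted
# sum of RBF and linear kernels passed to SVC(kernel="precomputed"). The MALO
# selection and PPCA reduction steps are omitted; images and labels are random.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(img):
    """Contrast, homogeneity, energy, and correlation from one GLCM (distance 1, angle 0)."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return np.array([graycoprops(glcm, p)[0, 0]
                     for p in ("contrast", "homogeneity", "energy", "correlation")])

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(60, 64, 64), dtype=np.uint8)   # placeholder biometric images
labels = rng.integers(0, 2, size=60)                                # placeholder classes

X = StandardScaler().fit_transform(np.array([glcm_features(im) for im in images]))
X_train, X_test, y_train, y_test = X[:45], X[45:], labels[:45], labels[45:]

def multi_kernel(A, B, w_rbf=0.6, w_lin=0.4, gamma=0.5):
    """Weighted combination of an RBF kernel and a linear kernel."""
    return w_rbf * rbf_kernel(A, B, gamma=gamma) + w_lin * linear_kernel(A, B)

clf = SVC(kernel="precomputed")
clf.fit(multi_kernel(X_train, X_train), y_train)
print("held-out accuracy:", clf.score(multi_kernel(X_test, X_train), y_test))
```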