An automated learning model for sentiment analysis and data classification of Twitter data using balanced CA-SVM
Pub Date: 2021-07-20 | DOI: 10.1177/1063293X211031485
C. Cyril, J. Beulah, Neelakandan Subramani, Prakash Mohan, A. Harshavardhan, D. Sivabalaselvamani
Modern society spends a large part of every day on social media, where users share many details with their friends. Information obtained from these conversations has been used in several applications. Sentiment analysis is one such application: applied to a Twitter data set, it identifies a user's emotion, on the basis of which different problems can be solved. First, the data from the Twitter database is preprocessed; in this step, tokenization, stemming, stop-word removal, and number removal are carried out. The proposed automated-learning, CA-SVM-based sentiment analysis model reads the Twitter data set and then processes it to extract features, yielding a set of terms. Using these terms, the tweets are clustered with TGS-K-means clustering, which measures Euclidean distance over features such as the semantic sentiment score (SSS), gazetteer and symbolic sentiment support (GSSS), and topical sentiment score (TSS). The method then classifies the tweets with a support vector machine (CA-SVM) according to a support value computed from the above measures. The results are validated using a k-fold cross-validation methodology, and classification is performed with the Balanced CA-SVM (Deep Learning Modified Neural Network). The results are evaluated and compared with existing works: the proposed model achieved 92.48% accuracy and a 92.05% sentiment score.
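The processing chain described above (text cleaning, term extraction, k-means clustering over the term vectors, and an SVM validated with k-fold cross-validation) can be sketched with standard Python libraries. The snippet below is only an illustrative sketch, not the authors' implementation: the paper's SSS, GSSS, and TSS features and the TGS-K-means/CA-SVM variants are replaced by plain TF-IDF features, k-means, and a standard SVM, and the tweets and labels are made-up examples.

```python
# Illustrative sketch only: preprocess tweets, cluster them with k-means, and
# classify sentiment with an SVM validated by k-fold cross-validation.
import re

from nltk.corpus import stopwords
from nltk.stem import PorterStemmer
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

stemmer = PorterStemmer()
stop_words = set(stopwords.words("english"))  # requires nltk.download("stopwords") once

def preprocess(tweet: str) -> str:
    """Tokenization, stemming, stop-word removal, and number removal."""
    tokens = re.findall(r"[a-z]+", tweet.lower())  # keeps letters only, so numbers are dropped
    return " ".join(stemmer.stem(t) for t in tokens if t not in stop_words)

# Hypothetical tweets and sentiment labels (1 = positive, 0 = negative).
tweets = ["I love this phone", "Great service today", "So happy with the update",
          "Worst support ever", "This app keeps crashing", "Totally disappointed again"]
labels = [1, 1, 1, 0, 0, 0]

# Term extraction: cleaned tweets become a term-weight (TF-IDF) matrix.
X = TfidfVectorizer().fit_transform([preprocess(t) for t in tweets])

# Cluster the tweets (k-means uses Euclidean distance on the feature vectors).
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Classify with an SVM and validate with k-fold cross-validation.
scores = cross_val_score(SVC(kernel="linear"), X, labels, cv=3)
print("clusters:", clusters, "mean CV accuracy:", scores.mean())
```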
{"title":"An automated learning model for sentiment analysis and data classification of Twitter data using balanced CA-SVM","authors":"C. Cyril, J. Beulah, Neelakandan Subramani, Prakash Mohan, A. Harshavardhan, D. Sivabalaselvamani","doi":"10.1177/1063293X211031485","DOIUrl":"https://doi.org/10.1177/1063293X211031485","url":null,"abstract":"The modern society runs over the social media for their most time of every day. The web users spend their most time in social media and they share many details with their friends. Such information obtained from their chat has been used in several applications. The sentiment analysis is the one which has been applied with Twitter data set toward identifying the emotion of any user and based on those different problems can be solved. Primarily, the data as of the Twitter database is preprocessed. In this step, tokenization, stemming, stop word removal, and number removal are done. The proposed automated learning with CA-SVM based sentiment analysis model reads the Twitter data set. After that they have been processed to extract the features which yield set of terms. Using the terms, the tweets are clustered using TGS-K means clustering which measures Euclidean distance according to different features like semantic sentiment score (SSS), gazetteer and symbolic sentiment support (GSSS), and topical sentiment score (TSS). Further, the method classifies the tweets according to support vector machine (CA-SVM) which classifies the tweet according to the support value which is measured based on the above two measures. The attained results are validated utilizing k-fold cross-validation methodology. Then, the classification is performed by utilizing the Balanced CA-SVM (Deep Learning Modified Neural Network). The results are evaluated and compared with the existing works. The Proposed model achieved 92.48 % accuracy and 92.05% sentiment score contrasted with the existing works.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"31 1","pages":"386 - 395"},"PeriodicalIF":0.0,"publicationDate":"2021-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89441587","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automated glaucoma detection from fundus images using wavelet-based denoising and machine learning
Pub Date: 2021-07-09 | DOI: 10.1177/1063293X211026620
Sibghatullah I. Khan, S. Choubey, A. Choubey, Abhishek Bhatt, Pandya Vyomal Naishadhkumar, M. M. Basha
Glaucoma is a dominant, irreversible neurodegenerative eye disease caused by damage to the optic nerve head resulting from prolonged elevated intra-ocular pressure within the eye. Recognizing glaucoma is an essential task for ophthalmologists. In this paper, we propose a methodology to classify fundus images into normal and glaucoma categories. The proposed approach denoises digital fundus images by using a non-Gaussian bivariate probability distribution function to model the statistics of the wavelet coefficients of glaucoma images. Traditional image features are then extracted, followed by a standard feature selection algorithm. The selected features are fed to a least-squares support vector machine classifier employing various kernel functions. The comparison results show that the proposed approach achieves a maximum classification accuracy of about 91.22%, exceeding the best existing approaches.
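As a rough illustration of the overall flow (and not the paper's method), the sketch below denoises an image in the wavelet domain and classifies simple hand-crafted features with an SVM. The paper's non-Gaussian bivariate shrinkage model is replaced here by plain soft-thresholding, the LS-SVM by scikit-learn's standard SVM, and the images, features, and labels are synthetic placeholders.

```python
# Simplified sketch: wavelet-domain denoising of a fundus image, hand-crafted
# feature extraction, and SVM classification with cross-validation.
import numpy as np
import pywt
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def wavelet_denoise(img: np.ndarray, wavelet: str = "db2", level: int = 2) -> np.ndarray:
    """Soft-threshold denoising (a stand-in for the paper's bivariate shrinkage)."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745   # noise estimate from finest diagonal band
    thr = sigma * np.sqrt(2 * np.log(img.size))           # universal threshold
    denoised = [coeffs[0]] + [tuple(pywt.threshold(c, thr, mode="soft") for c in d) for d in coeffs[1:]]
    return pywt.waverec2(denoised, wavelet)

def extract_features(img: np.ndarray) -> np.ndarray:
    """Placeholder 'traditional' features: simple intensity statistics."""
    return np.array([img.mean(), img.std(), np.percentile(img, 90)])

# Synthetic stand-ins for fundus images and labels (0 = normal, 1 = glaucoma).
rng = np.random.default_rng(0)
images = [rng.random((64, 64)) for _ in range(20)]
labels = np.array([0] * 10 + [1] * 10)

X = np.array([extract_features(wavelet_denoise(im)) for im in images])
clf = SVC(kernel="rbf")   # standard SVM used in place of the least-squares SVM
print("mean CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```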
{"title":"Automated glaucoma detection from fundus images using wavelet-based denoising and machine learning","authors":"Sibghatullah I. Khan, S. Choubey, A. Choubey, Abhishek Bhatt, Pandya Vyomal Naishadhkumar, M. M. Basha","doi":"10.1177/1063293X211026620","DOIUrl":"https://doi.org/10.1177/1063293X211026620","url":null,"abstract":"Glaucoma is a domineering and irretrievable neurodegenerative eye disease produced by the optical nerve head owed to extended intra-ocular stress inside the eye. Recognition of glaucoma is an essential job for ophthalmologists. In this paper, we propose a methodology to classify fundus images into normal and glaucoma categories. The proposed approach makes use of image denoising of digital fundus images by utilizing a non-Gaussian bivariate probability distribution function to model the statistics of wavelet coefficients of glaucoma images. The traditional image features were extracted followed by the popular feature selection algorithm. The selected features are then fed to the least square support vector machine classifier employing various kernel functions. The comparison result shows that the proposed approach offers maximum classification accuracy of nearly 91.22% over the existing best approaches.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"78 1","pages":"103 - 115"},"PeriodicalIF":0.0,"publicationDate":"2021-07-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80831565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Batch-based agile program management approach for coordinating IT multi-project concurrent development
Pub Date: 2021-06-08 | DOI: 10.1177/1063293X211015236
Qing Yang, Yingxin Bi, Qinru Wang, Tao Yao
Software development projects have undergone remarkable changes with the arrival of agile development approaches. Many firms face the need to use these approaches to manage entities consisting of multiple projects (i.e. programs) simultaneously and efficiently. New technologies such as big data bring great capability to, and rich demand for, the IT application systems of commercial banks, which are characterized by multiple sub-projects, strong inter-project correlation, and numerous participating project teams. Hence, taking the IT program management of a bank in China as a case, we explore methods to solve the problems of multi-project concurrent development practice by integrating the ideas of program and batch management. First, to coordinate the multi-project development process, this paper presents a batch-based agile program management approach that synthesizes concurrent engineering with agile methods, and compares the application of batch management in software development projects with that in manufacturing processes. Second, we analyze concurrent multi-project development practice in batch-based agile program management, including the overlapping between stages, between an individual project's activities, and between multiple projects that share common resources and environments, so as to stimulate knowledge transfer. Third, to facilitate the communication and coordination of batch-based program management, we present a double-level responsibility organizational structure for batch management.
{"title":"Batch-based agile program management approach for coordinating IT multi-project concurrent development","authors":"Qing Yang, Yingxin Bi, Qinru Wang, Tao Yao","doi":"10.1177/1063293X211015236","DOIUrl":"https://doi.org/10.1177/1063293X211015236","url":null,"abstract":"Software development projects have undergone remarkable changes with the arrival of agile development approaches. Many firms are facing a need to use these approaches to manage entities consisting of multiple projects (i.e. programs) simultaneously and efficiently. New technologies such as big data provide a huge power and rich demand for the IT application system of the commercial bank which has the characteristics of multiple sub-projects, strong inter-project correlation, and numerous project participating teams. Hence, taking the IT program management of a bank in China as a case, we explore the methods to solve the problems in multi-project concurrent development practice through integrating the ideas of program and batch management. First, to coordinate the multi-project development process, this paper presents the batch-based agile program management approach that synthesizes concurrent engineering with agile methods. And we compare the application of batch management between software development projects and manufacturing process. Further, we analyze the concurrent multi-project development practice in the batch-based agile program management, including the overlapping between stages, individual project’s activities, and multiple projects based on common resources and environment to stimulate the knowledge transfer. Third, to facilitate the communication and coordination of batch-based program management, we present the double-level responsibility organizational structure of batch management.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"20 1","pages":"343 - 355"},"PeriodicalIF":0.0,"publicationDate":"2021-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89084677","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Time-aware cloud manufacturing service selection using unknown QoS prediction and uncertain user preferences
Pub Date: 2021-06-03 | DOI: 10.1177/1063293X211019503
Ying Yu, Shan Li, Jing Ma
Selecting the most efficient service from several functionally equivalent ones remains an ongoing challenge. Most manufacturing service selection methods regard static quality of service (QoS) as a major competitiveness factor. However, adaptation is difficult when a variable network environment significantly affects the stability of QoS performance in complex task processes. Therefore, dynamic temporal QoS values rather than fixed values are gaining ground for service evaluation. User preferences play an important role when service demanders select personalized services, and this aspect has been poorly investigated in temporal QoS-aware cloud manufacturing (CMfg) service selection methods. Furthermore, it is impractical to acquire all temporal QoS values, which affects evaluation validity. Therefore, this paper proposes a time-aware CMfg service selection approach to address these issues. The proposed approach first develops an unknown-QoS prediction model that exploits similarity features in the temporal QoS values. The model considers QoS attributes and service candidates jointly, helping to predict multidimensional QoS values accurately and easily. Overall QoS is then evaluated using a proposed temporal QoS measuring algorithm that self-adapts to user preferences. Specifically, we employ the temporal QoS conflict feature to overcome one-sided user preferences, an aspect that has been largely overlooked previously. Experimental results confirm that the proposed approach outperforms classical time-series prediction methods and can also find better services by reducing user preference misjudgments.
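One common way to realize the "predict the unknown QoS values from similar services, then score candidates with preference weights" idea is a neighborhood-based prediction, sketched below. This is an assumption-laden illustration, not the paper's model: the QoS matrix, the cosine similarity over commonly observed time slots, and the preference weights are all hypothetical stand-ins.

```python
# Illustrative sketch: fill missing temporal QoS values from similar services,
# then rank service candidates with user-preference weights.
import numpy as np

# Rows = candidate services, columns = time slots; np.nan marks unobserved QoS.
qos = np.array([
    [0.90, 0.80, np.nan, 0.85],
    [0.70, 0.75, 0.72, 0.70],
    [0.92, np.nan, 0.88, 0.90],
])

def predict_missing(qos: np.ndarray) -> np.ndarray:
    filled = qos.copy()
    col_means = np.nanmean(qos, axis=0)
    for i, j in zip(*np.where(np.isnan(qos))):
        sims, vals = [], []
        for k in range(qos.shape[0]):
            if k == i or np.isnan(qos[k, j]):
                continue
            mask = ~np.isnan(qos[i]) & ~np.isnan(qos[k])   # commonly observed time slots
            if mask.sum() < 2:
                continue
            a, b = qos[i, mask], qos[k, mask]
            sims.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))  # cosine similarity
            vals.append(qos[k, j])
        filled[i, j] = np.average(vals, weights=sims) if sims else col_means[j]
    return filled

preference = np.array([0.25, 0.25, 0.25, 0.25])   # hypothetical user weights per time slot
scores = predict_missing(qos) @ preference
print("best candidate:", int(np.argmax(scores)))
```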
{"title":"Time-aware cloud manufacturing service selection using unknown QoS prediction and uncertain user preferences","authors":"Ying Yu, Shan Li, Jing Ma","doi":"10.1177/1063293X211019503","DOIUrl":"https://doi.org/10.1177/1063293X211019503","url":null,"abstract":"Selecting the most efficient from several functionally equivalent services remains an ongoing challenge. Most manufacturing service selection methods regard static quality of service (QoS) as a major competitiveness factor. However, adaptations are difficult to achieve when variable network environment has significant impact on QoS performance stabilization in complex task processes. Therefore, dynamic temporal QoS values rather than fixed values are gaining ground for service evaluation. User preferences play an important role when service demanders select personalized services, and this aspect has been poorly investigated for temporal QoS-aware cloud manufacturing (CMfg) service selection methods. Furthermore, it is impractical to acquire all temporal QoS values, which affects evaluation validity. Therefore, this paper proposes a time-aware CMfg service selection approach to address these issues. The proposed approach first develops an unknown-QoS prediction model by utilizing similarity features from temporal QoS values. The model considers QoS attributes and service candidates integrally, helping to predict multidimensional QoS values accurately and easily. Overall QoS is then evaluated using a proposed temporal QoS measuring algorithm which can self-adapt to user preferences. Specifically, we employ the temporal QoS conflict feature to overcome one-sided user preferences, which has been largely overlooked previously. Experimental results confirmed that the proposed approach outperformed classical time series prediction methods, and can also find better service by reducing user preference misjudgments.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"126 1","pages":"370 - 385"},"PeriodicalIF":0.0,"publicationDate":"2021-06-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75578861","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Design of novel multi filter union feature selection framework for breast cancer dataset
Pub Date: 2021-05-31 | DOI: 10.1177/1063293X211016046
Dinesh Morkonda Gunasekaran, Prabha Dhandayudam
Breast cancer is now commonly diagnosed in women. Feature selection is an important step in constructing a classification framework. We propose a multi-filter union (MFU) feature selection method for breast cancer data sets. The feature selection process uses a union model based on the random forest and logistic regression algorithms to select the important features in the data set, and the performance of the data analysis is evaluated using the optimal feature subset selected from it. Experiments are conducted on the Wisconsin Diagnostic Breast Cancer data set and then on a real data set from a women's health care center. The results show that the proposed approach performs well and efficiently compared with existing feature selection algorithms.
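The union idea can be illustrated directly on the Wisconsin Diagnostic Breast Cancer data set that ships with scikit-learn. The sketch below is an interpretation under stated assumptions, not the authors' exact procedure: the top-k cutoff, the choice of k = 10, and the SVM used to evaluate the reduced feature set are all assumed here.

```python
# Illustrative sketch of a multi-filter union: keep the union of the features
# ranked highest by a random forest and by logistic regression.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)    # Wisconsin Diagnostic Breast Cancer data
X = StandardScaler().fit_transform(X)
k = 10  # hypothetical number of features kept by each filter

rf_imp = RandomForestClassifier(random_state=0).fit(X, y).feature_importances_
lr_imp = np.abs(LogisticRegression(max_iter=5000).fit(X, y).coef_[0])
rf_top = np.argsort(rf_imp)[::-1][:k]
lr_top = np.argsort(lr_imp)[::-1][:k]
union = sorted(set(rf_top) | set(lr_top))      # multi-filter union of important features

baseline = cross_val_score(SVC(), X, y, cv=5).mean()
reduced = cross_val_score(SVC(), X[:, union], y, cv=5).mean()
print(f"{len(union)} features selected; accuracy {reduced:.3f} vs {baseline:.3f} with all features")
```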
{"title":"Design of novel multi filter union feature selection framework for breast cancer dataset","authors":"Dinesh Morkonda Gunasekaran, Prabha Dhandayudam","doi":"10.1177/1063293X211016046","DOIUrl":"https://doi.org/10.1177/1063293X211016046","url":null,"abstract":"Nowadays women are commonly diagnosed with breast cancer. Feature based Selection method plays an important step while constructing a classification based framework. We have proposed Multi filter union (MFU) feature selection method for breast cancer data set. The feature selection process based on random forest algorithm and Logistic regression (LG) algorithm based union model is used for selecting important features in the dataset. The performance of the data analysis is evaluated using optimal features subset from selected dataset. The experiments are computed with data set of Wisconsin diagnostic breast cancer center and next the real data set from women health care center. The result of the proposed approach shows high performance and efficient when comparing with existing feature selection algorithms.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"92 1","pages":"285 - 290"},"PeriodicalIF":0.0,"publicationDate":"2021-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76805764","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prioritizing failure risks of components based on information axiom for product redesign considering fuzzy and random uncertainties
Pub Date: 2021-05-27 | DOI: 10.1177/1063293X211015999
Zhenhua Liu, Xuening Chu, Hongzhan Ma, Mengting Zhang
Prioritizing the failure risks of the components of an existing product is critical for product redesign decision-making under various uncertainties. Two issues need to be addressed in the failure risk prioritization process. The first is evaluating the failure risk of each failure mode for each component. Currently, many failure mode and effects analysis (FMEA) methods based on fuzzy logic seldom deal with the randomness of failure mode occurrence during the product operation stage. Therefore, in this research, the information axiom is extended to calculate the information contents of risk indices considering these two types of uncertainty. The second issue is evaluating the degree of failure risk of each component. The weighted sum of information content over all failure modes, based on a fuzzy logarithmic least squares method (FLLSM), is used to assess the risk of each component. Additionally, a case study prioritizing the failure risks of the components of a crawler crane demonstrates the effectiveness of the developed approach.
{"title":"Prioritizing failure risks of components based on information axiom for product redesign considering fuzzy and random uncertainties","authors":"Zhenhua Liu, Xuening Chu, Hongzhan Ma, Mengting Zhang","doi":"10.1177/1063293X211015999","DOIUrl":"https://doi.org/10.1177/1063293X211015999","url":null,"abstract":"The prioritization of the failure risks of the components in an existing product is critical for product redesign decision-making considering various uncertainties. Two issues need to be addressed in the failure risk prioritization process. One is the evaluation of the failure risk considering each failure mode for each component. Currently, many failure mode effects and analysis (FMEA) methods based on fuzzy logic seldom deal with the randomness in failure mode occurrence during the product operation stage. Therefore, in this research, the information axiom is extended to calculate the information contents of risk indices considering these two types of uncertainty. The second issue is the evaluation of the degree of failure risk for each of the components. The weighted sum of information content considering all failure modes is used to assess the risk of components based on a fuzzy logarithmic least squares method (FLLSM). Additionally, a case study to prioritize the failure risks of various components in a crawler crane is implemented to demonstrate the effectiveness of the developed approach.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"63 1","pages":"356 - 369"},"PeriodicalIF":0.0,"publicationDate":"2021-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91041327","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Assembly line balance research methods, literature and development review
Pub Date: 2021-05-03 | DOI: 10.1177/1063293X20987910
Yu-ling Jiao, Han Jin, Xiao-cui Xing, Ming-juan Li, Xinran Liu
With the continuous upgrading of manufacturing systems, the assembly line balancing problem (ALBP) has gradually become more complicated, and research on its application theory and solution methods continues to deepen. To clarify the research directions and development status of assembly line balancing, 89 articles are read and studied. We classify ALBPs to construct a network structure of the research from horizontal classification and vertical analysis. The ALBP framework is given horizontally according to the number of models (i.e. the number of products), the layout shape of the assembly line, and the nature of the task-time data. The “seven steps for a scientific paper” are proposed vertically according to the research steps, to trace the research path of the scientific and technological literature. The horizontal and vertical dimensions cross to construct the network structure of the ALBP: any horizontal problem intersects with any step of the vertical “seven steps for a scientific paper” to form a research point. We analyze the 89 articles along the development path from straight-line to U-shaped and then to two-sided U-shaped/parallel U-shaped assembly lines, summarize the algorithms used for assembly line balancing, count the numbers of articles, and point out the latest research directions and algorithm development trends in assembly line balancing.
{"title":"Assembly line balance research methods, literature and development review","authors":"Yu-ling Jiao, Han Jin, Xiao-cui Xing, Ming-juan Li, Xinran Liu","doi":"10.1177/1063293X20987910","DOIUrl":"https://doi.org/10.1177/1063293X20987910","url":null,"abstract":"With the continuous upgrading of the manufacturing system, the assembly line balancing problem (ALBP) is gradually complicated, and the researches are constantly deepened in the application theory and solution methods. In order to clarify the research direction and development status of assembly line balancing, 89 articles are read and studied. We classify ALBPs to construct the network structure of research from horizontal classification and vertical thinking. The ALBP framework is horizontally given according to the number of models (i.e. the number of products), the layout shape of the assembly line, and the data of task time. The “seven steps for scientific paper” is vertically proposed according to the research steps to comb the research path of scientific and technological literature. The horizontal and vertical extension crosses and constructs the network structure of the ALBP. Any horizontal problem intersects with any step of the vertical “seven steps for scientific paper” to form a research point. We analyze 89 articles according to the development path from the straight line to U-shaped line and then to two-sided U-shaped/parallel U-shaped assembly line, summarize the research algorithm of assembly line balance and count the number of articles, and point out the latest research direction and algorithm development trend of assembly line balance.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"3 1","pages":"183 - 194"},"PeriodicalIF":0.0,"publicationDate":"2021-05-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83388180","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An efficient approach for brain tumor detection and segmentation in MR brain images using random forest classifier
Pub Date: 2021-04-27 | DOI: 10.1177/1063293X211010542
Meenal Thayumanavan, Asokan Ramasamy
Nowadays, one of the most demanding and time-consuming tasks in medical image processing is brain tumor segmentation and detection. Magnetic resonance imaging (MRI) is employed to create a picture of any part of the body and provides a quick, effective means of analyzing tumors in the brain. The proposed framework contains several stages for classifying tumors: preprocessing, feature extraction, classification, and segmentation. Initially, T1-weighted magnetic resonance brain images are taken as input for computation. A median filter is proposed to optimize skull stripping in the MRI images; abnormal brain tissue can then be extracted even at low contrast, and the edges of the affected tissue can be located precisely. The discrete wavelet transform (DWT) and histogram of oriented gradients (HOG) then perform feature extraction, with HOG used to extract features such as texture and shape. Classification is performed with machine learning techniques: the random forest classifier (RFC), support vector machine (SVM), and decision tree (DT). These classifiers label each brain image as either normal or abnormal, and performance is analyzed with parameters such as sensitivity, specificity, and accuracy.
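The stages listed above map naturally onto common Python imaging and machine learning libraries. The sketch below is only an illustration of that pipeline under stated assumptions (synthetic stand-in images, arbitrary HOG/DWT settings, default classifier parameters), not the authors' implementation, and it omits the segmentation stage.

```python
# Illustrative sketch: median filtering, DWT + HOG features, and classification
# with RF, SVM, and decision tree, reporting accuracy, sensitivity, specificity.
import numpy as np
import pywt
from scipy.ndimage import median_filter
from skimage.feature import hog
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def extract_features(img: np.ndarray) -> np.ndarray:
    img = median_filter(img, size=3)                      # preprocessing / denoising
    cA, (cH, cV, cD) = pywt.dwt2(img, "haar")             # DWT sub-band statistics
    dwt_feats = [c.mean() for c in (cA, cH, cV, cD)] + [c.std() for c in (cA, cH, cV, cD)]
    hog_feats = hog(img, pixels_per_cell=(16, 16), cells_per_block=(1, 1))  # texture/shape
    return np.concatenate([dwt_feats, hog_feats])

rng = np.random.default_rng(1)
images = rng.random((40, 64, 64))               # placeholder stand-ins for T1-weighted MR slices
labels = np.array([0] * 20 + [1] * 20)          # 0 = normal, 1 = abnormal

X = np.array([extract_features(im) for im in images])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.25, stratify=labels, random_state=1)

for clf in (RandomForestClassifier(random_state=1), SVC(), DecisionTreeClassifier(random_state=1)):
    clf.fit(X_tr, y_tr)
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    acc = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn) if (tp + fn) else 0.0
    spec = tn / (tn + fp) if (tn + fp) else 0.0
    print(type(clf).__name__, f"acc={acc:.2f} sens={sens:.2f} spec={spec:.2f}")
```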
{"title":"An efficient approach for brain tumor detection and segmentation in MR brain images using random forest classifier","authors":"Meenal Thayumanavan, Asokan Ramasamy","doi":"10.1177/1063293X211010542","DOIUrl":"https://doi.org/10.1177/1063293X211010542","url":null,"abstract":"Nowadays, the most demanding and time consuming task in medical image processing is Brain tumor segmentation and detection. Magnetic Resonance Imaging (MRI) is employed for creating a picture of any part in a body. MRI provides a competent quick manner for analyzing tumor in the brain. This proposed framework contains different stages for classifying tumor like Preprocessing, Feature extraction, Classification, and Segmentation. Initially, T1-weighted magnetic resonance brain images are considered as an input for computational purpose. Median filter is proposed to optimize the skull stripping in MRI images. Abnormal brain tissues are extracted in low contrast, in addition to meticulous location of edges of affected tissue can be detected. Then, Discrete Wavelet Transform (DWT) and Histogram of Oriented Gradients (HOG) are performing feature extraction process. HOG is used for extracting the features like texture and shape. Then, Classification is performed through Machine learning categorization techniques via Random Forest Classifier (RFC), Support Vector Machine (SVM), and Decision Tree (DT). These classifiers classify the brain image as either normal or abnormal and the performance is analyzed by various parameters such as sensitivity, specificity and accuracy.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"14 1","pages":"266 - 274"},"PeriodicalIF":0.0,"publicationDate":"2021-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88008271","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Clustering product development project organization based on trust and core teams
Pub Date: 2021-04-20 | DOI: 10.1177/1063293X211005038
Na Yang, Qing Yang, Tao Yao
A new product development (PD) project is a complex network of communications involving trust relationships among teams. Trust is prominent, indirectly influencing team positions and organizational performance. To manage the coordination complexity in PD projects, in this paper we build a model of mutual trust among teams and further identify core teams to optimize the PD organizational network structure. First, we identify the technical interdependency and emotional closeness that influence the transmission behavior of tie strength in the PD organizational network. Then, we examine how the presence of a common third party in the organizational network affects trust transferability between interdependent teams, and we model structural similarity based on that trust transferability. To identify the core teams, which typically have high importance as well as diverse knowledge in the organizational network, we improve LeaderRank centrality with trust transferability related to common recipients/sources, both to evaluate the importance of teams and to capture the diversity of team attributes (i.e. expertise). To build groups around the core teams, we use the core teams as the input parameter (i.e. the initial clustering seeds) of the K-means clustering algorithm. The clustering results reinforce several managerial practices in an industrial example, including how trust transferability impacts the optimal organizational network structure and how to build an organizational network structure around core teams.
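A toy version of the "score teams on the trust network, pick the top-scoring ones as core teams, and seed k-means with them" procedure is sketched below. It is an assumption-heavy illustration: PageRank is used as a simple stand-in for the paper's improved LeaderRank, and the teams, trust weights, expertise attributes, and k are invented.

```python
# Illustrative sketch: centrality on a trust network selects core teams, which
# then seed the k-means clustering of team attribute (expertise) vectors.
import networkx as nx
import numpy as np
from sklearn.cluster import KMeans

teams = ["design", "electronics", "software", "testing", "styling", "manufacturing"]
trust_edges = [  # (from, to, trust strength) -- hypothetical values
    ("design", "electronics", 0.8), ("electronics", "design", 0.6),
    ("software", "electronics", 0.7), ("testing", "software", 0.5),
    ("styling", "design", 0.4), ("manufacturing", "design", 0.9),
]
G = nx.DiGraph()
G.add_nodes_from(teams)
G.add_weighted_edges_from(trust_edges)

# Attribute (expertise) vectors per team -- hypothetical two-dimensional example.
attributes = np.array([[0.9, 0.2], [0.7, 0.6], [0.3, 0.9], [0.4, 0.7], [0.8, 0.1], [0.6, 0.3]])

k = 2
centrality = nx.pagerank(G, weight="weight")                     # stand-in for improved LeaderRank
core = sorted(teams, key=lambda t: -centrality[t])[:k]           # core teams
seeds = attributes[[teams.index(t) for t in core]]

labels = KMeans(n_clusters=k, init=seeds, n_init=1, random_state=0).fit_predict(attributes)
for team, group in zip(teams, labels):
    print(team, "-> group", group)
```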
{"title":"Clustering product development project organization based on trust and core teams","authors":"Na Yang, Qing Yang, Tao Yao","doi":"10.1177/1063293X211005038","DOIUrl":"https://doi.org/10.1177/1063293X211005038","url":null,"abstract":"The new product development (PD) project is a complex network of communications involving trust relationships among teams. Trust is prominent, influencing the team positions and organizational performance indirectly. To manage the coordination complexity in PD projects, in this paper, we build a model of mutual trust among teams and further identify core teams to optimize the PD organizational network structure. First, we identified the technical interdependency and emotional closeness that influence the transmission behavior of the tie strength in the PD organizational network. Then, we examined how the presence of a common third party in the organizational network affects the trust transferability between interdependent teams. We modelled the structural similarity based on the trust transferability. To identify the core teams, which typically have high importance as well as diverse knowledge in the organizational network, we improved the LeaderRank centrality with trust transferability related to common recipients/sources to evaluate the importance of teams and present the team attributes (i.e. expertise) diversity. To build the group around core teams, we used the core teams as the input parameter (i.e. the initial clustering seeds) of the K-means clustering algorithms. The clustering results reinforce several managerial practices in an industrial example, including how trust transferability impacts the optimal organizational network structure and how to build an organizational network structure based on core teams.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"10 1","pages":"328 - 342"},"PeriodicalIF":0.0,"publicationDate":"2021-04-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81818776","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cross domain modularization tool: Mechanics, electronics, and software
Pub Date: 2021-04-16 | DOI: 10.1177/1063293X211000331
Christoffer Askhøj, Carsten Keinicke Fjord Christensen, N. Mortensen
Modularization is a strategy for meeting the demand for external complexity with less internal complexity, which leads to higher profits and more efficient product development processes. However, modularity is often driven in silos and does not cross between the engineering fields of mechanics, electronics, and software. Therefore, we present the MESA (Mechanics, Electronics, and Software Architecture) tool, which can be used to visualize modular product architectures across mechanics, electronics, and software. The tool shows how a change in one domain affects the others and how well aligned the modularity in the different domains is. It has been tested in two case companies, where it helped provide information for key design decisions in the development of new product families.
{"title":"Cross domain modularization tool: Mechanics, electronics, and software","authors":"Christoffer Askhøj, Carsten Keinicke Fjord Christensen, N. Mortensen","doi":"10.1177/1063293X211000331","DOIUrl":"https://doi.org/10.1177/1063293X211000331","url":null,"abstract":"Modularization is a strategy used for handling the demand for external complexity with less internal complexity, which leads to higher profits and more efficient product development processes. However, modularity is often driven in silos, not crossing into the engineering fields of mechanics, electronics, and software. Therefore, we present the MESA (Mechanics, Electronics, and Software Architecture) tool—a tool that can be used to visualize modular product architectures across mechanics, electronics, and software. The tool demonstrates how a change in one domain affects the rest and how well aligned the modularity in the different domains is. The tool has been tested in two case companies that were used for case application and has helped provide information for making key design decisions in the development of new product families.","PeriodicalId":10680,"journal":{"name":"Concurrent Engineering","volume":"6 1","pages":"221 - 235"},"PeriodicalIF":0.0,"publicationDate":"2021-04-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85707346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}