A Drone as a Reflector Carrier in Laser Tracker Measurements
M. Jankowski, M. Sieniło, A. Styk
Abstract The paper presents the possibility of mechanizing laser tracker measurements using a drone. Performing measurements with a laser tracker requires touching the measured surface with a probe. This is usually done manually, even when it requires, for example, climbing a ladder. Here, a drone was used as a probe carrier for the laser tracker. To measure a point, the modified drone had to land near it; contact between the probe and the measured surface was then made by a movable arm mounted on the drone. This solution allows laser tracker measurements to be performed without walking on or climbing difficult-to-access surfaces. Two consecutive experiments were performed to verify whether this approach is as accurate as the standard manual one. Additionally, the influence of the airflow generated by the drone's propellers on the laser wavelength, and hence on the accuracy of the interferometric measurements, was estimated. The research shows that it is possible to mechanize laser tracker measurements using a drone, and that the operating drone does not degrade the laser tracker's accuracy.
Measurement Science Review, vol. 22, pp. 269-274, 2022. DOI: 10.2478/msr-2022-0034
Research on Skeleton Data Compensation of Gymnastics based on Dynamic and Static Two-dimensional Regression using Kinect
Gang Zhao, Hui Zan, Junhong Chen
Abstract The intelligent training and assessment of gymnastics movements require studying motion trajectories and reconstructing the character animation. Microsoft Kinect has been widely used because of its low price and high frame rate. However, as an optical sensor it is inevitably affected by illumination and occlusion, so the data noise must be reduced with dedicated algorithms. Most existing research focuses on local motion and does not consider the whole human skeleton. Based on an analysis of the spatial characteristics of gymnastics and the movement principles of the human body, this paper proposes a dynamic and static two-dimensional regression compensation algorithm. Firstly, the constraint characteristics of human skeleton motion were analyzed, and a maximum constraint table and a Mesh Collider were established. Then, the dynamic acceleration of the skeleton motion and the spatial characteristics of the static limb motion were calculated from the valid skeleton frames immediately before and after the collision. Finally, least-squares polynomial fitting was used to compensate and correct the lost skeleton coordinate data, yielding smooth and plausible human skeleton animation. The results of two experiments showed that the reconstructed skeleton points solve the data-loss problem caused by Kinect optical occlusion. The compensation time for a blocked skeleton point can reach 180 ms, with an average error of about 0.1 mm, demonstrating good compensation performance for motion data acquisition and animation reconstruction.
Measurement Science Review, vol. 22, pp. 283-292, 2022. DOI: 10.2478/msr-2022-0036
An Investigation of Design and Simulation of Horizontal Axis Wind Turbine Using QBlade
A. Altmimi, Aya Aws, M. Jweeg, A. Abed, O. Abdullah
Abstract Blade design is essential to obtaining an effective wind turbine. In the field of wind energy, the design parameters affecting the turbine blades must be understood in order to obtain a successful design; however, most of these parameters depend on each other, which makes wind turbine design a challenging task. This paper uses the QBlade software to analyze and optimize the behavior of a small horizontal axis wind turbine. The software applies Blade Element Momentum (BEM) theory to study the turbine blades, calculating the drag and lift coefficients after dividing each blade into 10 segments of increasing radius. The twist angle and chord length of the blade were optimized for the highest performance. Among the various airfoil types, the SG-6041 airfoil was selected for the blade structure. The calculated power coefficient was almost 0.4, which is considered high given the 10 m/s average wind speed and the 1 m blade length. Since all the results were physically reasonable, the software was shown to be reliable. The paper also evaluates the wind characteristics at different locations in Iraq in order to identify the most promising sites.
Measurement Science Review, vol. 22, pp. 253-260, 2022. DOI: 10.2478/msr-2022-0032
ISO Linear Calibration and Measurement Uncertainty of the Result Obtained With the Calibrated Instrument
J. Palencár, R. Palencár, M. Chytil, G. Wimmer, G. Wimmer, V. Witkovský
Abstract We address the problem of linear comparative calibration, a special case of linear calibration in which both variables are measured with errors, and the analysis of the uncertainty of the measurement results obtained with the calibrated instrument. The concept is explained in detail using a pressure transducer calibration experiment and the subsequent analysis of the measurement uncertainties. In this context, the calibration and the measurements with the calibrated instrument are performed according to ISO Technical Specification 28037:2010 (here referred to as ISO linear calibration), based on the approximate linear calibration model and the application of the law of propagation of uncertainty (LPU) in this approximate model. Alternatively, estimates of the calibration line parameters, their standard uncertainties, the coverage intervals and the associated probability distributions are obtained using the Monte Carlo method (MCM) based on the law of propagation of distributions (LPD). Here we also obtain the probability distributions and the coverage interval for the quantities measured with the calibrated instrument. Furthermore, motivated by the model structure of this particular example, we conducted a simulation study that presents the empirical coverage probabilities of the ISO and MCM coverage intervals and investigates the influence of the sample size, i.e. the number of calibration points in the measurement range, and of different combinations of measurement uncertainties. The study generally confirms the good properties and validity of the ISO technical specification within the considered (limited) framework of experimental designs motivated by real-world applications, with small uncertainties relative to the measurement range. We also point out potential weaknesses of the method that require increased user attention and emphasise the need for further research in this area.
Measurement Science Review, vol. 22, pp. 293-307, 2022. DOI: 10.2478/msr-2022-0037
Dual-Energy Spectral Computed Tomography: Comparing True and Virtual Non Contrast Enhanced Images
K. Širůčková, P. Marcon, M. Dostál, A. Sirucková, P. Dohnal
Abstract Spectral computed tomography (CT) imaging is one of several image reconstruction techniques based on the use of dual-layer CT. The intensity and attenuation of the radiation are measured at different wavelengths, and this procedure yields complex three-dimensional (3D) imaging and (pseudo) color adjustment of the soft tissue. This paper compares true non-contrast (TNC) enhanced images with virtual non-contrast (VNC) enhanced ones. Virtual native images are acquired by means of spectral computed tomography, and it has been suggested that VNC images could substitute for real native images to significantly reduce the total radiation dose from multiphase spectral CT. The comparison was performed by defining parameters that represent the difference between the measured and the calculated values in the images: the mean value and standard deviation of the computed tomography number, the signal-to-noise ratio (SNR), and the contrast-to-noise ratio (CNR). All of these parameters were analyzed with statistical tests based on p-values. The results are interpreted and correlated with those presented by other authors, who, however, did not perform a comprehensive examination of five tissues simultaneously with a single device. Prospectively, if analogies were found between the two types of images, it would be possible to skip the TNC acquisition, thus markedly reducing the radiation dose for the patient.
Measurement Science Review, vol. 22, pp. 261-268, 2022. DOI: 10.2478/msr-2022-0033
On Modelling of Maximum Electromagnetic Field in Electrically Large Enclosures
Dan Chen, Peng Hu, Zhong Zhou, Xiang Zhou, Shouyang Zhai, Yan Chen
Abstract The maximum electromagnetic field formed in electrically large enclosures for a given input power has always been a central concern in electromagnetic compatibility issues such as radiation sensitivity and shielding effectiveness. To model the maxima in a simple manner, the electrically large enclosure is regarded as a reverberation chamber (RC), and a framework based on generalized extreme value (GEV) theory is used for both undermoded and overmoded frequencies. Since a mechanical stirrer cannot be installed as easily as in an RC, configurations based on frequency stirring and on mechanical stirring are discussed; the results confirm the validity of frequency stirring for estimating the parameters of the GEV distribution. For the maximum field itself, the GEV distribution is compared with IEC 61000-4-21, and the results show that the maximum field can be assessed in the frequency-stirring configuration, using the GEV distribution, with a desired confidence.
Measurement Science Review, vol. 22, pp. 225-230, 2022. DOI: 10.2478/msr-2022-0028
The Collection Efficiency of a Large Area PMT Based on the Coated MCPs
Xingchao Wang, Lin Chen, Qilong Wang, Jianli He, Li Tian, Jinshou Tian, Lingbin Shen, Yunji Wang
Abstract The electron collection efficiency (CE) of a photomultiplier tube based on microchannel plates (MCP-PMT) is limited by the MCP open area fraction. Coating the MCP with a material of high secondary electron yield is expected to be an effective way to improve the CE. Both a conventional and a coated MCP-PMT were developed, and a relative measurement method is proposed to characterize their collection efficiency. The results show that the PMT based on the coated MCPs achieves a significant improvement in CE, good gain uniformity, and high-precision energy resolution.
Measurement Science Review, vol. 22, pp. 241-245, 2022. DOI: 10.2478/msr-2022-0030
Automatic Detection of Chip Pin Defect in Semiconductor Assembly Using Vision Measurement
Shengfang Lu, Jian Zhang, Fei Hao, Liangbao Jiao
Abstract With the development of semiconductor assembly technology, the continuous demand for higher chip quality puts increasing pressure on the assembly manufacturing process. Chip pin defects have mostly been verified by manual inspection, which has low efficiency, high cost, and low reliability. In this paper, we propose a vision measurement method to detect chip pin defects such as pin warping and collapse, which strongly affect the quality of chip assembly. The task is performed by extracting corner features of the chip pins, computing the corresponding point pairs in the binocular image sequences, and reconstructing the target features of the chip. In the corner feature step, corner detection of the pins using gradient correlation matrices (GCM) and feature point extraction of the chip package body surface using the intersections of fitted lines are introduced. After the corresponding point pairs are obtained, the feature points are used to reconstruct the three-dimensional (3D) coordinates in the binocular vision measurement system, and the key geometric dimensions of the pins are computed, which indicate whether the pin quality meets the standard. The proposed method is evaluated on chip data, and its effectiveness is further verified by comparison experiments.
Measurement Science Review, vol. 22, pp. 231-240, 2022. DOI: 10.2478/msr-2022-0029
Importance Analysis of System Related Fault Based on Improved Decision-Making Trial and Evaluation Laboratory
Yan Xu, Guixiang Shen
Abstract Related faults between components make it difficult to analyze the importance of system components. How to quantify the influence of related faults and evaluate the importance of components is a topical research issue. In this paper, under the assumption that fault propagation follows a Markov process, the PageRank algorithm is integrated into the decision-making trial and evaluation laboratory (DEMATEL) method. On this basis, the calculation of the influencing and influenced degrees between components is studied to quantify the effect of related faults, and the problem of subjectively chosen weight coefficients in traditional DEMATEL is solved. The rationality of the approach is verified by combining the Interpretative Structural Modeling (ISM) method with the direct-relation matrix. The importance of system related faults is identified accurately from the calculated center degree and cause degree, and the central related faults of CNC machine tools are analyzed as an example to verify the effectiveness of the proposed method.
Measurement Science Review, vol. 22, pp. 214-224, 2022. DOI: 10.2478/msr-2022-0027
Determination of Dynamic Range of Stand-alone Shock Recorders
A. Stakhova, Yurii Kyrychuk, N. Nazarenko
Abstract In aircraft construction, shock tests are often performed when prototypes of new equipment are created, both on individual components and on the entire product. This requires introducing non-destructive testing devices into production, which is one of the most important factors in accelerating scientific and technological progress and in raising the quality and competitiveness of manufactured products. When modern means of non-destructive testing are applied, they must be protected from external vibrations, which affect the sensitivity, accuracy and reliability of high-precision measurements. In such cases, the measuring information during powerful vibration and shock tests is, as a rule, converted by piezoelectric acceleration sensors. For impact testing, however, stand-alone recorders need to be developed and used. The main requirements for these recorders are autonomy and operability on board the test product and synchronized registration of the shock load.
Measurement Science Review, vol. 22, pp. 208-213, 2022. DOI: 10.2478/msr-2022-0026