Pub Date: 2013-10-25 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.019
Y Tina Lee, Deogratias Kibira, Allison Barnard Feeney, Jennifer Marshall
Current ambulance designs are ergonomically inefficient and often unsafe for practical treatment response to medical emergencies. Thus, the patient compartment of a moving ambulance is a hazardous working environment. As a consequence, emergency medical services (EMS) workers suffer fatalities and injuries that far exceed those of the average workplace in the United States. To reduce injury and mortality rates in ambulances, the Department of Homeland Security Science and Technology Directorate has teamed with the National Institute of Standards and Technology, the National Institute for Occupational Safety and Health, and BMT Designers & Planners in a joint project to produce science-based ambulance patient compartment design standards. This project will develop new crash-safety design standards and improved user-design interface guidance for patient compartments that are safer for EMS personnel and patients and that facilitate improved patient care. The project team has been working with practitioners, EMS workers' organizations, and manufacturers to solicit needs and requirements. This paper presents an analysis of practitioners' concerns, needs, and requirements for improved designs, elicited through a web-based survey on ambulance design conducted by the National Institute of Standards and Technology. The paper introduces the survey, analyzes its results, and discusses recommendations for future ambulance patient compartment design.
Ambulance Design Survey 2011: A Summary Report. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 381-395.
Pub Date: 2013-08-26 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.016
Patrick J Abbott, Zeina J Kubarych
The SI unit of mass, the kilogram, is the only one of the seven SI base units still defined by a physical artifact. It will be redefined in terms of the Planck constant as soon as certain experimental conditions, based on recommendations of the Consultative Committee for Mass and Related Quantities (CCM), are met. To better reflect reality, the redefinition will likely be accompanied by an increase in the uncertainties that National Metrology Institutes (NMIs) pass on to customers via artifact dissemination, which could affect the reference standards used by secondary calibration laboratories if certain weight tolerances are adopted. This paper compares the legal metrology requirements for precision mass calibration laboratories after the kilogram is redefined with current capabilities based on the international prototype kilogram (IPK) realization of the kilogram.
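The tolerance-class question above comes down to a simple comparison: a calibration is only useful for verifying a weight class if its uncertainty is small relative to that class's maximum permissible error (mpe). A minimal sketch, using an assumed illustrative mpe of 0.5 mg (the OIML R 111 Class E1 value for a 1 kg weight) and an assumed one-third rule of thumb, neither of which is taken from this paper:

```python
# Illustrative check (values assumed, not from the paper): does a
# calibration uncertainty fit within a fraction of a weight class's
# maximum permissible error (mpe)?

def fits_tolerance(u_cal_mg, mpe_mg, fraction=1/3):
    """Return True if the uncertainty u_cal_mg is no more than the
    given fraction of the maximum permissible error mpe_mg."""
    return u_cal_mg <= fraction * mpe_mg

MPE_E1_1KG = 0.5  # mg, assumed OIML R 111 Class E1 value for 1 kg

print(fits_tolerance(0.10, MPE_E1_1KG))  # 0.10 mg vs. a ~0.167 mg limit: True
print(fits_tolerance(0.25, MPE_E1_1KG))  # 0.25 mg exceeds the limit: False
```

If dissemination uncertainties grow after redefinition, the first comparison can flip to the second, which is the scenario the paper examines.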
The New Kilogram Definition and its Implications for High-Precision Mass Tolerance Classes. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 353-358.
Pub Date: 2013-08-19 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.015
L F Goodrich, N Cheggour, T C Stauffer, B J Filla, X F Lu
We review variable-temperature, transport critical-current (Ic) measurements made on commercial superconductors over a range of critical currents from less than 0.1 A to about 1 kA. We have developed and used a number of systems to make these measurements over the last 15 years. Two exemplary variable-temperature systems with coil sample geometries are described: a probe that is variable-temperature only, and a probe that is both variable-temperature and variable-strain. The most significant challenge for these measurements is temperature stability, since large amounts of heat can be generated by the flow of high current through the resistive sample fixture. Therefore, a significant portion of this review focuses on reducing temperature errors to less than ±0.05 K in such measurements. A key feature of our system is a pre-regulator that converts a flow of liquid helium to gas and heats the gas to a temperature close to the target sample temperature. The pre-regulator is not in close proximity to the sample and is controlled independently of the sample temperature. This allows us to independently control the total cooling power, and thereby fine-tune the sample cooling power at any sample temperature. The same general temperature-control philosophy is used in all of our variable-temperature systems, but the addition of another variable, such as strain, forces compromises in design and results in some differences in operation and protocol. These aspects are analyzed to assess the extent to which the protocols for our systems might be generalized to other systems at other laboratories. Our approach to variable-temperature measurements is also placed in the general context of measurement-system design, and the perceived advantages and disadvantages of design choices are presented.
To verify the accuracy of the variable-temperature measurements, we compared critical-current values obtained on a specimen immersed in liquid helium ("liquid," or Ic,liq) at 5 K to those measured on the same specimen in flowing helium gas ("gas," or Ic,gas) at the same temperature. These comparisons indicate that the temperature control is effective over the superconducting wire length between the voltage taps, and this condition holds for all sample types investigated, including Nb-Ti, Nb3Sn, and MgB2 wires. The liquid/gas comparisons are used to study the variable-temperature measurement protocol necessary to obtain the "correct" critical current, which was assumed to be Ic,liq. We also calibrated the magnetoresistance effect of resistive thermometers for temperatures from 4 K to 35 K and magnetic fields from 0 T to 16 T. This calibration reduces systematic errors in the variable-temperature data, but it does not affect the liquid/gas comparison since the same thermometers are used in both cases.
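The liquid/gas consistency check described above reduces to a simple statistic: the relative difference of the gas-cooled value from the liquid-helium reference. A minimal sketch with invented current values (not the paper's data):

```python
# Illustrative sketch of the liquid/gas comparison: how far does a
# critical current measured in flowing gas deviate from the
# liquid-helium reference at the same temperature? Values assumed.

def relative_difference_percent(ic_gas, ic_liq):
    """Signed relative difference of Ic(gas) from the Ic(liq) reference, in %."""
    return 100.0 * (ic_gas - ic_liq) / ic_liq

ic_liq = 250.0  # A, assumed reference value at 5 K in liquid helium
ic_gas = 248.5  # A, assumed value at 5 K in flowing helium gas

diff = relative_difference_percent(ic_gas, ic_liq)
print(f"{diff:+.2f} %")  # -0.60 %
```

A near-zero difference indicates the gas-flow temperature control is holding the wire between the voltage taps at the target temperature.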
Kiloampere, Variable-Temperature, Critical-Current Measurements of High-Field Superconductors. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 301-352.
Pub Date: 2013-08-15 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.017
B E Zimmerman, L Pibida, L E King, D E Bergeron, J T Cessna, M M Mille
The International Atomic Energy Agency (IAEA) has organized an international comparison to assess Single Photon Emission Computed Tomography (SPECT) image quantification capabilities in 12 countries. Iodine-131 was chosen as the radionuclide for the comparison because of its wide use around the world, but for logistical reasons solid (133)Ba sources were used as a long-lived surrogate for (131)I. For this study, we designed a set of solid cylindrical sources so that each site could have a set of phantoms (having nominal volumes of 2 mL, 4 mL, 6 mL, and 23 mL) with traceable activity calibrations, so that the results could be properly compared. We also developed a technique using two different detection methods for individually calibrating the sources for (133)Ba activity based on a national standard. This methodology allows for the activity calibration of each (133)Ba source with a standard uncertainty on the activity of 1.4 % for the high-level 2-, 4-, and 6-mL sources and 1.7 % for the lower-level 23 mL cylinders. This level of uncertainty allows these sources to be used for the intended comparison exercise, as well as in other SPECT image quantification studies.
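Combined standard uncertainties like the 1.4 % and 1.7 % figures quoted above are conventionally obtained by adding independent components in quadrature (root-sum-of-squares), per the GUM. A minimal sketch with assumed component magnitudes (the paper's actual uncertainty budget is not reproduced here):

```python
import math

# Illustrative GUM-style combination of independent relative
# uncertainty components in quadrature. Component values are assumed,
# not taken from the paper's uncertainty budget.

def combined_standard_uncertainty(components_percent):
    """Root-sum-of-squares of independent relative uncertainty components."""
    return math.sqrt(sum(u * u for u in components_percent))

components = [1.1, 0.7, 0.5, 0.3]  # %, assumed component magnitudes
print(f"{combined_standard_uncertainty(components):.2f} %")  # 1.43 %
```

Note how the largest component dominates: halving the 0.3 % term barely changes the total, which is why uncertainty budgets focus effort on the biggest contributors.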
Calibration of Traceable Solid Mock (131)I Phantoms Used in an International SPECT Image Quantification Comparison. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 359-374.
Pub Date: 2013-08-15 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.018
Dylan A Heberle, Zachary H Levine
We propose a method to extend the frequency range of polarization entanglement in periodically poled rubidium-doped potassium titanyl phosphate (Rb:KTP) waveguides. Our calculations predict that output wavelengths from 1130 nm to 1257 nm may be achieved in Rb:KTP by appropriate selection of the waveguide's propagation direction. The fidelity using a poling period of 1 mm is approximately 0.98.
Polarization-Entangled Photon Pairs From Periodically-Poled Crystalline Waveguides Over a Range of Frequencies. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 375-380.
Pub Date: 2013-06-06 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.013
Marek Franaszek
When two six degrees of freedom (6DOF) datasets are registered, a transformation is sought that minimizes the misalignment between the two datasets. Commonly, the measure of misalignment is the sum of the positional and rotational components. This measure has a dimensional mismatch between the positional component (unbounded and having length units) and the rotational component (bounded and dimensionless). The mismatch can be formally corrected by dividing the positional component by some scale factor with units of length. However, the scale factor is set arbitrarily and, depending on its value, more or less importance is associated with the positional component relative to the rotational component. This may result in a poorer registration. In this paper, a new method is introduced that uses the same form of bounded, dimensionless measure of misalignment for both components. Numerical simulations with a wide range of variances of positional and rotational noise show that the transformation obtained by this method is very close to ground truth. Additionally, knowledge of the contribution of noise to the misalignment from individual components enables the formulation of a rational method to handle noise in 6DOF data.
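The dimensional mismatch described above can be made concrete with the conventional mixed cost that the paper criticizes (this sketch shows the problem, not the paper's proposed bounded measure): position error must be divided by some arbitrary length scale L before it can be added to a squared rotation angle, and the choice of L silently decides which component dominates.

```python
# Sketch of the dimensional-mismatch problem in 6DOF registration
# (the conventional measure, not the paper's new one): squared
# position error, scaled by an arbitrary length L, plus squared
# rotation angle. All numeric values below are assumed examples.

def mixed_cost(pos_err_mm, rot_err_rad, scale_mm):
    """Conventional mixed misalignment measure: (|dp|/L)^2 + dtheta^2."""
    return (pos_err_mm / scale_mm) ** 2 + rot_err_rad ** 2

# The same 2 mm / 0.01 rad misalignment is weighted very differently
# depending only on the arbitrary choice of L:
print(mixed_cost(2.0, 0.01, scale_mm=1.0))     # position term dominates
print(mixed_cost(2.0, 0.01, scale_mm=1000.0))  # rotation term dominates
```

Because the minimizer of this cost shifts with L, so does the fitted transformation; replacing both terms with the same bounded, dimensionless form removes that arbitrariness, which is the motivation for the method above.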
Registration of Six Degrees of Freedom Data with Proper Handling of Positional and Rotational Noise. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 280-291.
Pub Date: 2013-05-28 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.014
L Pibida, M Mille, B Norman
Several measurements and calculations were performed to illustrate the differences that can be observed in the determination of exposure rate or ambient dose equivalent rate used for testing radiation detection systems against consensus standards. The large variations observed support our recommendation that better consistency in the test radiation fields can be achieved by specifying the source activity and testing distance instead of the field strength.
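The recommendation above rests on the fact that, for a point source, activity and distance determine the field through the inverse-square law, so specifying them removes ambiguity that a quoted field strength alone leaves open. A minimal sketch; the gamma-ray exposure-rate constant used is an assumed nominal value for Cs-137, not a figure from this paper:

```python
# Illustrative point-source field calculation: exposure rate
# X = Gamma * A / d^2. The constant below is an assumed nominal
# value for Cs-137, for illustration only.

GAMMA_CS137 = 0.33  # R*m^2/(h*Ci), assumed nominal exposure-rate constant

def exposure_rate_R_per_h(activity_ci, distance_m):
    """Point-source exposure rate from source activity and distance."""
    return GAMMA_CS137 * activity_ci / distance_m ** 2

# A 1 mCi Cs-137 source at 1 m:
print(f"{exposure_rate_R_per_h(0.001, 1.0) * 1000:.2f} mR/h")  # 0.33 mR/h
```

Two laboratories quoting "the same" field strength can still disagree through scatter, geometry, and instrument response; quoting activity and distance instead lets each site reproduce the identical source configuration.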
Recommendations for Improving Consistency in the Radiation Fields Used During Testing of Radiation Detection Instruments for Homeland Security Applications. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 292-300.
Pub Date: 2013-05-22 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.012
David Flater, William F Guthrie
Programmers routinely omit run-time safety checks from applications because they assume that these safety checks would degrade performance. The simplest example is the use of arrays or array-like data structures that do not enforce the constraint that indices must be within bounds. This report documents an attempt to measure the performance penalty incurred by two different implementations of bounds-checking in C and C++ using a simple benchmark and a desktop PC with a modern superscalar CPU. The benchmark consisted of a loop that wrote to array elements in sequential order. With this configuration, relative to the best performance observed for any access method in C or C++, mean degradation of only (0.881 ± 0.009) % was measured for a standard bounds-checking access method in C++. This case study showed the need for further work to develop and refine measurement methods and to perform more comparisons of this type. Comparisons across different use cases, configurations, programming languages, and environments are needed to determine under what circumstances (if any) the performance advantage of unchecked access is actually sufficient to outweigh the negative consequences for security and software quality.
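The headline statistic above, a mean degradation with an uncertainty, is the comparison of checked access against the fastest unchecked baseline. A sketch of how such a figure is computed from repeated timing runs; the timing values here are invented for illustration and are not the paper's data:

```python
import statistics

# Illustrative computation of a "(mean ± uncertainty) %" degradation
# statistic: per-run slowdown of a bounds-checked access method
# relative to the fastest unchecked baseline. Timings are assumed.

def degradation_percent(checked_times, baseline):
    """Per-run slowdown of checked access vs. the best baseline time, in %."""
    return [100.0 * (t - baseline) / baseline for t in checked_times]

baseline_s = 1.000                        # fastest unchecked run, assumed
checked_s = [1.008, 1.009, 1.010, 1.007]  # checked-access runs, assumed

d = degradation_percent(checked_s, baseline_s)
mean = statistics.mean(d)
sem = statistics.stdev(d) / len(d) ** 0.5  # standard error of the mean
print(f"({mean:.3f} ± {sem:.3f}) %")       # (0.850 ± 0.065) %
```

With sub-percent means like this, the uncertainty term matters: it is what distinguishes a real (if small) penalty from timing noise, which is why the abstract reports both.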
A Case Study of Performance Degradation Attributable to Run-Time Bounds Checks on C++ Vector Access. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 260-279.
Pub Date: 2013-04-24 | eCollection Date: 2013-01-01 | DOI: 10.6028/jres.118.010
Alfred S Carasso
Identifying sources of ground water pollution, and deblurring nanoscale imagery as well as astronomical galaxy images, are two important applications involving numerical computation of parabolic equations backward in time. Surprisingly, very little is known about backward continuation in nonlinear parabolic equations. In this paper, an iterative procedure originating in spectroscopy in the 1930s is adapted into a useful tool for solving a wide class of 2D nonlinear backward parabolic equations. In addition, previously unsuspected difficulties are uncovered that may preclude useful backward continuation in parabolic equations deviating too strongly from the linear, autonomous, self-adjoint, canonical model. This paper explores backward continuation in selected 2D nonlinear equations by creating fictitious blurred images, obtained by using several sharp images as initial data in these equations and capturing the corresponding solutions at some positive time T. Successful backward continuation from t = T to t = 0 would recover the original sharp image. Visual recognition provides meaningful evaluation of the degree of success or failure in the reconstructed solutions. Instructive examples are developed, illustrating the unexpected influence of certain types of nonlinearities. Visually and statistically indistinguishable blurred images are presented, with vastly different deblurring results. These examples indicate that how an image is nonlinearly blurred is critical, in addition to the amount of blur. The equations studied represent nonlinear generalizations of Brownian motion, and the blurred images may be interpreted as visually expressing the results of novel stochastic processes.
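The hazard in backward continuation is easiest to see in the linear heat equation, the canonical model the abstract refers to: the forward solution damps the Fourier mode with wavenumber k by exp(-k²T), so naively inverting the blur multiplies any noise in that mode by exp(+k²T). A minimal one-mode sketch (this illustrates the ill-posedness only, not the paper's iterative procedure):

```python
import math

# Why backward heat continuation is ill-posed: noise in Fourier mode k
# is amplified by exp(k^2 * T) under naive inversion of the forward
# damping exp(-k^2 * T). Values of k and T below are assumed examples.

def backward_amplification(k, T):
    """Growth factor of mode-k noise under naive backward continuation."""
    return math.exp(k * k * T)

T = 0.01  # assumed blur time
for k in (1, 10, 30):
    print(k, f"{backward_amplification(k, T):.3g}")
```

Low modes survive inversion almost untouched while mode k = 30 already amplifies noise by nearly four orders of magnitude, which is why stabilized procedures like the one adapted in this paper are needed in place of direct inversion.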
"Hazardous Continuation Backward in Time in Nonlinear Parabolic Equations, and an Experiment in Deblurring Nonlinearly Blurred Imagery." Alfred S Carasso. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 199-217. Pub Date: 2013-04-24. DOI: 10.6028/jres.118.010
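The ill-posedness underlying this abstract can be demonstrated with the simplest member of the class it generalizes: the linear 2D heat equation. The sketch below is an editorial illustration only, not the paper's iterative procedure; the grid size, step counts, and explicit finite-difference scheme are all assumptions. It blurs a random "image" by stepping forward in time, then attempts naive backward continuation by negating the time step: forward smoothing is stable, while the backward run amplifies round-off noise without bound.

```python
import numpy as np

def diffuse(u, steps, dt=0.1):
    """Explicit finite differences for the 2D heat equation u_t = u_xx + u_yy
    on a periodic grid (spacing 1). dt > 0 blurs the image; dt < 0 attempts
    naive backward continuation."""
    for _ in range(steps):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
               + np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u = u + dt * lap
    return u

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))                   # stand-in for a sharp image
blurred = diffuse(sharp, steps=200)            # forward in time: stable smoothing
naive = diffuse(blurred, steps=200, dt=-0.1)   # backward in time: unstable
# Forward diffusion shrinks the image's dynamic range; the backward run does
# not recover `sharp` but instead amplifies floating-point round-off, since
# the highest grid mode grows by a factor of 1.8 at every backward step.
```

This is exactly why, as the abstract notes, backward continuation requires specialized regularized procedures rather than direct time reversal.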
Pub Date: 2013-04-23. eCollection Date: 2013-01-01. DOI: 10.6028/jres.118.011
Yooyoung Lee, Ross J Micheals, James J Filliben, P Jonathon Phillips
The performance of iris recognition systems is frequently affected by input image quality, which in turn is vulnerable to less-than-optimal conditions of illumination, environment, and subject characteristics (e.g., distance, movement, face/body visibility, and blinking). VASIR (Video-based Automatic System for Iris Recognition) is a state-of-the-art, NIST-developed iris recognition software platform designed to systematically address these vulnerabilities. We developed VASIR as a research tool that will not only provide a reference (to assess the relative performance of alternative algorithms) for the biometrics community, but will also advance (via this new emerging iris recognition paradigm) NIST's measurement mission. VASIR is designed to accommodate both ideal images (e.g., classical still images) and less-than-ideal images (e.g., face-visible videos). VASIR has three primary modules: 1) Image Acquisition, 2) Video Processing, and 3) Iris Recognition. Each module consists of several sub-components that have been optimized through rigorous orthogonal experiment design and analysis techniques. We evaluated VASIR's performance using the MBGC (Multiple Biometric Grand Challenge) NIR (near-infrared) face-visible video dataset and the ICE (Iris Challenge Evaluation) 2005 still-image dataset. The results showed that even though VASIR was primarily developed and optimized for the less-constrained video case, it still achieved high verification rates for the traditional still-image case. For this reason, VASIR may be used as an effective baseline for the biometrics community to evaluate algorithm performance, and it thus serves as a valuable research platform.
"VASIR: An Open-Source Research Platform for Advanced Iris Recognition Technologies." Yooyoung Lee, Ross J Micheals, James J Filliben, P Jonathon Phillips. Journal of Research of the National Institute of Standards and Technology, vol. 118, pp. 218-259. Pub Date: 2013-04-23. DOI: 10.6028/jres.118.011
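As a purely editorial sketch of the three-module structure the abstract lists, the architecture can be pictured as three composable stages wired into one pipeline. Every name and interface below is hypothetical; VASIR's actual APIs are not reproduced here, and the toy stage bodies only stand in for frame selection, eye-region localization, and matching.

```python
from dataclasses import dataclass
from typing import Any, Callable, List

@dataclass
class Pipeline:
    """Chain of processing stages; each stage's output feeds the next."""
    stages: List[Callable[[Any], Any]]

    def run(self, data: Any) -> Any:
        for stage in self.stages:
            data = stage(data)
        return data

def image_acquisition(frames):
    # 1) Image Acquisition: pick the best-quality frame from a video.
    return max(frames, key=lambda f: f["quality"])

def video_processing(frame):
    # 2) Video Processing: localize the eye region within the frame.
    return {"eye": frame["pixels"], "quality": frame["quality"]}

def iris_recognition(eye):
    # 3) Iris Recognition: produce a toy match score from the eye region.
    return 1.0 if eye["quality"] > 0.5 else 0.0

pipeline = Pipeline([image_acquisition, video_processing, iris_recognition])
frames = [{"pixels": "f0", "quality": 0.3}, {"pixels": "f1", "quality": 0.9}]
score = pipeline.run(frames)
print(score)
```

The staged design mirrors the abstract's point that each module's sub-components can be optimized independently (e.g., via designed experiments) without disturbing the rest of the chain.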