Nanosurveyor: a framework for real-time data processing
Pub Date: 2017-01-31 · DOI: 10.1186/s40679-017-0039-0
Benedikt J. Daurer, Hari Krishnan, Talita Perciano, Filipe R. N. C. Maia, David A. Shapiro, James A. Sethian, Stefano Marchesini

The ever-improving brightness of accelerator-based sources is enabling novel observations and discoveries with faster frame rates, larger fields of view, higher resolution, and higher dimensionality.

Here we present an integrated software/algorithmic framework designed to capitalize on high-throughput experiments through efficient kernels and load-balanced, scalable workflows. We describe the streamlined processing pipeline of ptychography data analysis. The pipeline provides high throughput, compression, and resolution, as well as rapid feedback to the microscope operators.
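The load-balanced streaming design the abstract describes can be illustrated with a toy fan-out/gather pipeline: frames enter a queue, workers each compute a per-frame result, and results are gathered in order. This is only a sketch of the general pattern, not Nanosurveyor's actual architecture or API; all names (`stream_pipeline`, the per-frame mean standing in for real analysis) are invented for illustration.

```python
import queue
import threading

import numpy as np


def stream_pipeline(frames, n_workers=2):
    """Fan frames out to worker threads, each producing a per-frame summary,
    then gather the results in frame order. A toy stand-in for a
    load-balanced streaming analysis pipeline."""
    in_q = queue.Queue()
    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            item = in_q.get()
            if item is None:  # poison pill: shut this worker down
                return
            i, frame = item
            summary = float(frame.mean())  # stand-in for real per-frame analysis
            with lock:
                results[i] = summary

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for t in threads:
        t.start()
    for i, frame in enumerate(frames):
        in_q.put((i, frame))
    for _ in threads:
        in_q.put(None)
    for t in threads:
        t.join()
    return [results[i] for i in range(len(frames))]
```

A real streaming framework would replace the in-process queue with network transport and the mean with reconstruction kernels, but the fan-out/gather shape is the same.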
Trace: a high-throughput tomographic reconstruction engine for large-scale datasets
Pub Date: 2017-01-28 · DOI: 10.1186/s40679-017-0040-7
Tekin Bicer, Doğa Gürsoy, Vincent De Andrade, Rajkumar Kettimuthu, William Scullin, Francesco De Carlo, Ian T. Foster

Modern synchrotron light sources and detectors produce data at such scale and complexity that large-scale computation is required to unleash their full power. One widely used imaging technique that generates data at tens of gigabytes per second is computed tomography (CT). Although CT experiments generate data rapidly, analyzing and reconstructing the collected data may require hours or even days of computation on a medium-sized workstation, which hinders the scientific progress that relies on the results of analysis.

We present Trace, a data-intensive computing engine that we have developed to enable high-performance implementations of iterative tomographic reconstruction algorithms on parallel computers. Trace provides fine-grained reconstruction of tomography datasets using both thread-level (shared-memory) and process-level (distributed-memory) parallelization. Trace uses a special data structure, the replicated reconstruction object, to maximize application performance. We also present the optimizations that we apply to replicated reconstruction objects and evaluate them using tomography datasets collected at the Advanced Photon Source.

Our experimental evaluations show that our optimizations and parallelization techniques provide a 158× speedup on 32 compute nodes (384 cores) over a single-core configuration and decrease the end-to-end processing time of a large sinogram (4501 × 1 × 22,400) from 12.5 h to less than 5 min per iteration.

The proposed tomographic reconstruction engine can efficiently process large-scale tomographic data on many compute nodes and minimize reconstruction times.
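The replicated-reconstruction-object idea can be understood as lock-free accumulation: each worker backprojects its share of the sinogram into a private copy of the reconstruction array, and the copies are then reduced by summation. Below is a minimal single-process sketch of that pattern using a toy nearest-neighbor backprojector; the function names and numerical details are mine, not Trace's API.

```python
import numpy as np


def backproject_partial(sino_rows, angles, size):
    """Accumulate a partial backprojection from a subset of sinogram rows
    into a private (replicated) reconstruction array -- no locking needed,
    because no other worker touches this copy."""
    recon = np.zeros((size, size))
    center = (size - 1) / 2.0
    ys, xs = np.mgrid[0:size, 0:size]
    for row, theta in zip(sino_rows, angles):
        # detector coordinate of each pixel for this projection angle
        t = (xs - center) * np.cos(theta) + (ys - center) * np.sin(theta) + center
        idx = np.clip(np.round(t).astype(int), 0, len(row) - 1)
        recon += row[idx]  # nearest-neighbor smear of the projection row
    return recon


def backproject_replicated(sinogram, angles, size, n_workers=4):
    """Split projection rows across workers, each writing its own replica,
    then reduce the replicas by summation (the gather step)."""
    chunks = np.array_split(np.arange(len(angles)), n_workers)
    replicas = [backproject_partial(sinogram[c], angles[c], size) for c in chunks]
    return sum(replicas)
```

In a real engine the per-worker loop would run on separate threads or MPI ranks; the point of the replicas is that the reduction happens once at the end instead of synchronizing on every pixel write.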
SYRMEP Tomo Project: a graphical user interface for customizing CT reconstruction workflows
Pub Date: 2017-01-19 · DOI: 10.1186/s40679-016-0036-8
Francesco Brun, Lorenzo Massimi, Michela Fratini, Diego Dreossi, Fulvio Billé, Agostino Accardo, Roberto Pugliese, Alessia Cedola

When acquiring experimental synchrotron radiation (SR) X-ray CT data, the reconstruction workflow cannot be limited to the essential computational steps of flat fielding and filtered back projection (FBP). More refined image processing is often required, usually to compensate for artifacts and enhance the quality of the reconstructed images. In principle, it would be desirable to optimize the reconstruction workflow at the facility during the experiment (beamtime). However, several practical factors affect the image reconstruction part of the experiment, and users are likely to conclude the beamtime with sub-optimal reconstructed images. Through an example application, this article presents SYRMEP Tomo Project (STP), an open-source software tool conceived to let users design custom CT reconstruction workflows. STP is designed for post-beamtime (off-line) use and for renewed reconstruction of archived data at the user's home institution, where modest computing resources are available. Releases of the software can be downloaded from the Elettra Scientific Computing group GitHub repository: https://github.com/ElettraSciComp/STP-Gui.
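The essential steps named in the abstract, flat fielding followed by FBP, start from a normalization every CT pipeline shares: divide out the beam profile using flat (beam-only) and dark (no-beam) reference images, then take the negative log to obtain line integrals. A minimal sketch of that first step follows; the function names are assumed for illustration and are not STP's API.

```python
import numpy as np


def flat_field(proj, flat, dark, eps=1e-6):
    """Flat-field correction: I_norm = (proj - dark) / (flat - dark).
    Removes the beam profile and fixed-pattern detector offsets."""
    num = proj.astype(float) - dark
    den = np.maximum(flat.astype(float) - dark, eps)  # guard against divide-by-zero
    return np.clip(num / den, 0.0, None)


def to_absorption(norm):
    """Beer-Lambert: the line integrals fed to FBP are the negative log
    of the normalized transmitted intensity."""
    return -np.log(np.maximum(norm, 1e-6))
```

Everything beyond this point (ring removal, phase retrieval, the FBP filter choice) is exactly the kind of refinement STP lets users add to the workflow.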
Efficient implementation of a local tomography reconstruction algorithm
Pub Date: 2017-01-19 · DOI: 10.1186/s40679-017-0038-1
Pierre Paleo, Alessandro Mirone

We propose an efficient implementation of an interior tomography reconstruction method based on a known subregion. The method iteratively refines a reconstruction, aiming to reduce local tomography artifacts. To cope with ever-increasing data volumes, the method is highly optimized in two respects: first, the problem is reformulated to reduce the number of variables; second, the operators involved in the optimization algorithms are implemented efficiently. Results show that 4096 × 4096 slices can be processed in tens of seconds, a scale beyond the reach of equivalent exact local tomography methods.
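The local tomography artifacts the method targets come from truncated sinograms: the detector sees only rays through the region of interest, so naive FBP produces a low-frequency cupping bias. A classic cheap baseline (not the authors' iterative known-subregion method, which this sketch does not attempt) is to extrapolate each detector row before filtering, for example by replicating the edge value and tapering it to zero:

```python
import numpy as np


def pad_truncated_sinogram(sino, pad):
    """Extend each truncated detector row of shape (n_angles, n_det):
    replicate the edge value and taper it smoothly to zero with a
    half-cosine window, damping the cupping artifact of naive local FBP."""
    taper = 0.5 * (1.0 + np.cos(np.linspace(0.0, np.pi, pad)))  # 1 -> 0
    left = sino[:, :1] * taper[::-1]   # ramps 0 -> left edge value
    right = sino[:, -1:] * taper       # ramps right edge value -> 0
    return np.concatenate([left, sino, right], axis=1)
```

Extrapolation like this only mitigates the bias; recovering quantitatively exact interior values is what requires the extra prior knowledge (the known subregion) exploited by the paper's method.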
Data systems for the Linac coherent light source
Pub Date: 2017-01-14 · DOI: 10.1186/s40679-016-0037-7
J. Thayer, D. Damiani, C. Ford, M. Dubrovin, I. Gaponenko, C. P. O’Grady, W. Kroeger, J. Pines, T. J. Lane, A. Salnikov, D. Schneider, T. Tookey, M. Weaver, C. H. Yoon, A. Perazzo

The data systems for X-ray free-electron laser (FEL) experiments at the Linac coherent light source (LCLS) are described. These systems are designed to acquire and reliably transport shot-by-shot data at a peak throughput of 5 GB/s to offline data storage, where experimental data and the relevant metadata are archived and made available for user analysis. The analysis and monitoring implementation (AMI) and Photon Science ANAlysis (psana) software packages are described. Psana is open source and freely available.
Towards on-the-fly data post-processing for real-time tomographic imaging at TOMCAT
Pub Date: 2017-01-03 · DOI: 10.1186/s40679-016-0035-9
Federica Marone, Alain Studer, Heiner Billich, Leonardo Sala, Marco Stampanoni

Sub-second full-field tomographic microscopy at third-generation synchrotron sources is a reality, opening up new possibilities for the study of dynamic systems in different fields. Sustained data rates of multiple GB/s in tomographic experiments will become even more common at diffraction-limited storage rings coming into operation soon. The computational tools for post-processing raw tomographic projections have generally not seen the same efficiency gains as the experimental facilities, hindering optimal exploitation of this new potential. We present here a fast, flexible, and user-friendly post-processing pipeline that overcomes this efficiency mismatch, delivering reconstructed tomographic datasets just a few seconds after the data have been acquired and enabling fast parameter and image-quality evaluation as well as efficient post-processing of terabytes of tomographic data. With this new tool, which can also accept a stream of data directly from a detector, a few selected tomographic slices are available in less than half a second, providing advanced previewing capabilities and paving the way to new concepts for on-the-fly control of dynamic experiments.
Applying shot boundary detection for automated crystal growth analysis during in situ transmission electron microscope experiments
Pub Date: 2017-01-03 · DOI: 10.1186/s40679-016-0034-x
W. A. Moeglein, R. Griswold, B. L. Mehdi, N. D. Browning, J. Teuton

In situ scanning transmission electron microscopy is being developed for numerous applications in the study of nucleation and growth under electrochemical driving forces. For this type of experiment, a key requirement is to identify when nucleation initiates. Typically, identifying the moment that crystals begin to form is a manual process requiring the user to make an observation and respond accordingly (adjust focus, magnification, translate the stage, etc.). However, as the speed of the cameras used for these observations increases, the user's ability to “catch” the important initial stage of nucleation decreases, because more of the available information is concentrated in the first few milliseconds of the process. Here, we show that video shot boundary detection can automatically detect frames where a change in the image occurs. We show that this method can quickly and accurately identify points of change during crystal growth. This technique allows automated segmentation of a digital stream for further analysis and the assignment of time stamps for the initiation of processes that are independent of the user's ability to observe and react.
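One common form of shot boundary detection compares intensity histograms of consecutive frames and flags a boundary when the distance spikes. The sketch below shows that idea in its simplest form; it is a generic illustration with assumed names and a toy threshold, not necessarily the specific metric or parameters the authors use.

```python
import numpy as np


def hist_diff(a, b, bins=64):
    """L1 distance between normalized intensity histograms of two frames
    (pixel values assumed scaled to [0, 1])."""
    ha, _ = np.histogram(a, bins=bins, range=(0.0, 1.0), density=True)
    hb, _ = np.histogram(b, bins=bins, range=(0.0, 1.0), density=True)
    return float(np.abs(ha - hb).sum())


def detect_shot_boundaries(frames, threshold=5.0):
    """Return the indices i where frame i differs sharply from frame i-1,
    i.e. candidate moments such as nucleation onset."""
    return [i for i in range(1, len(frames))
            if hist_diff(frames[i - 1], frames[i]) > threshold]
```

Histogram comparison is deliberately insensitive to slow drift (focus, illumination) while still firing on abrupt structural change, which is what makes it usable as an automatic trigger on a fast camera stream.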