The metamer mismatch volume has important applications in color correction, camera design, and light source design. The existing spherical-sampling method for computing the metamer mismatch volume suffers from long computation times, many duplicated boundary points, too few effective vertices, and a metamer set whose apparent dimension falls below the theoretical dimension. In this paper, we propose a high-dimensional spherical sampling method that samples the metamer set directly and finds all boundary points by selecting direction vectors and polarizing every direction. Experimental results show that our method addresses these problems: it computes faster, produces results close to those of the original method, greatly reduces the repetition rate of boundary points, and yields a metamer set whose actual dimension matches the theoretical dimension.
{"title":"Metamer mismatch volume calculation method based on high-dimensional spherical sampling","authors":"Kuan Xu, Long Ma, Peng Li","doi":"10.1117/12.2687942","DOIUrl":"https://doi.org/10.1117/12.2687942","url":null,"abstract":"The metamer mismatch volume has important applications in color correction, camera design, and light source design. The method based on spherical sampling to calculate the metamer mismatch volume has a long computation time, a large number of duplicated boundary points, too few effective vertices, and the dimension of its metamer set will appear lower than the theoretical dimension. In this paper, we propose a high-dimensional spherical sampling method that samples the metamer set directly, and find all boundary points by selecting direction vectors and polarizing all directions. The experimental results show that our method improves the above problems, the computational speed is faster, the computational results are close, the repetition rate of boundary points is greatly reduced, and the actual dimensionality of the corresponding metamer set is consistent with the theoretical dimensionality.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278503 - 1278503-9"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44355698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The spectral reflectance underlying multispectral images provides valuable information about object characteristics. Reflectance reconstruction, however, requires that image acquisition use the same system calibration and illumination. To address this, Khan proposed the concept of multispectral constancy: transforming multispectral image data into a standard representation through a spectral adaptive transformation (SAT). Khan solved for the SAT with a linear mapping that converts multispectral data captured under an unknown illuminant into data under a standard light source. To further improve spectral utilization and broaden the application range of multispectral cameras, this paper proposes an algorithm that improves multispectral constancy using a color-difference (chromatic aberration) index: the SAT is solved with color difference as the objective function. Ten light sources serve as unknown illuminants, the SFU and X-rite datasets serve as training and testing data, and multispectral camera channels are simulated with equi-Gaussian and equi-energy filters of 5, 6, 8, and 10 channels. Color difference under the different light sources is used as the evaluation index, and the proposed algorithm is compared with Khan's method of computing the SAT for multispectral constancy. Experimental results show that the color-difference-based spectral constancy algorithm performs better and extends multispectral constancy to a wider variety of unknown light sources.
{"title":"Study on spectral adaptive transformation based on chromatic aberration","authors":"Long Ma, Haitang Chen","doi":"10.1117/12.2687939","DOIUrl":"https://doi.org/10.1117/12.2687939","url":null,"abstract":"The spectral reflectance of multispectral images can provide more valuable information about object characteristics. In order to improve the utilization of the spectrum, the reflectance reconstruction requires the same system calibration and illumination of the image acquisition. Therefore, Khan proposed the concept of multispectral constancy, which is to transform the multispectral image data into a standard representation through spectral adaptive transformation. Khan used the linear mapping method to solve SAT to convert the multispectral image data obtained under unknown illumination into the image data under standard light source. In order to further improve the spectral utilization rate and expand the application range of multispectral cameras, an algorithm to improve multispectral constancy based on chromatic aberration index is proposed in this paper. The algorithm uses chromatic aberration as the objective function to solve the spectral adaptive transformation. In this paper, ten light sources are used as unknown light sources, SFU and X-rite are used as training and testing datasets, and multispectral camera channels are simulated by Equi-Gaussian and Equi-Energy filters with different number of channels to train and test 5, 6, 8, and 10 channels of data. In this paper, the color difference under different light sources is used as the evaluation index to test the performance of the proposed algorithm, and compared with the Khan method for calculating SAT multispectral constancy. 
The experimental results show that the spectral constancy algorithm based on color difference can perform better, and expand the application of different kinds of unknown light sources in multispectral constancy.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278505 - 1278505-16"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46590887","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
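Khan's baseline linear-mapping SAT can be sketched in a few lines: collect camera responses for training reflectances under the unknown and the canonical illuminant, then fit the k x k map by least squares. Everything below is a toy stand-in (Gaussian sensitivities, a tilted illuminant), not the datasets or filters used in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, m = 31, 6, 200   # wavelengths, camera channels, training surfaces

# Toy stand-ins (assumptions): Gaussian channel sensitivities, a tilted
# unknown illuminant, a flat canonical illuminant, random reflectances
wl = np.linspace(0, 1, n)
S = np.stack([np.exp(-0.5 * ((wl - mu) / 0.08) ** 2)
              for mu in np.linspace(0.1, 0.9, k)])
E_unknown = 0.5 + wl
E_canon = np.ones(n)
R = rng.uniform(0, 1, (m, n))

X = (R * E_unknown) @ S.T   # responses under the unknown light, (m, k)
Y = (R * E_canon) @ S.T     # responses under the canonical light

# SAT as a k x k linear map: minimise ||X M - Y||_F by least squares
M, *_ = np.linalg.lstsq(X, Y, rcond=None)

err = np.linalg.norm(X @ M - Y) / np.linalg.norm(Y)
print(f"relative fit error: {err:.4f}")
```

The paper's contribution replaces this Frobenius-norm objective with a perceptual color-difference objective, which generally requires an iterative (non-closed-form) solver.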
This work aims to improve the color reproduction and realism of digital cameras and to advance computer vision. Camera colorimetry requires that the camera's spectral sensitivity be a linear transformation of the human visual system's color matching functions (the Luther condition). Previous methods placed carefully designed filters in front of the camera to produce a sensitivity that closely satisfies this condition. In this paper, we optimize the recent matching-illumination method (which uses a spectrally tunable illumination system to modulate the spectrum of a light source), improve the filter-design procedure, and add new constraints. Experiments demonstrate that the matching-illumination method with the new objective functions gives a 5% improvement over the original method, that optimizing the filter with a gradient-ascent algorithm and a genetic algorithm gives a 10% improvement in chromaticity over the original method, and that limiting the average transmittance also yields a 10% improvement over the previous approach. These methods make digital camera imaging more accurate and realistic.
{"title":"Improved color accuracy of the camera using optimized matching illumination method","authors":"Long Ma, Jing Chen","doi":"10.1117/12.2687940","DOIUrl":"https://doi.org/10.1117/12.2687940","url":null,"abstract":"To improve the color reproduction and realism of digital cameras and to promote the development of computer vision. Camera colorimetry is conditioned on the spectral sensitivity response of the camera being a linear transformation of the color matching function of the human visual system. Previous methods have proposed placing well-designed filters in front of the camera to produce a sensitivity that well matches the Luther condition. In this paper, we optimize the latest matching illumination method (by using a spectral-tunable illumination system to modulate the spectrum of certain light sources), improve the method of designing filters and add new constraints. Experiments demonstrate that the matching illumination method using new objective functions give a 5% improvement over the original method, and the optimization of the filter using a gradient ascent algorithm and a genetic algorithm gives a 10% improvement in chromaticity over the original method. The method of limiting the average transmittance also has a 10% improvement over the previous one. As a result, these methods can make the imaging of digital cameras more accurate and realistic.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278506 - 1278506-10"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46596809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Qing Li, L. Wei, Xin Qu, Kai Cheng, Yanbo Chang, Houle Zhou
With the rapid development of the transportation industry, railway transportation plays a crucial role. Manual inspection methods are time-consuming, labor-intensive, and highly subjective, so a more efficient and accurate flaw detection method is urgently needed. This system is a portable rail flaw detection device based on machine vision, with YOLOv5 as its core deep-learning algorithm. The system captures surface images of the rail through a camera and transmits them in real time to a host computer for analysis. Leveraging the real-time object detection capability of YOLOv5s, the system accurately identifies and locates various types of rail surface damage, such as cracks, fractures, and wear. Compared with traditional manual inspection, the system greatly improves the accuracy and efficiency of rail flaw detection, and its small size and portability make it practical and flexible across a wide range of environments and working conditions.
{"title":"Machine vision-based portable track inspection system","authors":"Qing Li, L. Wei, Xin Qu, Kai Cheng, Yanbo Chang, Houle Zhou","doi":"10.1117/12.2687936","DOIUrl":"https://doi.org/10.1117/12.2687936","url":null,"abstract":"With the rapid development of the transportation industry, railway transportation plays a crucial role. Manual inspection methods are time-consuming, labor-intensive, and highly subjective. Therefore, there is an urgent need for a more efficient and accurate flaw detection method. This system is a portable rail flaw detection device based on machine vision, with YOLOv5 as its core deep learning algorithm. The system captures surface images of the rail through a camera and transmits them in real-time to the host computer for analysis. Leveraging the powerful real-time object detection capability of YOLOv5s, the system can accurately identify and locate various types of rail surface damages, such as cracks, fractures, and wear. Compared to traditional manual inspection, this system is more efficient and greatly improves the accuracy and efficiency of rail flaw detection. It has a smaller size and is convenient to carry, making it suitable for working in various environments and conditions, greatly enhancing the practicality and flexibility of the device.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278502 - 1278502-8"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45148061","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Spectral Reconstruction (SR) algorithm attempts to recover hyperspectral information from RGB camera responses. This estimation problem is usually formulated as least-squares regression and, because the data are noisy, regularized with Tikhonov regularization, whose strength is controlled by a single penalty parameter. This paper improves the traditional cross-validation procedure for optimizing this parameter. It also proposes an improved SR model: unlike common SR models, our method divides the processed RGB space into varying numbers of neighborhoods and determines the center point of each. The RGB data and spectral data adjacent to each center point then serve as the input and output of a Radial Basis Function Network (RBFN) that learns an SR regression for each RGB neighborhood. Mean relative absolute error (MRAE) and root mean square error (RMSE) are used to evaluate performance. Comparison with other SR models shows that the proposed methods yield significant performance improvements.
{"title":"Optimization of RGB image spectral reconstruction based on radial basis function networks","authors":"Long Ma, Zhipeng Qian","doi":"10.1117/12.2687949","DOIUrl":"https://doi.org/10.1117/12.2687949","url":null,"abstract":"The Spectral Reconstruction (SR) algorithm attempts to recover hyperspectral information from RGB camera responses. This estimation problem is usually formulated as a least squares regression, and because the data is noisy, Tikhonov regularization is reconsidered. The degree of regularization is controlled by a single penalty parameter. This paper improves the traditional cross validation experiment method for the optimization of this parameter. In addition, this article also proposes an improved SR model. Unlike common SR models, our method divides the processed RGB space into different numbers of neighborhoods and determines the center point of each neighborhood. Finally, the adjacent RGB data and spectral data of each center point are used as input and output data for the Radial Basis Function Network (RBFN) model to train the SR regression of each RGB neighborhood. This article selects MRAE and RMSE to evaluate the performance of the SR algorithm. Through comparison with different SR models, the methods proposed in this article have significant performance improvements.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278507 - 1278507-11"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46538733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The choice of light source affects the accuracy of spectral sensitivity estimation. In this paper, we propose estimating the spectral sensitivity function of a digital camera using spectrally tunable LED light sources. The spectral power distribution of the LED light source is determined by a combination of multiple LEDs and their weight coefficients. The methods for tuning the LED weight coefficients include the Monte Carlo method and particle swarm optimization; the LED light source with the smallest estimation error is taken as the optimal light source. Experimental results show that particle swarm optimization gives the best estimates. Compared with estimation under a single light source (e.g., D65), the relative estimation error under the optimized LED light sources is significantly reduced.
{"title":"Camera spectral sensitivity estimation based on spectrally tunable LED illumination","authors":"Long Ma, Bowen Xu","doi":"10.1117/12.2687937","DOIUrl":"https://doi.org/10.1117/12.2687937","url":null,"abstract":"The choice of light source affects the accuracy of the spectral sensitivity estimation. In this paper, we propose to estimate the spectral sensitivity function of digital camera using spectrally tunable LED light sources. The spectral power distribution of the LED light source is determined by a combination of multiple LEDs and their weight coefficients. The method of tuning the weight coefficients of the LEDs includes Monte Carlo method and particle swarm optimization algorithm, so that the LED light source with the smallest estimation error is defined as the optimal light source. Experimental results show that the particle swarm algorithm gives the best estimation results. The relative error of estimation using LED light sources is significantly reduced when compared with the results when using a single light source for estimation (e.g., D65 light source).","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"12785 1","pages":"1278504 - 1278504-13"},"PeriodicalIF":0.0,"publicationDate":"2023-08-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44279416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Blockchain systems based on a reusable resource, such as proof-of-stake (PoS), provide weaker security guarantees than those based on proof-of-work. Specifically, they are vulnerable to long-range attacks, where an adversary corrupts prior participants in order to rewrite the full history of the chain. To prevent this attack on a PoS chain, we propose a protocol that checkpoints the state of the PoS chain to a proof-of-work blockchain such as Bitcoin; the checkpointing protocol therefore does not rely on any central authority. Our work uses Schnorr signatures and leverages Bitcoin's recent Taproot upgrade, allowing us to create a checkpointing transaction of constant size. We argue for the security of our protocol and present an open-source implementation that was tested on the Bitcoin testnet.
{"title":"Pikachu","authors":"Sarah Azouvi, M. Vukolic","doi":"10.1145/3560829.3563563","DOIUrl":"https://doi.org/10.1145/3560829.3563563","url":null,"abstract":"Blockchain systems based on a reusable resource, such as proof-of-stake (PoS), provide weaker security guarantees than those based on proof-of-work. Specifically, they are vulnerable to long-range attacks, where an adversary can corrupt prior participants in order to rewrite the full history of the chain. To prevent this attack on a PoS chain, we propose a protocol that checkpoints the state of the PoS chain to a proof-of-work blockchain such as Bitcoin. Our checkpointing protocol hence does not rely on any central authority. Our work uses Schnorr signatures and leverages Bitcoin recent Taproot upgrade, allowing us to create a checkpointing transaction of constant size. We argue for the security of our protocol and present an open-source implementation that was tested on the Bitcoin testnet.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"68 37","pages":""},"PeriodicalIF":0.0,"publicationDate":"2022-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72445261","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01 · DOI: 10.1109/HUSTProtools51951.2020.00014
B. Wylie
Performance measurement and analysis of parallel applications is often challenging, despite many excellent commercial and open-source tools being available. Currently envisaged exascale computer systems exacerbate matters by requiring extremely high scalability to effectively exploit millions of processor cores. Unfortunately, significant variability in application execution performance, arising from increasingly complex interactions between hardware and system software, makes this situation much more difficult for application developers and performance analysts alike. This work considers the performance assessment of HemeLB, the exascale flagship application code of the EU HPC Centre of Excellence (CoE) for Computational Biomedicine (CompBioMed), running on the SuperMUC-NG Tier-0 leadership system, using the methodology of the Performance Optimisation and Productivity (POP) CoE. Although 80% scaling efficiency is maintained to over 100,000 MPI processes, disappointing initial performance with more processes, and correspondingly poor strong scaling, was traced to the same few compute nodes across multiple runs; later system diagnostic checks found these nodes had faulty DIMMs and lacklustre performance. Excluding these compute nodes from subsequent runs improved the performance of executions with over 300,000 MPI processes by a factor of five, yielding a 190× speed-up compared to 864 MPI processes. While communication efficiency remains very good up to the largest scale, parallel efficiency is limited primarily by load balance, largely due to core-to-core and run-to-run variability from excessive memory-access stalls, which affect many HPC systems with Intel Xeon Scalable processors. The POP methodology for this performance diagnosis is demonstrated via a detailed exposition with widely deployed 'standard' measurement and analysis tools.
{"title":"Exascale potholes for HPC: Execution performance and variability analysis of the flagship application code HemeLB","authors":"B. Wylie","doi":"10.1109/HUSTProtools51951.2020.00014","DOIUrl":"https://doi.org/10.1109/HUSTProtools51951.2020.00014","url":null,"abstract":"Performance measurement and analysis of parallel applications is often challenging, despite many excellent commercial and open-source tools being available. Currently envisaged exascale computer systems exacerbate matters by requiring extremely high scalability to effectively exploit millions of processor cores. Unfortunately, significant application execution performance variability arising from increasingly complex interactions between hardware and system software makes this situation much more difficult for application developers and performance analysts alike. This work considers the performance assessment of the HemeLB exascale flagship application code from the EU HPC Centre of Excellence (CoE) for Computational Biomedicine (CompBioMed) running on the SuperMUC-NG Tier-0 leadership system, using the methodology of the Performance Optimisation and Productivity (POP) CoE. Although 80% scaling efficiency is maintained to over 100,000 MPI processes, disappointing initial performance with more processes and corresponding poor strong scaling was identified to originate from the same few compute nodes in multiple runs, which later system diagnostic checks found had faulty DIMMs and lacklustre performance. Excluding these compute nodes from subsequent runs improved performance of executions with over 300,000 MPI processes by a factor of five, resulting in 190 x speed-up compared to 864 MPI processes. 
While communication efficiency remains very good up to the largest scale, parallel efficiency is primarily limited by load balance found to be largely due to core-to-core and run-to-run variability from excessive stalls for memory accesses, that affect many HPC systems with Intel Xeon Scalable processors. The POP methodology for this performance diagnosis is demonstrated via a detailed exposition with widely deployed ‘standard’ measurement and analysis tools.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"85 1","pages":"59-70"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75823804","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
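The POP efficiency metrics referenced above decompose parallel efficiency into load balance and communication efficiency, computed from per-rank timings. A minimal sketch with made-up numbers for 8 ranks (real analyses use traces from tools such as Score-P and Scalasca):

```python
import numpy as np

# Per-rank timings (seconds) from a hypothetical trace: useful computation
# time vs. total time inside the parallel region, for 8 MPI ranks.
useful = np.array([9.2, 9.0, 9.4, 9.1, 6.5, 9.3, 9.2, 9.0])  # rank 4 lags
total = np.full(8, 10.0)

# POP model: PE = LB * CommE
load_balance = useful.mean() / useful.max()          # imbalance across ranks
communication_eff = useful.max() / total.max()       # time lost to comm/wait
parallel_eff = load_balance * communication_eff

print(f"LB={load_balance:.3f}  CommE={communication_eff:.3f}  "
      f"PE={parallel_eff:.3f}")
```

In the HemeLB case, it is the load-balance factor that degrades at scale (memory-stall variability on a few cores drags down `useful.mean()` relative to the slowest rank), while the communication factor stays high.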
Pub Date: 2020-11-01 · DOI: 10.1109/HUSTProtools51951.2020.00010
M. Pierce, S. Marru
This paper examines scenarios in which science gateways can facilitate access to cloud computing resources to support scientific research using regulated or protected data stored on clouds. Specifically, we discuss the use of science gateways to access Controlled Unclassified Information (CUI), a US regulatory standard that covers a broad range of US federal government-owned or regulated data and that also provides a useful proxy for other types of sensitive data, such as private-sector intellectual property. We focus on the impact of CUI requirements on science gateway platforms that can be used to create and manage science gateway instances. Gateway platforms are centrally operated by gateway platform providers, who create and control gateway instances on behalf of gateway providers. Broadly, platforms follow either a multi-tenant or a multi-instance pattern. Multi-tenant science gateway platforms support multiple science gateways simultaneously, with each gateway a tenant of a single operational instance of the platform middleware. Multi-instance platforms, by contrast, provide and manage an entire instance of the science gateway software for each gateway. This paper reviews these two scenarios from the perspective of the Science Gateways Platform as a service (SciGaP), a multi-tenant gateway platform based on the open-source Apache Airavata software. We examine the requirements for providing multi-tenant platforms for CUI gateways, as well as those for providing the same software as a multi-instance platform. In both cases, we assume the use of CUI-compatible resources from commercial cloud providers. Both approaches are technically feasible, but each has trade-offs that must be considered.
{"title":"Integrating Science Gateways with Secure Cloud Computing Resources: An Examination of Two Deployment Patterns and Their Requirements","authors":"M. Pierce, S. Marru","doi":"10.1109/HUSTProtools51951.2020.00010","DOIUrl":"https://doi.org/10.1109/HUSTProtools51951.2020.00010","url":null,"abstract":"This paper examines scenarios in which science gateways can facilitate access to cloud computing resources to support scientific research using regulated or protected data stored on clouds. Specifically, we discuss the use of science gateways to access Controlled Unclassified Information (CUI), a US regulatory standard that covers a broad range of US federal government-owned or regulated data, and that also provides a useful proxy for other types of sensitive data, such as private sector intellectual property. We focus on the impact of CUI requirements on science gateway platforms that can be used to create and manage science gateway instances. Gateway platforms are centrally operated by gateway platform providers who create and control gateway instances on behalf of gateway providers. Broadly, platforms operate following either a multi-tenant or else a multi-instance pattern. Multi-tenanted science gateway platforms are designed to support multiple science gateways simultaneously, with each gateway as a tenant to a single operational instance of the platform middleware. Multi-instance platforms, on the other hand, provide and manage an entire instance of the science gateway software for each gateway. This paper reviews these two scenarios from the perspective of the Science Gateways Platform as a service (SciGaP), a multitenanted gateway platform based on the open-source Apache Airavata software. We examine requirements for providing multitenanted platforms for CUI gateways and also the requirements for providing the same software as a multi-instance platform. In both cases, we assume the use of CUI-compatible resources from commercial cloud providers. 
Both approaches are technically feasible but have trade-offs that must be considered.","PeriodicalId":38836,"journal":{"name":"Meta: Avaliacao","volume":"15 1","pages":"19-26"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81986651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}