Advanced Exergy-Based Analysis of an Organic Rankine Cycle (ORC) for Waste Heat Recovery
Zineb Fergani, Tatiana Morosuk. Entropy 2023, 25(10). doi:10.3390/e25101475. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606046/pdf/

In this study, advanced exergy and exergoeconomic analyses are applied to an Organic Rankine Cycle (ORC) for waste heat recovery to identify the potential for thermodynamic and economic improvement of the system (splitting the decision variables into avoidable/unavoidable parts) and the interdependencies between the components (endogenous and exogenous parts). For the first time, the advanced analysis has been applied under different conditions: a constant heat rate supplied to the ORC or a constant power generated by the ORC. The system simulation was performed in MATLAB. The results show that the interactions among the components of the ORC system are not strong; therefore, a component-by-component optimization approach can be applied. The evaporator and condenser are the key components to improve from both thermodynamic and cost perspectives. The advanced exergoeconomic (graphical) optimization of these components indicates that the minimum temperature difference in the evaporator should be increased while the minimum temperature difference in the condenser should be decreased. The optimization results show that the exergetic efficiency of the ORC system can be improved from 27.1% to 27.7%, while the cost of the generated electricity decreases from 18.14 USD/GJ to 18.09 USD/GJ.
Gaussian and Lerch Models for Unimodal Time Series Forecasting
Azzouz Dermoune, Daoud Ounaissi, Yousri Slaoui. Entropy 2023, 25(10). doi:10.3390/e25101474. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606826/pdf/

We consider unimodal time series forecasting. We propose Gaussian and Lerch models for this forecasting problem. The Gaussian model depends on three parameters and the Lerch model on four. We estimate the unknown parameters by minimizing the sum of the absolute values of the residuals. We solve these minimizations with and without a weighted median and compare both approaches. As a numerical application, we consider the daily COVID-19 infections in China using the Gaussian and Lerch models. We derive a confidence interval for the daily infections from each local minimum.
New Construction of Asynchronous Channel Hopping Sequences in Cognitive Radio Networks
Yaoxuan Wang, Xianhua Niu, Chao Qi, Zhihang He, Bosen Zeng. Entropy 2023, 25(10). doi:10.3390/e25101473. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606140/pdf/

Channel-hopping-based rendezvous is essential to alleviate the under-utilization and scarcity of spectrum in cognitive radio networks. It dynamically allows unlicensed secondary users to schedule rendezvous channels using the assigned hopping sequence, guaranteeing the self-organization property within a limited time. In this paper, we use the interleaving technique to construct a set of asynchronous channel-hopping sequences (CHSs) consisting of d sequences of period xN² with flexible parameters, which can generate sequences of different lengths. Owing to this flexibility, the new CHSs can be adapted to the demands of various communication scenarios. Furthermore, we focus on the improved maximum time-to-rendezvous (MTTR) and maximum first time-to-rendezvous of the new construction compared with prior research at the same sequence length. The new CHSs ensure that rendezvous occurs between any two sequences and that the rendezvous times are random and unpredictable when using licensed channels under asynchronous access, although full degree-of-rendezvous is not satisfied. Our simulation results show that the new construction achieves a better and less predictable balance between the MTTR and the mean and variance of the time-to-rendezvous.
TURBO: The Swiss Knife of Auto-Encoders
Guillaume Quétant, Yury Belousov, Vitaliy Kinakh, Slava Voloshynovskiy. Entropy 2023, 25(10). doi:10.3390/e25101471. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606332/pdf/

We present a novel information-theoretic framework, termed TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of the information bottleneck and bottleneck-based networks in the auto-encoding setting and identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. The TURBO framework is then introduced, with a comprehensive derivation of its core concept: the maximisation of mutual information between various data representations, expressed in two directions that reflect the information flows. We illustrate that numerous prevalent neural network models are encompassed within this framework. The paper underscores the insufficiency of the information bottleneck concept in elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and the structure of neural network models, enabling more efficient and versatile applications.
Convolutional Models with Multi-Feature Fusion for Effective Link Prediction in Knowledge Graph Embedding
Qinglang Guo, Yong Liao, Zhe Li, Hui Lin, Shenglin Liang. Entropy 2023, 25(10). doi:10.3390/e25101472. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606879/pdf/

Link prediction remains a central task in knowledge graph embedding (KGE), aiming to uncover hidden or missing relationships within a given knowledge graph (KG). Despite its importance, contemporary methods face notable constraints, chiefly computational overhead and the difficulty of capturing multifaceted relationships. This paper introduces an approach that combines convolutional operators with relevant graph structural information. By integrating information about entities and their immediate relational neighbors, we enhance the performance of the convolutional model: the convolution acts on an embedding averaged across each entity and its neighboring nodes. Notably, our methodology also allows edge-specific data to be included in the convolutional model's input, giving users the latitude to calibrate the model's architecture and parameters to their specific dataset. Empirical evaluations show that our approach outperforms existing convolution-based link prediction baselines, particularly on the FB15k, WN18, and YAGO3-10 datasets. The primary objective of this research is to develop KGE link prediction methods with greater efficiency and capability, thereby addressing salient challenges in real-world applications.
Dynamic Semi-Supervised Federated Learning Fault Diagnosis Method Based on an Attention Mechanism
Shun Liu, Funa Zhou, Shanjie Tang, Xiong Hu, Chaoge Wang, Tianzhen Wang. Entropy 2023, 25(10). doi:10.3390/e25101470. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606357/pdf/

When a client has completely unlabeled data, unsupervised learning has difficulty achieving an accurate fault diagnosis. Semi-supervised federated learning, which allows interaction between labeled and unlabeled clients, has been developed to overcome this difficulty. However, existing semi-supervised federated learning methods may suffer from negative transfer because they fail to filter out unreliable model information from the unlabeled client. Therefore, in this study, a dynamic semi-supervised federated learning fault diagnosis method with an attention mechanism (SSFL-ATT) is proposed to prevent the federation model from experiencing negative transfer. A federation strategy driven by an attention mechanism was designed to filter out the unreliable information hidden in the local model. SSFL-ATT preserves the federation model's performance while enabling the unlabeled client to perform fault classification. When an unlabeled client is present, SSFL-ATT achieves fault diagnosis accuracy 9.06% and 12.53% higher than existing semi-supervised federated learning methods on verification datasets provided by Case Western Reserve University and Shanghai Maritime University, respectively.
Diffusion Probabilistic Modeling for Video Generation
Ruihan Yang, Prakhar Srivastava, Stephan Mandt. Entropy 2023, 25(10). doi:10.3390/e25101469. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606505/pdf/

Denoising diffusion probabilistic models are a promising new class of generative models that mark a milestone in high-quality image generation. This paper showcases their ability to sequentially generate video, surpassing prior methods in perceptual and probabilistic forecasting metrics. We propose an autoregressive, end-to-end optimized video diffusion model inspired by recent advances in neural video compression. The model successively generates future frames by correcting a deterministic next-frame prediction using a stochastic residual generated by an inverse diffusion process. We compare this approach against six baselines on four datasets involving natural and simulation-based videos. We find significant improvements in terms of perceptual quality and probabilistic frame forecasting ability for all datasets.
Variational Inference via Rényi Bound Optimization and Multiple-Source Adaptation
Dana Zalman Oshri, Shai Fine. Entropy 2023, 25(10). doi:10.3390/e25101468. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606691/pdf/

Variational inference provides a way to approximate probability densities through optimization. It does so by optimizing an upper or a lower bound on the likelihood of the observed data (the evidence). The classic variational inference approach maximizes the Evidence Lower Bound (ELBO). Recent studies proposed optimizing the variational Rényi bound (VR) and the χ upper bound. However, these estimates, which are based on the Monte Carlo (MC) approximation, either underestimate the bound or exhibit a high variance. In this work, we introduce a new upper bound, termed the Variational Rényi Log Upper bound (VRLU), which is based on the existing VR bound. In contrast to the existing VR bound, the MC approximation of the VRLU bound maintains the upper bound property. Furthermore, we devise a (sandwiched) upper-lower bound variational inference method, termed the Variational Rényi Sandwich (VRS), to jointly optimize the upper and lower bounds. We present a set of experiments designed to evaluate the new VRLU bound and to compare the VRS method with the classic Variational Autoencoder (VAE) and VR methods. Next, we apply the VRS approximation to the Multiple-Source Adaptation (MSA) problem. MSA is a real-world scenario where data are collected from multiple sources that differ from one another in their probability distribution over the input space. The main aim is to combine fairly accurate predictive models from these sources into an accurate model for new, mixed target domains; however, many domain adaptation methods assume prior knowledge of the data distribution in the source domains. Applying the suggested VRS density estimate to MSA, we show, both theoretically and empirically, that it provides tighter error bounds and improved performance compared to leading MSA methods.
Denoising Vanilla Autoencoder for RGB and GS Images with Gaussian Noise
Armando Adrián Miranda-González, Alberto Jorge Rosales-Silva, Dante Mújica-Vargas, Ponciano Jorge Escamilla-Ambrosio, Francisco Javier Gallegos-Funes, Jean Marie Vianney-Kinani, Erick Velázquez-Lozada, Luis Manuel Pérez-Hernández, Lucero Verónica Lozano-Vázquez. Entropy 2023, 25(10). doi:10.3390/e25101467. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606544/pdf/

Noise suppression algorithms are used in various tasks such as computer vision, industrial inspection, and video surveillance, among others. Robust image processing systems need to be fed with images close to the real scene; however, external factors sometimes alter the captured image data, which translates into a loss of information. Procedures are therefore required to recover information as close as possible to the real scene. This research project proposes a Denoising Vanilla Autoencoding (DVA) architecture based on unsupervised neural networks for Gaussian denoising in color and grayscale images. The methodology improves on other state-of-the-art architectures in objective numerical results. Additionally, a validation set and a high-resolution noisy image set are used, which reveal that our proposal outperforms other types of neural networks responsible for suppressing noise in images.
Quantifying Soil Complexity Using Fisher Shannon Method on 3D X-ray Computed Tomography Scans
Domingos Aguiar, Rômulo Simões Cezar Menezes, Antonio Celso Dantas Antonino, Tatijana Stosic, Ana M Tarquis, Borko Stosic. Entropy 2023, 25(10). doi:10.3390/e25101465. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10606068/pdf/

The conversion of native forest into agricultural land, which is common in many parts of the world, poses important questions regarding soil degradation, demanding further efforts to better understand the effect of land use change on soil functions. With the advent of 3D computed tomography (CT) techniques and the necessary computing power, new methods are becoming available to address this question. In this direction, in the current work we implement a modification of the Fisher-Shannon method, borrowed from information theory, to quantify the complexity of twelve 3D CT soil samples from a sugarcane plantation and twelve samples from a nearby native Atlantic forest in northeastern Brazil. The distinction found between the samples from the sugarcane plantation and the Atlantic forest site is quite pronounced: an accuracy of 91.7% was obtained by considering the complexity in the Fisher-Shannon plane. Atlantic forest samples are found to be generally more complex than those from the sugarcane plantation.