
Latest Articles in Entropy

EXIT Charts for Low-Density Algebra-Check Codes.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121118
Zuo Tang, Jing Lei, Ying Huang

This paper focuses on the Low-Density Algebra-Check (LDAC) code, a novel low-rate channel code derived from the Low-Density Parity-Check (LDPC) code with expanded algebra-check constraints. A method for optimizing LDAC code design using Extrinsic Information Transfer (EXIT) charts is presented. Firstly, an iterative decoding model for LDAC is established according to its structure, and a method for plotting EXIT curves of the algebra-check node decoder is proposed. Then, the performance of two types of algebra-check nodes under different conditions is analyzed via EXIT curves. Finally, a low-rate LDAC code with enhanced coding gain is constructed, demonstrating the effectiveness of the proposed method.
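EXIT curves track the extrinsic mutual information exchanged between component decoders. Under the consistent-Gaussian LLR assumption standard in EXIT analysis, a single curve point can be estimated by Monte Carlo; a minimal illustrative sketch (the function name and parameters are hypothetical, and this is not the paper's algebra-check decoder model):

```python
import math
import random

def mutual_info_mc(sigma, n=200_000, seed=0):
    """Monte Carlo estimate of I(X; L) for a consistent Gaussian LLR,
    L | x=+1 ~ N(sigma^2/2, sigma^2) -- the channel model behind EXIT curves.
    Uses I = 1 - E[log2(1 + e^{-L})]."""
    rng = random.Random(seed)
    mu = sigma * sigma / 2.0
    acc = 0.0
    for _ in range(n):
        l = rng.gauss(mu, sigma)
        acc += math.log2(1.0 + math.exp(-l))
    return 1.0 - acc / n
```

Sweeping `sigma` and plotting the resulting mutual information against the a priori information of the companion decoder is what produces one branch of an EXIT chart.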

Citations: 0
Machine Learning Advances in High-Entropy Alloys: A Mini-Review.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121119
Yibo Sun, Jun Ni

The efficacy of machine learning has increased exponentially over the past decade. The utilization of machine learning to predict and design materials has become a pivotal tool for accelerating materials development. High-entropy alloys are particularly intriguing candidates for exemplifying the potency of machine learning due to their superior mechanical properties, vast compositional space, and intricate chemical interactions. This review examines the general process of developing machine learning models. The advances and new algorithms of machine learning in the field of high-entropy alloys are presented for each part of the process. These advances are based both on improvements in computer algorithms and on physical representations that focus on the unique ordering properties of high-entropy alloys. We also show results from generative models, data augmentation, and transfer learning in high-entropy alloys, and conclude with a summary of the challenges still faced today in applying machine learning to high-entropy alloys.

Citations: 0
Lossless Image Compression Using Context-Dependent Linear Prediction Based on Mean Absolute Error Minimization.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121115
Grzegorz Ulacha, Mirosław Łazoryszczak

This paper presents a method for lossless image compression with fast decoding and the option to select encoder parameters per image to increase compression efficiency. The data modeling stage is based on linear and nonlinear prediction, complemented by a simple block that removes the context-dependent constant component. Prediction uses the Iterative Reweighted Least Squares (IRLS) method, which allows minimization of the mean absolute error. Prediction errors are encoded with two-stage compression: adaptive Golomb coding followed by binary arithmetic coding. High compression efficiency is achieved by the authors' context-switching algorithm, which applies several prediction models tailored to the characteristics of each image area. In addition, the impact of individual encoder parameters on efficiency and encoding time is analyzed, and the proposed solution is compared against competing methods, showing a 9.1% improvement in the average bit rate over the entire test set relative to JPEG-LS.
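The IRLS idea behind MAE-minimizing prediction can be sketched in a few lines: reweighting each sample by the inverse of its current absolute residual turns the L1 objective into a sequence of weighted least-squares fits. A toy one-coefficient version (the function name is hypothetical; the paper fits full context-dependent multi-tap predictors):

```python
def irls_l1_slope(xs, ys, iters=200, eps=1e-8):
    """Fit y ~ a*x minimizing mean absolute error via IRLS.
    Each pass solves a weighted least squares with w_i = 1/max(|r_i|, eps),
    which drives the solution toward the L1 optimum."""
    # Ordinary least-squares initialization.
    a = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    for _ in range(iters):
        w = [1.0 / max(abs(y - a * x), eps) for x, y in zip(xs, ys)]
        a = (sum(wi * x * y for wi, x, y in zip(w, xs, ys))
             / sum(wi * x * x for wi, x in zip(w, xs)))
    return a
```

On data with one gross outlier, the L1 fit stays near the trend of the inliers, which is exactly why MAE-based prediction is attractive for image residues with heavy-tailed statistics.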

Citations: 0
Sample Augmentation Using Enhanced Auxiliary Classifier Generative Adversarial Network by Transformer for Railway Freight Train Wheelset Bearing Fault Diagnosis.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121113
Jing Zhao, Junfeng Li, Zonghao Yuan, Tianming Mu, Zengqiang Ma, Suyan Liu

Diagnosing faults in wheelset bearings is critical for train safety. The main challenge is that only a limited amount of fault sample data can be obtained during high-speed train operations, and this scarcity impacts the training and accuracy of deep learning models for wheelset bearing fault diagnosis. Studies show that the Auxiliary Classifier Generative Adversarial Network (ACGAN) is promising for addressing this issue; however, existing ACGAN models suffer from complexity, high computational expense, mode collapse, and vanishing gradients. To address these issues, this paper presents the Transformer and Auxiliary Classifier Generative Adversarial Network (TACGAN), which increases the diversity and complexity of the generated samples while maximizing their entropy. The transformer network replaces traditional convolutional neural networks (CNNs), avoiding recurrent and convolutional structures and thereby reducing computational expense. Moreover, an independent classifier is integrated to avoid the coupling problem in ACGAN, where the discriminator must simultaneously discriminate and classify. Finally, the Wasserstein distance is employed in the loss function to mitigate mode collapse and vanishing gradients. Experimental results on train wheelset bearing datasets demonstrate the accuracy and effectiveness of the TACGAN.
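The Wasserstein objective mentioned above replaces the saturating GAN loss with a difference of critic means, which keeps gradients informative away from the optimum. A minimal sketch of the critic-side loss (illustrative only; TACGAN additionally needs a Lipschitz constraint on the critic and an auxiliary classification term, neither shown here):

```python
def wasserstein_critic_loss(d_real, d_fake):
    """WGAN critic sketch: the critic maximizes E[D(real)] - E[D(fake)],
    so the negated difference is returned as a quantity to minimize."""
    mean = lambda xs: sum(xs) / len(xs)
    return -(mean(d_real) - mean(d_fake))
```

The generator side then simply minimizes `-mean(d_fake)`, pushing generated samples toward higher critic scores.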

Citations: 0
Parallel Bayesian Optimization of Thermophysical Properties of Low Thermal Conductivity Materials Using the Transient Plane Source Method in the Body-Fitted Coordinate.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121117
Huijuan Su, Jianye Kang, Yan Li, Mingxin Lyu, Yanhua Lai, Zhen Dong

A heat transfer model for the transient plane source (TPS) method was established. A body-fitted coordinate system is proposed to transform the unstructured grid, improving the speed of solving the direct heat transfer problem of the winding probe. For the inverse problem, a parallel Bayesian optimization algorithm based on a multi-objective hybrid strategy (MHS) is proposed, improving the efficiency of thermophysical property inversion. The results show that the 30° meshing scheme performs best, and that the quality of the body-fitted mesh transformation depends on the orthogonality and density of the mesh. Compared with parameter inversion using computational fluid dynamics (CFD) software, the absolute relative deviations for different materials are below 0.03%. The body-fitted grid program computes more than 36% faster than the CFD program and more than 91% faster than the self-developed unstructured-mesh program, so the body-fitted coordinate system effectively accelerates the TPS method. In parallel mode, the MHS is more competitive than the other algorithms in both accuracy and speed. Inversion accuracy is only weakly affected by the number of initial samples, the time range, and the number of parallel points. Increasing the number of parallel points from 2 to 6 reduced the computation time by 66.6%; adding parallel points effectively accelerates convergence.

Citations: 0
A DNA Data Storage Method Using Spatial Encoding Based Lossless Compression.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121116
Esra Şatır

With the rapid growth of global data and the rapid development of information technology, DNA sequences have been collected and manipulated on computers. This has yielded a new and attractive field of bioinformatics, DNA storage, in which DNA is regarded as a storage medium of great potential. It is reported that one gram of DNA can store 215 GB of data, and that data stored in DNA can be preserved for tens of thousands of years. In this study, a lossless and reversible DNA data storage method is proposed. The approach employs a vector representation of each DNA base in a two-dimensional (2D) spatial domain for both encoding and decoding. The structure of the method is reversible, making decompression possible. Experiments were performed to investigate capacity, compression ratio, stability, and reliability. The results show that the proposed method is considerably more efficient in terms of capacity than other known algorithms in the literature.
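The core reversibility requirement, that every base maps to a distinct 2D vector and back with no loss, can be illustrated with a toy mapping. The particular vectors below are hypothetical choices for illustration; the paper's actual spatial encoding is more elaborate:

```python
# Hypothetical base-to-vector assignment (illustrative, not the paper's scheme):
# each base gets a distinct unit vector in the 2D plane, so the map is injective
# and therefore losslessly invertible.
BASE2VEC = {'A': (1, 0), 'T': (-1, 0), 'G': (0, 1), 'C': (0, -1)}
VEC2BASE = {v: b for b, v in BASE2VEC.items()}

def encode(seq):
    """Map a DNA string to its list of 2D vectors."""
    return [BASE2VEC[b] for b in seq]

def decode(vecs):
    """Invert the mapping exactly -- the round trip must be lossless."""
    return ''.join(VEC2BASE[v] for v in vecs)
```

Any injective assignment works for the round trip; the compression gain in the paper comes from how the spatial representation is subsequently exploited, which this sketch does not attempt.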

Citations: 0
Contextual Fine-Tuning of Language Models with Classifier-Driven Content Moderation for Text Generation.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-20 | DOI: 10.3390/e26121114
Matan Punnaivanam, Palani Velvizhy

In today's digital age, ensuring the appropriateness of content for children is crucial for their cognitive and emotional development. The rise of automated text generation technologies, such as Large Language Models like LLaMA, Mistral, and Zephyr, has created a pressing need for effective tools to filter and classify suitable content. However, the existing methods often fail to effectively address the intricate details and unique characteristics of children's literature. This study aims to bridge this gap by developing a robust framework that utilizes fine-tuned language models, classification techniques, and contextual story generation to generate and classify children's stories based on their suitability. Employing a combination of fine-tuning techniques on models such as LLaMA, Mistral, and Zephyr, alongside a BERT-based classifier, we evaluated the generated stories against established metrics like ROUGE, METEOR, and BERT Scores. The fine-tuned Mistral-7B model achieved a ROUGE-1 score of 0.4785, significantly higher than the base model's 0.3185, while Zephyr-7B-Beta achieved a METEOR score of 0.4154 compared to its base counterpart's score of 0.3602. The results indicated that the fine-tuned models outperformed base models, generating content more aligned with human standards. Moreover, the BERT Classifier exhibited high precision (0.95) and recall (0.97) for identifying unsuitable content, further enhancing the reliability of content classification. These findings highlight the potential of advanced language models in generating age-appropriate stories and enhancing content moderation strategies. This research has broader implications for educational technology, content curation, and parental control systems, offering a scalable approach to ensuring children's exposure to safe and enriching narratives.
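The reported precision (0.95) and recall (0.97) for flagging unsuitable content follow the standard definitions over the classifier's confusion counts; a quick generic reference implementation (not tied to the paper's data):

```python
def precision_recall(tp, fp, fn):
    """precision = TP/(TP+FP): of the stories flagged unsuitable, how many
    truly were; recall = TP/(TP+FN): of the truly unsuitable stories, how
    many were caught."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return precision, recall
```

For a moderation classifier, high recall matters most: a false negative exposes a child to unsuitable content, while a false positive merely discards a usable story.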

Citations: 0
Bayesian Assessment of Corrosion-Related Failures in Steel Pipelines.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-19 | DOI: 10.3390/e26121111
Fabrizio Ruggeri, Enrico Cagno, Franco Caron, Mauro Mancini, Antonio Pievatolo

The probability of gas escapes from steel pipelines due to different types of corrosion is studied with real failure data from an urban gas distribution network. Both the design and maintenance of the network are considered, identifying and estimating (in a Bayesian framework) an elementary multinomial model in the first case, and a more sophisticated non-homogeneous Poisson process in the second case. Special attention is paid to the elicitation of the experts' opinions. We conclude that the corrosion process behaves quite differently depending on the type of corrosion, and that, in most cases, cathodically protected pipes should be installed.
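An elementary multinomial model of failures by corrosion type admits a conjugate Dirichlet analysis: elicited expert opinion enters as pseudo-counts in the prior, and observed failure counts simply add to them. A minimal sketch with made-up numbers (the paper's elicitation procedure is far more careful):

```python
def dirichlet_posterior(prior, counts):
    """Conjugate update: Dirichlet(prior) prior + multinomial counts
    -> Dirichlet(prior + counts) posterior. Returns the posterior
    parameters and the posterior mean probabilities."""
    post = [a + c for a, c in zip(prior, counts)]
    total = sum(post)
    return post, [a / total for a in post]
```

With a uniform prior `[1, 1, 1]` over three corrosion types and observed counts `[3, 5, 2]`, the posterior mean concentrates on the second type, mirroring how the data sharpen the expert prior.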

Citations: 0
Network Coding-Enhanced Polar Codes for Relay-Assisted Visible Light Communication Systems.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-19 | DOI: 10.3390/e26121112
Congduan Li, Mingyang Zhong, Yiqian Zhang, Dan Song, Nanfeng Zhang, Jingfeng Yang

This paper proposes a novel polar coding scheme tailored for indoor visible light communication (VLC) systems. Simulation results demonstrate a significant reduction in bit error rate (BER) compared to uncoded transmission, with a coding gain of at least 5 dB. Furthermore, the reliable communication area of the VLC system is substantially extended. Building on this foundation, this study explores the joint design of polar codes and physical-layer network coding (PNC) for VLC systems. Simulation results illustrate that the BER of our scheme closely approaches that of the conventional VLC relay scheme. Moreover, our approach doubles the throughput, cuts equipment expenses in half, and boosts effective bit rates per unit time-slot twofold. This proposed design noticeably advances the performance of VLC systems and is particularly well-suited for scenarios with low-latency demands.
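Physical-layer network coding is what halves equipment expense and doubles per-slot throughput in a two-way relay setup: the relay broadcasts one combined stream instead of forwarding each user's stream in its own slot. The bit-level idea, stripped of the polar coding and the VLC channel (illustrative sketch with hypothetical function names):

```python
def pnc_combine(bits_a, bits_b):
    """Relay side: broadcast the XOR of the two users' decoded bit streams
    in a single time slot."""
    return [a ^ b for a, b in zip(bits_a, bits_b)]

def pnc_recover(relay_bits, own_bits):
    """End-node side: XOR the broadcast with the locally known bits to
    recover the other user's bits."""
    return [r ^ o for r, o in zip(relay_bits, own_bits)]
```

Because XOR is its own inverse, each end node recovers the other's message exactly, so one broadcast slot replaces two forwarding slots.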

Citations: 0
A Novel Video Compression Approach Based on Two-Stage Learning.
IF 2.1 | CAS Zone 3 (Physics and Astronomy) | Q2 PHYSICS, MULTIDISCIPLINARY | Pub Date: 2024-12-19 | DOI: 10.3390/e26121110
Dan Shao, Ning Wang, Pu Chen, Yu Liu, Lin Lin

In recent years, the rapid growth of video data has posed challenges for storage and transmission, and video compression techniques provide a viable solution. In this study, we propose a bidirectional coding video compression model named DeepBiVC, based on two-stage learning. First, we preprocess the video data by segmenting the video stream into groups of five consecutive image frames. Then, in the first stage, an image compression module based on an invertible neural network (INN) compresses the first and last frames of each group. In the second stage, a video compression module compresses the intermediate frames using bidirectional optical flow estimation. Experimental results indicate that DeepBiVC outperforms other state-of-the-art video compression methods on the PSNR and MS-SSIM metrics; specifically, on the VUG dataset at bpp = 0.3, DeepBiVC achieves a PSNR of 37.16 and an MS-SSIM of 0.98.
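The preprocessing and key-frame split described above, five-frame groups whose first and last frames are image-coded while the middle three are flow-interpolated, can be sketched as follows (helper names are hypothetical):

```python
def group_frames(frames, group_size=5):
    """Segment the frame stream into consecutive groups of `group_size`;
    a trailing partial group is kept as-is."""
    return [frames[i:i + group_size] for i in range(0, len(frames), group_size)]

def split_group(group):
    """Key frames (first, last) go to the INN image codec; the remaining
    frames are reconstructed from bidirectional optical flow between them."""
    return (group[0], group[-1]), group[1:-1]
```

Only two of every five frames need full image coding; the rest carry just flow and residual information, which is where the rate saving comes from.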
