Pub Date: 2025-12-30 | DOI: 10.1109/JSEN.2025.3646617
Zeyang Li;Chen Chen;Jie Zhang;Claudio R. C. M. da Silva;Okan Yurduseven;Trung Q. Duong;Simon L. Cotton
A new standard, IEEE 802.11bf, has been created to offer Wi-Fi sensing capabilities across the sub-7 GHz and millimeter-wave bands. Wi-Fi sensing relies on channel state information (CSI) to enable applications such as motion detection, activity recognition, and gesture recognition. Within this context, this article investigates part of the 6-GHz spectrum for use in Wi-Fi sensing, with the aim of recognizing different types of line-of-sight (LOS) perturbation. To achieve this, a novel feature extraction methodology is presented, along with innovative features designed to comprehensively capture information from CSI. More precisely, a novel random forest (RF)-based algorithm is introduced that automatically selects optimal features and constructs accurate decision trees for the classification of various human interactions with the LOS link between two Wi-Fi devices. The proposed feature extraction and selection methodology leverages variations in the channel, which are manifested as changes in the characteristics of signal propagation caused by movements in the proximity of the LOS link. Using statistical channel metrics, which can be directly linked to the physical channel, enhances the efficiency and accuracy of LOS perturbation classification. A detailed set of experiments is used to demonstrate the accuracy of our approach, which we call channel model-based features-RF (CMF-RF). CMF-RF has been shown to outperform existing methods when used to classify human interactions with the LOS link.
{"title":"Sensing Line-of-Sight Perturbations in 6-GHz Wi-Fi Using Channel Model-Based Features","authors":"Zeyang Li;Chen Chen;Jie Zhang;Claudio R. C. M. da Silva;Okan Yurduseven;Trung Q. Duong;Simon L. Cotton","doi":"10.1109/JSEN.2025.3646617","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3646617","url":null,"abstract":"A new standard, IEEE 802.11bf, has been created to offer Wi-Fi sensing capabilities across sub-7 GHz and millimeter-wave bands. Applications of Wi-Fi sensing rely on channel state information (CSI) to enable various applications such as motion detection, activity recognition, and gesture recognition. Within this context, this article investigates part of the 6-GHz spectrum for use in Wi-Fi sensing, with the aim of recognizing different types of line-of-sight (LOS) perturbation. To achieve this, a novel feature extraction methodology is presented, along with innovative features designed to comprehensively capture information from CSI. More precisely, a novel random forest (RF)-based algorithm is introduced that automatically selects optimal features and constructs accurate decision trees for the classification of various human interactions with the LOS link between two Wi-Fi devices. The proposed feature extraction and selection methodology leverages variations in the channel, which are manifested by the changes in the characteristics of signal propagation caused by movements in proximity of the LOS link. Using statistical channel metrics, which can be directly linked to the physical channel, enhances the efficiency and accuracy of LOS perturbation classification. A detailed set of experiments is used to demonstrate the accuracy of our approach, which we call channel model-based features-RF (CMF-RF). 
CMF-RF has been shown to outperform existing methods when used to classify human interactions with the LOS link.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"26 3","pages":"5181-5194"},"PeriodicalIF":4.3,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082135","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
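The abstract above describes extracting statistical channel metrics from CSI and automatically selecting the most discriminative ones. As a minimal illustrative sketch (not the paper's CMF-RF algorithm — the exact metrics and selection rule are not given in the abstract), one can compute simple envelope statistics per CSI capture and rank them with a Fisher separability score; all signal values here are synthetic stand-ins:

```python
import numpy as np

def channel_features(amplitude):
    """Statistical metrics of a CSI amplitude time series (illustrative set)."""
    mu = amplitude.mean()
    sd = amplitude.std()
    centered = amplitude - mu
    skew = (centered**3).mean() / sd**3
    kurt = (centered**4).mean() / sd**4
    return np.array([mu, sd, skew, kurt, sd / mu])  # last: coefficient of variation

def fisher_scores(class_a, class_b):
    """Rank features by a one-dimensional Fisher separability score."""
    fa = np.array([channel_features(x) for x in class_a])
    fb = np.array([channel_features(x) for x in class_b])
    return (fa.mean(0) - fb.mean(0)) ** 2 / (fa.var(0) + fb.var(0) + 1e-12)

rng = np.random.default_rng(0)
# Toy stand-ins: a steady LOS link vs. one perturbed by nearby movement.
static = [1.0 + 0.05 * rng.standard_normal(256) for _ in range(40)]
perturbed = [1.0 + 0.30 * rng.standard_normal(256) for _ in range(40)]

scores = fisher_scores(static, perturbed)
names = ["mean", "std", "skewness", "kurtosis", "coeff_var"]
print(max(zip(scores, names))[1])
```

In this toy setup the amplitude spread separates the two classes, so dispersion-type features score highest; a random forest would then be grown on the top-ranked features.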
Pub Date: 2025-12-30 | DOI: 10.1109/JSEN.2025.3645357
Satish Kumar Satti;M. Prasad
Air writing is a cutting-edge method of contactless human–machine interaction in which characters or words are written in the air with fingertip gestures. It can replace keyboards and touchscreens, making it particularly useful for smart devices, healthcare applications, and hands-free text input. Predicting a single character in air writing is straightforward; however, detecting and classifying multiple or overlapping characters remains difficult. To address this issue, we propose a vision-sensor-based approach that combines a hand tracking algorithm with a ResYOLO-Transformer model. We also use the chaotic honey badger algorithm (CHBA) to optimize hyperparameters, which helps avoid local optima and improves the exploration–exploitation balance, thereby improving prediction accuracy. A custom dataset with 26 classes was created, using specific hand gestures to ensure that each character’s coordinates were recorded separately, even when characters overlapped. The proposed model was trained and evaluated on the custom and ISI datasets, achieving an accuracy of 97.49% and demonstrating its effectiveness in robust air-written character detection and classification. Compared to other cutting-edge models such as YOLOv5, YOLOv7, YOLOv9, YOLOv11, and the vision transformer (ViT), the proposed ResYOLO-Transformer performs better. Furthermore, when integrated with the CHBA, the proposed model outperformed other optimization techniques such as CSO, PSO, BSO, and CJAYA, achieving an improved prediction accuracy of 98.89%.
{"title":"Air-Written Multicharacter Detection and Classification Using Vision-Based Hand Gestures and an Optimized ResYOLO-Transformer","authors":"Satish Kumar Satti;M. Prasad","doi":"10.1109/JSEN.2025.3645357","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3645357","url":null,"abstract":"Air writing is a cutting-edge method of contactless human–machine interaction. It involves writing characters or words in the air with fingertip gestures. This method replaces keyboards and touchscreens, making it particularly useful for smart devices, healthcare applications, and handsfree text input. Predicting a single character in air writing is simple. However, detecting and classifying multiple or overlapping characters remains difficult. To address this issue, we proposed a vision-sensor-based approach that includes a Hand Tracking Algorithm and a ResYOLO-Transformer model. We also use the chaotic honey badge algorithm to optimize hyperparameters. This ensures an ideal balance across parameters. It helps avoid local optima and enhances the exploration-exploitation balance, improving prediction accuracy. A custom dataset with 26 classes was created. We used specific hand gestures to ensure that each character’s coordinates were recorded separately, even if they overlapped. The proposed model was trained and evaluated on custom and ISI datasets. It achieved an accuracy of 97.49%, demonstrating its effectiveness in robust air-written character detection and classification. Compared to other cutting-edge models such as YOLOV5, YOLOV7, YOLOV9, YOLOV11, and vision transformer (ViT), the proposed ResYOLO-Transformer model performs better. Furthermore, when integrated with the chaotic honey badger algorithm (CHBA), the proposed model outperformed other optimization techniques like CSO, PSO, BSO, and CJAYA. 
It achieved an improved prediction accuracy of 98.89%.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"26 3","pages":"5229-5240"},"PeriodicalIF":4.3,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082051","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
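The chaos-driven hyperparameter search mentioned above can be sketched generically. The CHBA's actual update rules are not specified in the abstract; the following only shows the core idea shared by chaotic optimizers — using a logistic-map sequence instead of uniform random draws to propose candidates — on a made-up one-dimensional objective:

```python
import math

def chaotic_search(objective, lo, hi, iters=400, x0=0.7):
    """Minimise `objective` over [lo, hi] using logistic-map samples.

    The logistic map x <- 4x(1-x) yields a dense, non-repeating sequence in
    (0, 1); mapping it onto the search interval gives a simple chaotic
    alternative to uniform random search (a sketch, not the CHBA itself).
    """
    x = x0
    best_p, best_v = None, math.inf
    for _ in range(iters):
        x = 4.0 * x * (1.0 - x)      # logistic chaotic map
        p = lo + (hi - lo) * x       # candidate hyperparameter value
        v = objective(p)
        if v < best_v:
            best_p, best_v = p, v
    return best_p, best_v

# Toy objective with a known minimum at p = 2.5.
best_p, best_v = chaotic_search(lambda p: (p - 2.5) ** 2, 0.0, 5.0)
print(best_p, best_v)
```

A full CHBA would combine such chaotic sequences with the honey badger algorithm's digging/honey phases and population updates.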
Pub Date: 2025-12-30 | DOI: 10.1109/TPS.2025.3644784
Hamza Ben Krid;Hamza Wertani;Aymen Hlali;Hassen Zairi
This work presents a hybrid copper–graphene terahertz (THz) sensor for multigas detection with tunable performance. The design achieves frequency reconfiguration from 5.235 THz for $\mu_{c} = 0$ eV to 5.265 THz for $\mu_{c} = 0.5$ eV, confirming the strong plasmonic control of graphene. Sensitivity analysis shows values of 405.4 GHz/RIU for CH$_{4}$, 816.3 GHz/RIU for CO$_{2}$, 847.5 GHz/RIU for H$_{2}$O, and 606.1 GHz/RIU for NH$_{3}$. To further enhance prediction accuracy, an eXtreme Gradient Boosting (XGBoost) regression model was employed, achieving $R^{2} = 0.998$. After optimization, the sensitivities were improved to 603.0, 960.7, 1003.2, and 604.5 GHz/RIU, respectively. The proposed approach highlights the dominant role of the graphene chemical potential in resonance tuning and sensitivity enhancement, establishing a compact and selective platform for advanced THz gas sensing.
{"title":"Optimization of a Hybrid Graphene–Copper Terahertz Gas Sensor Using Machine Learning","authors":"Hamza Ben Krid;Hamza Wertani;Aymen Hlali;Hassen Zairi","doi":"10.1109/TPS.2025.3644784","DOIUrl":"https://doi.org/10.1109/TPS.2025.3644784","url":null,"abstract":"This work presents a hybrid copper–graphene terahertz (THz) sensor for multigas detection with tunable performance. The design achieves frequency reconfiguration from 5.235 THz for <inline-formula> <tex-math>$mu _{c} = 0$ </tex-math></inline-formula> eV to 5.265 THz for <inline-formula> <tex-math>$mu _{c} = 0.5$ </tex-math></inline-formula> eV, confirming the strong plasmonic control of graphene. Sensitivity analysis shows values of 405.4 GHz/RIU for CH<sub>4</sub>, 816.3 GHz/RIU for CO<sub>2</sub>, 847.5 GHz/RIU for H<sub>2</sub>O, and 606.1 GHz/RIU for NH<sub>3</sub>. To further enhance prediction accuracy, an eXtreme Gradient Boosting (XGBoost) regression model was employed, achieving <inline-formula> <tex-math>$R^{2} = 0.998$ </tex-math></inline-formula>. After optimization, the sensitivities were improved to 603.0, 960.7, 1003.2, and 604.5 GHz/RIU, respectively. The proposed approach highlights the dominant role of graphene chemical potential in resonance tuning and sensitivity enhancement, establishing a compact and selective platform for advanced THz gas sensing.","PeriodicalId":450,"journal":{"name":"IEEE Transactions on Plasma Science","volume":"54 2","pages":"766-773"},"PeriodicalIF":1.5,"publicationDate":"2025-12-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
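The GHz/RIU figures quoted above follow the standard definition of refractometric sensitivity: the slope of resonance frequency versus analyte refractive index. The sketch below computes that slope from a linear fit; the index/frequency pairs are invented for illustration (the paper reports only the resulting sensitivities, not the raw sweep):

```python
import numpy as np

# Illustrative (made-up) sweep: analyte refractive index vs. simulated
# resonance frequency in THz.
n = np.array([1.0000, 1.0005, 1.0010, 1.0015])
f_res_thz = np.array([5.2650, 5.2646, 5.2642, 5.2638])

# Sensitivity is the slope df/dn, conventionally quoted in GHz per
# refractive-index unit (GHz/RIU); 1 THz = 1000 GHz.
slope_thz_per_riu = np.polyfit(n, f_res_thz, 1)[0]
sensitivity_ghz_per_riu = abs(slope_thz_per_riu) * 1e3
print(sensitivity_ghz_per_riu)  # slope magnitude in GHz/RIU
```

In the paper, an XGBoost regressor is trained on such simulated responses to predict (and then optimize) these slopes across gases.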
Anti-counterfeiting technology plays a crucial role in ensuring food safety. To address the vulnerability of traditional visual labels to forgery, this study proposes a magnetic anti-counterfeiting label recognition system based on a magnetic image sensor. Magnetic dipole theory guides the design of covert magnetic labels, and the imaging mechanism of the sensor informs the development of a handheld detection device. By embedding the label into food packaging, the system establishes an invisible anti-counterfeiting feature. It captures the magnetic image of each label with the magnetic image sensor and performs real-time authentication using a lightweight recognition algorithm on an embedded microcontroller; the results are wirelessly transmitted to a smartphone for user verification. Experimental evaluations confirm excellent imaging consistency and signal stability. The system remains robust under electromagnetic interference and temperature variations, achieving an identification accuracy exceeding 99.9% in experiments on packaged grain and oil products. Owing to its strong concealment, environmental adaptability, and resistance to duplication, the system offers a practical and efficient anti-counterfeiting solution with significant potential for real-world deployment.
{"title":"A Magnetic Image Recognition System for Anti-Counterfeiting in Grain and Oil Food Packaging","authors":"Xuyan Zhao;Qiao Wang;Xinyi Wei;Qunfeng Niu;Kun Xu;Chenglong Xing;Haofu Zhang;Changtong Zhao;Li Wang;Yuan Zhang","doi":"10.1109/JSEN.2025.3645890","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3645890","url":null,"abstract":"Anti-counterfeiting technology plays a crucial role in ensuring food safety. To address the vulnerability of traditional visual labels to forgery, this study proposes a magnetic anti-counterfeiting label recognition system based on a magnetic image sensor. Magnetic dipole theory guides the design of covert magnetic labels, and the imaging mechanism of the sensor informs the development of a handheld detection device. By embedding the label into food packaging, the system establishes an invisible anti-counterfeiting feature. It captures the magnetic image of labels through a magnetic image sensor and performs real-time authentication using a lightweight recognition algorithm on an embedded microcontroller. The results are wirelessly transmitted to a smartphone for user verification. Experimental evaluations confirm excellent imaging consistency and signal stability. The system remains robust under electromagnetic interference and temperature variations, achieving an identification accuracy exceeding 99.9% in the conducted experiments on packaged grain and oil products. 
Owing to its strong concealment, environmental adaptability, and resistance to duplication, the system offers a practical and efficient anti-counterfeiting solution with significant potential for real-world deployment.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"26 5","pages":"7659-7669"},"PeriodicalIF":4.3,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"147299644","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
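The abstract does not specify the lightweight recognition algorithm, so the following is only a generic sketch of one MCU-friendly option: authenticating a scanned magnetic image against an enrolled reference via zero-mean normalized cross-correlation. The images, noise levels, and threshold are all hypothetical:

```python
import numpy as np

def ncc(image, template):
    """Zero-mean normalised cross-correlation of two equally sized
    magnetic images; 1.0 means a perfect match."""
    a = image - image.mean()
    b = template - template.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom else 0.0

def authenticate(scan, reference, threshold=0.9):
    """Accept the scan only if it correlates strongly with the enrolled label."""
    return ncc(scan, reference) >= threshold

rng = np.random.default_rng(2)
reference = rng.random((16, 16))                       # enrolled label signature
genuine = reference + 0.02 * rng.standard_normal((16, 16))  # noisy rescan
forged = rng.random((16, 16))                          # unrelated pattern

print(authenticate(genuine, reference), authenticate(forged, reference))
```

Correlation-style matching is cheap enough for an embedded microcontroller, which is the deployment target described in the abstract.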
Pub Date: 2025-12-26 | DOI: 10.1109/TNANO.2025.3648734
L. Hemanth Krishna;B. Srinivasu;K. Sridharan
In this paper, we present efficient designs of approximate ternary multipliers applicable to several emerging nanodevices. The proposed multipliers are motivated by the multiply-and-accumulate (MAC) operation in convolutional neural networks (CNNs). In particular, CNN applications in imaging are resilient to errors, so it is advantageous to examine methods that save energy and reduce delay. Two approximate single-digit ternary multipliers are proposed and used to develop approximate $3 \times 3$ and $6 \times 6$ ternary multipliers. The proposed approximate $6 \times 6$ multiplier saves 22% to 40% energy over recent approximate designs. Further, the proposed multipliers reduce delay by roughly 21% over the best existing design. The multipliers are based on their exact counterparts, which are, in turn, developed using an efficient exact ternary carry adder (TCAD) that generates the sum of two carry outputs of a single-ternary-digit multiplier. The application of the approximate multipliers to CNN-based imaging is then demonstrated. In particular, the proposed approximate multipliers perform excellently for CNN-based image denoising, and they also show good performance on the MNIST and CIFAR-10 datasets. Simulations for carbon nanotube FET (CNTFET) technology reveal energy savings in excess of 50% over the best existing multipliers.
{"title":"Efficient Approximate Ternary Multipliers for Emerging Nanodevices","authors":"L. Hemanth Krishna;B. Srinivasu;K. Sridharan","doi":"10.1109/TNANO.2025.3648734","DOIUrl":"https://doi.org/10.1109/TNANO.2025.3648734","url":null,"abstract":"In this paper, we present efficient designs of <italic>approximate ternary multipliers</i> applicable to several emerging nanodevices. The proposed multipliers are motivated by the multiply-and-accumulate (MAC) operation in convolutional neural networks (CNNs). In particular, CNN applications in imaging are resilient to errors and it is therefore advantageous to examine methods that save energy and reduce the delay. Two <italic>approximate single-digit ternary multipliers</i> are proposed. The single-digit approximate multipliers are used to develop an approximate <inline-formula><tex-math>$3 times 3$</tex-math></inline-formula> and <inline-formula><tex-math>$6 times 6$</tex-math></inline-formula> ternary multipliers. The proposed approximate <inline-formula><tex-math>$6 times 6$</tex-math></inline-formula> multiplier saves energy in the range of 22% to 40% over recent approximate designs. Further, there is a reduction of delay of roughly 21<inline-formula><tex-math>$%$</tex-math></inline-formula> with the proposed multipliers over the best existing design. The multipliers are based on their <italic>exact</i> counterparts which are, in turn, developed using an efficient exact <italic>ternary carry adder (TCAD)</i> that generates the sum of two carry outputs of a single ternary digit multiplier. The application of the approximate multipliers to CNN-based imaging is then demonstrated. In particular, the proposed approximate multipliers have excellent performance for CNN-based image denoising. Further, the approximate multipliers show good performance on MNIST and CIFAR-10 datasets. 
Simulations for Carbon Nanotube FET (CNTFET) reveal energy savings in excess of 50% over the best existing multipliers.","PeriodicalId":449,"journal":{"name":"IEEE Transactions on Nanotechnology","volume":"25 ","pages":"1-12"},"PeriodicalIF":2.1,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
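For readers unfamiliar with ternary arithmetic, the exact baseline that the approximate designs build on can be modeled in software: in balanced ternary each digit is in {-1, 0, 1}, so a single-digit product is just the product of the two digits, and a multi-digit multiply is shift-and-add with carry normalization. This sketch is a behavioral model only (the paper's approximate digit tables are not given in the abstract):

```python
def to_balanced_ternary(n):
    """Integer -> balanced-ternary digits {-1, 0, 1}, least significant first."""
    if n == 0:
        return [0]
    sign, n = (1, n) if n >= 0 else (-1, -n)
    digits = []
    while n:
        r = n % 3
        n //= 3
        if r == 2:              # 2 == 3 - 1: emit digit -1, carry 1
            r, n = -1, n + 1
        digits.append(sign * r)  # negation is digit-wise in balanced ternary
    return digits

def from_balanced_ternary(digits):
    return sum(d * 3**i for i, d in enumerate(digits))

def bt_multiply(a, b):
    """Shift-and-add multiply on balanced-ternary digit lists.

    Each partial product is a single-trit multiplication (just d1*d2);
    carries are then normalised back into the balanced digit set.
    """
    raw = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            raw[i + j] += da * db        # exact single-digit product
    out, carry = [], 0
    for s in raw:
        s += carry
        d = ((s + 1) % 3) - 1            # balanced remainder in {-1, 0, 1}
        carry = (s - d) // 3
        out.append(d)
    while carry:
        d = ((carry + 1) % 3) - 1
        carry = (carry - d) // 3
        out.append(d)
    return out

product = from_balanced_ternary(
    bt_multiply(to_balanced_ternary(14), to_balanced_ternary(-25)))
print(product)
```

An approximate design would replace some entries of the single-digit product (or the carry handling) with cheaper, error-tolerant logic — the trade-off the paper quantifies on CNN workloads.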
Pub Date: 2025-12-23 | DOI: 10.1109/TSTE.2025.3640786
{"title":"IEEE Transactions on Sustainable Energy Information for Authors","authors":"","doi":"10.1109/TSTE.2025.3640786","DOIUrl":"https://doi.org/10.1109/TSTE.2025.3640786","url":null,"abstract":"","PeriodicalId":452,"journal":{"name":"IEEE Transactions on Sustainable Energy","volume":"17 1","pages":"C4-C4"},"PeriodicalIF":10.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313738","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145808630","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In recent years, the widespread adoption of drones, while offering convenience, has also led to significant security challenges such as illegal intrusions and privacy violations, creating an urgent need for reliable identification and classification systems. A primary obstacle to achieving this reliability is the high similarity of radio frequency (RF) signals among different drone models, which often leads to misclassification. In this study, we propose DS-UAVNet, a network that employs a dual-branch architecture to independently process complementary information from the time and frequency domains, thereby preventing information loss. Within this network, a purpose-built parallel convolution module efficiently extracts multiscale features while reducing model complexity. To address the inherent vulnerabilities of single-modality drone identification systems, we further design M-DS-UAVNet, a multimodal framework that enhances identification robustness by leveraging a transfer learning strategy to fuse audio and RF features. Evaluations show that DS-UAVNet achieves accuracies of 98.74% and 98.56% on the public DroneRF dataset for drone classification and operation-mode recognition, respectively, outperforming existing methods. Moreover, the M-DS-UAVNet framework achieves 100.00% and 99.78% accuracy on the constructed multimodal dataset, validating the effectiveness of the multimodal fusion strategy for building identification systems.
{"title":"An Efficient Dual-Branch Network and Multimodal Fusion Framework for Drone Identification","authors":"Borong Fu;Yan Zhang;Jiaming Wu;Feiyang Ye;Wancheng Zhang","doi":"10.1109/JSEN.2025.3645409","DOIUrl":"https://doi.org/10.1109/JSEN.2025.3645409","url":null,"abstract":"In recent years, the widespread adoption of drones, while offering convenience, has also led to significant security challenges such as illegal intrusions and privacy violations, creating an urgent need for reliable identification and classification systems. A primary obstacle to achieving this reliability is the high similarity of radio frequency (RF) signals among different drone models, which often leads to misclassification. In this study, we propose the DS-UAVNet, a network that employs a dual-branch architecture to independently process complementary information from the time and frequency domains, thereby preventing information loss. Within this network, a designed parallel convolution module efficiently extracts multiscale features while reducing model complexity. To address the inherent vulnerabilities of the single-modality drone identification system, we further design M-DS-UAVNet, a multimodal framework that enhances identification robustness by leveraging a transfer learning strategy to fuse audio and RF features. Evaluations show that DS-UAVNet achieves accuracies of 98.74% and 98.56% on the public DroneRF dataset for drone classification and operation mode recognition, respectively, outperforming existing methods. 
Moreover, the M-DS-UAVNet framework achieves 100.00% and 99.78% accuracy on the constructed multimodal dataset, validating the effectiveness of the multimodal fusion strategy for building identification systems.","PeriodicalId":447,"journal":{"name":"IEEE Sensors Journal","volume":"26 3","pages":"5241-5253"},"PeriodicalIF":4.3,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146082011","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
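The dual-branch idea — processing the raw time series and its spectrum separately, then fusing — can be illustrated without any deep-learning machinery. The sketch below uses hand-crafted descriptors where DS-UAVNet would use learned convolutional branches; the signal is a synthetic two-tone stand-in for a drone RF burst:

```python
import numpy as np

def dual_branch_features(signal, k=8):
    """Concatenate complementary time- and frequency-domain descriptors.

    Time branch: simple envelope statistics. Frequency branch: the k
    largest FFT magnitude bins. A learned model would replace both
    branches with convolutional feature extractors before fusion.
    """
    time_feats = np.array([signal.mean(), signal.std(),
                           np.abs(np.diff(signal)).mean()])
    spectrum = np.abs(np.fft.rfft(signal))
    freq_feats = np.sort(spectrum)[-k:]
    return np.concatenate([time_feats, freq_feats])

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512, endpoint=False)
# Toy stand-in for an RF capture: two tones plus noise.
x = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 90 * t) \
    + 0.1 * rng.standard_normal(512)

feats = dual_branch_features(x)
print(feats.shape)
```

Keeping the two branches separate until the fusion step is what the abstract credits with preventing information loss between domains.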
Pub Date: 2025-12-23 | DOI: 10.1109/JPHOTOV.2025.3642887
Jing-Wu Dong;Jyun-Guei Huang;Yu-Min Lin;Kuan-Wei Lee;Yu-Qian Ye;Che-Yu Lin
This study establishes an integrated approach to quantify partial shading losses in commercial half-cut photovoltaic modules. Systematic indoor experiments were conducted on a 440 W c-Si half-cut module, providing current-voltage data under controlled shading. The data were used to calibrate a detailed LTspice circuit model. Further, an analytical loss function was developed to predict power losses as a function of shaded substring fraction and configuration. This analytical loss function was then refined through empirical fitting to the experimentally validated LTspice model, and it closely matches both the designed simulation conditions used for data fitting and independent representative shading scenarios. This framework offers a reliable and efficient tool for predicting shading losses in series-connected half-cut PV modules, facilitating more accurate system design and performance assessment.
{"title":"Partial Shading Losses in Half-Cut PV Modules: Experiments, Circuit Simulation, and an Analytical Loss Function","authors":"Jing-Wu Dong;Jyun-Guei Huang;Yu-Min Lin;Kuan-Wei Lee;Yu-Qian Ye;Che-Yu Lin","doi":"10.1109/JPHOTOV.2025.3642887","DOIUrl":"https://doi.org/10.1109/JPHOTOV.2025.3642887","url":null,"abstract":"This study establishes an integrated approach to quantify partial shading losses in commercial half-cut photovoltaic modules. Systematic indoor experiments were conducted on a 440 W c-Si half-cut module, providing current-voltage data under controlled shading. The data were used to calibrate a detailed LTspice circuit model. Further, an analytical loss function was developed to predict power losses as a function of shaded substring fraction and configuration. This analytical loss function was then refined through empirical fitting to the experimentally validated LTspice model, and it closely matches both the designed simulation conditions used for data fitting and independent representative shading scenarios. This framework offers a reliable and efficient tool for predicting shading losses in series-connected half-cut PV modules, facilitating more accurate system design and performance assessment.","PeriodicalId":445,"journal":{"name":"IEEE Journal of Photovoltaics","volume":"16 2","pages":"242-249"},"PeriodicalIF":2.6,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146223817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
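The paper's fitted analytical loss function is not reproduced in the abstract, but its qualitative shape for a bypass-diode-protected half-cut module can be sketched: below a critical shaded fraction a substring loses power roughly in proportion to the shaded area, while beyond it the bypass diode conducts and the substring's whole contribution is lost. Everything here — the substring count, the critical fraction, the piecewise form — is a hypothetical illustration, not the paper's fitted model:

```python
def shading_loss_fraction(shaded_fracs, f_crit=0.5):
    """Hypothetical power-loss model for a half-cut PV module.

    `shaded_fracs` holds the shaded area fraction of each bypass-diode
    substring. Below `f_crit` the loss grows with shaded area; at or
    above it the bypass diode conducts and the substring is lost
    entirely. Sketches the qualitative shape only.
    """
    n = len(shaded_fracs)
    loss = 0.0
    for s in shaded_fracs:
        loss += (s if s < f_crit else 1.0) / n
    return loss

# Six substrings (three bypass diodes, two half-cut halves): one lightly
# shaded, one heavily shaded, the rest unshaded.
print(shading_loss_fraction([0.2, 0.8, 0.0, 0.0, 0.0, 0.0]))
```

The paper replaces this piecewise guess with a function fitted to the LTspice model that was itself calibrated against indoor measurements.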
Pub Date: 2025-12-23 | DOI: 10.1109/TSTE.2025.3640784
{"title":"IEEE Industry Applications Society Information","authors":"","doi":"10.1109/TSTE.2025.3640784","DOIUrl":"https://doi.org/10.1109/TSTE.2025.3640784","url":null,"abstract":"","PeriodicalId":452,"journal":{"name":"IEEE Transactions on Sustainable Energy","volume":"17 1","pages":"C3-C3"},"PeriodicalIF":10.0,"publicationDate":"2025-12-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11313739","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145808620","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}