The continuous downscaling of silicon transistors has driven exponential improvements in computing performance and energy efficiency, but sub-10 nm channel lengths pose fundamental challenges in speed and power consumption. Emerging materials and architectures offer promising pathways for further miniaturization. Bismuth oxyselenide (Bi2O2Se), an air-stable 2D semiconductor, exhibits high mobility, a suitable bandgap and a native high-κ oxide (Bi2SeO5), resembling silicon and its SiO2 counterpart. These properties suggest compatibility with industrial processes, positioning Bi2O2Se for next-generation high-performance computing. This Review summarizes recent advances in material synthesis, wafer-scale integration and device architectures, highlighting key challenges in the lab-to-fab transition. Finally, a roadmap is proposed to guide future innovations in ultra-scaled, energy-efficient electronics. This Review explores Bi2O2Se as a promising 2D semiconductor for next-generation computing, highlighting its high mobility, suitable bandgap and native high-κ oxide, which enables wafer-scale integration and compatibility with industrial processes, while addressing key challenges in the lab-to-fab transition and proposing a roadmap for ultra-scaled, energy-efficient electronics.
{"title":"2D bismuth oxyselenide semiconductor for future electronics","authors":"Congwei Tan, Junchuan Tang, Xin Gao, Chengyuan Xue, Hailin Peng","doi":"10.1038/s44287-025-00179-1","DOIUrl":"10.1038/s44287-025-00179-1","url":null,"abstract":"The continuous downscaling of silicon transistors has driven exponential improvements in computing performance and energy efficiency, but sub-10 nm channel lengths pose fundamental challenges in speed and power consumption. Emerging materials and architectures offer promising pathways for further miniaturization. Bismuth oxyselenide (Bi2O2Se), an air-stable 2D semiconductor, exhibits high mobility, a suitable bandgap and a native high-κ oxide (Bi2SeO5), resembling silicon and its SiO2 counterpart. These properties suggest compatibility with industrial processes, positioning Bi2O2Se for next-generation high-performance computing. This Review summarizes recent advances in material synthesis, wafer-scale integration and device architectures, highlighting key challenges in the lab-to-fab transition. Finally, a roadmap is proposed to guide future innovations in ultra-scaled, energy-efficient electronics. This Review explores Bi2O2Se as a promising 2D semiconductor for next-generation computing, highlighting its high mobility, suitable bandgap and native high-κ oxide, which enables wafer-scale integration and compatibility with industrial processes, while addressing key challenges in the lab-to-fab transition and proposing a roadmap for ultra-scaled, energy-efficient electronics.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 7","pages":"494-513"},"PeriodicalIF":0.0,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123050","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-28 | DOI: 10.1038/s44287-025-00176-4
Javier Ibanez-Guzman, You Li
Autonomous vehicles rely on both LiDAR and cameras for perception, with each technology offering unique advantages — cameras provide rich contextual information, whereas LiDAR delivers precise depth data. Understanding their trade-offs is crucial for creating reliable and efficient autonomous vehicles.
{"title":"LiDAR and cameras in autonomous driving","authors":"Javier Ibanez-Guzman, You Li","doi":"10.1038/s44287-025-00176-4","DOIUrl":"10.1038/s44287-025-00176-4","url":null,"abstract":"Autonomous vehicles rely on both LiDAR and cameras for perception, with each technology offering unique advantages — cameras provide rich contextual information, whereas LiDAR delivers precise depth data. Understanding their trade-offs is crucial for creating reliable and efficient autonomous vehicles.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 8","pages":"515-516"},"PeriodicalIF":0.0,"publicationDate":"2025-05-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-22 | DOI: 10.1038/s44287-025-00177-3
Matthew T. Flavin, Jose A. Foppiani, Marek A. Paul, Angelica H. Alvarez, Lacey Foster, Dominika Gavlasova, Haobo Ma, John A. Rogers, Samuel J. Lin
Pain management in humans is an unresolved problem with substantial medical, societal and economic implications. Traditional strategies such as opioid-based medications are highly effective but pose many long-term risks, including addiction and overdose. In this Review, we discuss these persistent challenges in medical care along with advances in bioelectronics that enable safer and more effective alternative treatments. Emerging approaches leverage wireless embedded networks and machine learning to accurately detect and quantify the symptoms of pain, establishing a foundation for targeted, on-demand treatment. These platforms offer a powerful complement to wearable and implantable neural interfaces that can control these symptoms with unprecedented spatiotemporal and functional selectivity. Now, emotional and cognitive aspects of pain can be addressed through immersive multisensory engagement with systems for augmented and virtual reality. Trends in diagnostic and interventional technologies show how their integration is well suited to addressing some of the most intractable problems in pain management. Pain is a profound and unresolved health challenge, and current interventions are not sufficient for safe and effective pain management. Bioelectronics presents solutions for monitoring the symptoms of pain and treating these symptoms in a targeted manner.
{"title":"Bioelectronics for targeted pain management","authors":"Matthew T. Flavin, Jose A. Foppiani, Marek A. Paul, Angelica H. Alvarez, Lacey Foster, Dominika Gavlasova, Haobo Ma, John A. Rogers, Samuel J. Lin","doi":"10.1038/s44287-025-00177-3","DOIUrl":"10.1038/s44287-025-00177-3","url":null,"abstract":"Pain management in humans is an unresolved problem with substantial medical, societal and economic implications. Traditional strategies such as opioid-based medications are highly effective but pose many long-term risks, including addiction and overdose. In this Review, we discuss these persistent challenges in medical care along with advances in bioelectronics that enable safer and more effective alternative treatments. Emerging approaches leverage wireless embedded networks and machine learning to accurately detect and quantify the symptoms of pain, establishing a foundation for targeted, on-demand treatment. These platforms offer a powerful complement to wearable and implantable neural interfaces that can control these symptoms with unprecedented spatiotemporal and functional selectivity. Now, emotional and cognitive aspects of pain can be addressed through immersive multisensory engagement with systems for augmented and virtual reality. Trends in diagnostic and interventional technologies show how their integration is well suited to addressing some of the most intractable problems in pain management. Pain is a profound and unresolved health challenge, and current interventions are not sufficient for safe and effective pain management. Bioelectronics presents solutions for monitoring the symptoms of pain and treating these symptoms in a targeted manner.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 6","pages":"407-424"},"PeriodicalIF":0.0,"publicationDate":"2025-05-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123182","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-fidelity wearable bioelectronics aims to establish seamless integration between electronic devices and biological systems to enable real-time health monitoring, disease diagnosis, and multimodal interaction. Central to this integration are the bio–electronic interfaces, which require conformal alignment and optimized electrical and mechanical properties to ensure stable and accurate signal acquisition. Traditional bioelectronic devices frequently fail to achieve good conformity at the microscale and nanoscale because they lack fabrication processes capable of producing customized three-dimensional (3D) microstructures and nanostructures. In this Review, we discuss advances in 3D manufacturing technologies, focusing on techniques that enable the fabrication of cross-scale, multimaterial structures and thereby address key challenges of spatial complexity and mechanical mismatch at the bio–electronic interface. These innovations promote both long-term wearability and high-fidelity signal integrity. Interdisciplinary collaboration — particularly the integration of artificial intelligence — is essential for driving successful transformation in the field. Delivering cost-effective and scalable solutions for the fabrication of high-fidelity bioelectronic devices is crucial to realize their transformative potential in healthcare, human–machine interaction, and personalized medicine. Wearable bioelectronics integrates functional electronic devices with biological systems to enable real-time health monitoring and disease diagnosis. This Review explores advancements in three-dimensional manufacturing technologies for high-fidelity biosensors, addressing challenges related to fabrication, signal integrity, and long-term wearability.
{"title":"Three-dimensional micro- and nanomanufacturing techniques for high-fidelity wearable bioelectronics","authors":"Peidi Fan, Ying Liu, Yuxiang Pan, Yibin Ying, Jianfeng Ping","doi":"10.1038/s44287-025-00174-6","DOIUrl":"10.1038/s44287-025-00174-6","url":null,"abstract":"High-fidelity wearable bioelectronics aims to establish seamless integration between electronic devices and biological systems to enable real-time health monitoring, disease diagnosis, and multimodal interaction. Central to this integration are the bio–electronic interfaces, which require conformal alignment and optimized electrical and mechanical properties to ensure stable and accurate signal acquisition. Traditional bioelectronic devices frequently fail to achieve good conformity at the microscale and nanoscale, lacking fabrication processes feasible for customized three-dimensional (3D) microstructures and nanostructures. In this Review, we discuss advances in 3D manufacturing technologies, focusing on those techniques that, enabling the fabrication of cross-scale, multimaterial structures, address key challenges in spatial complexity and mechanical mismatch at the bio–electronic interface. These innovations promote both long-term wearability and high-fidelity signal integrity. Interdisciplinary collaboration — particularly the integration of artificial intelligence — is essential for driving successful transformation in the field. Delivering cost-effective and scalable solutions for the fabrication of high-fidelity bioelectronic devices is crucial to realize their transformative potential in healthcare, human–machine interaction, and personalized medicine. Wearable bioelectronics integrates functional electronic devices with biological systems to enable real-time health monitoring and disease diagnosis. This Review explores advancements in three-dimensional manufacturing technologies for high-fidelity biosensors, addressing challenges related to fabrication, signal integrity, and long-term wearability.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 6","pages":"390-406"},"PeriodicalIF":0.0,"publicationDate":"2025-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123175","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-12 | DOI: 10.1038/s44287-025-00170-w
Wenhao Xue, Yi Ren, Yi Tang, Ziqi Gong, Tianfang Zhang, Zuobing Chen, Xiaonan Dong, Xuezhi Ma, Ziyu Wang, Heng Xu, Jiaqing Zhao, Yuan Ma
Digital information has permeated all aspects of life, and diverse forms of information exert a profound influence on social interactions and cognitive perceptions. In contrast to the flourishing of digital interaction devices for sighted users, the needs of blind and partially sighted users for digital interaction devices have not been adequately addressed. Current assistive devices often cause frustration in blind and partially sighted users owing to the limited efficiency and reliability of information delivery and the high cognitive load associated with their use. The expected rise in the prevalence of blindness and visual impairment due to global population ageing drives an urgent need for assistive devices that can deliver information effectively and non-visually, and thereby overcome the challenges faced by this community. This Perspective presents three potential directions in assistive device design: multisensory learning and integration; gestural interaction control; and the synchronization of tactile feedback with large-scale visual language models. Future trends in assistive devices for use by blind and partially sighted people are also explored, focusing on metrics for text delivery efficiency and the enhancement of image content delivery. Such devices promise to greatly enrich the lives of blind and partially sighted individuals in the digital age. Blind and partially sighted individuals face considerable challenges when interacting with digital information. This Perspective highlights important bottlenecks in information accessibility, design considerations for future wearable assistive devices and the outlook for improved access to and delivery of information in text and images.
{"title":"Interactive wearable digital devices for blind and partially sighted people","authors":"Wenhao Xue \u0000 (, ), Yi Ren \u0000 (, ), Yi Tang \u0000 (, ), Ziqi Gong \u0000 (, ), Tianfang Zhang \u0000 (, ), Zuobing Chen \u0000 (, ), Xiaonan Dong \u0000 (, ), Xuezhi Ma \u0000 (, ), Ziyu Wang \u0000 (, ), Heng Xu \u0000 (, ), Jiaqing Zhao \u0000 (, ), Yuan Ma \u0000 (, )","doi":"10.1038/s44287-025-00170-w","DOIUrl":"10.1038/s44287-025-00170-w","url":null,"abstract":"Digital information has permeated all aspects of life, and diverse forms of information exert a profound influence on social interactions and cognitive perceptions. In contrast to the flourishing of digital interaction devices for sighted users, the needs of blind and partially sighted users for digital interaction devices have not been adequately addressed. Current assistive devices often cause frustration in blind and partially sighted users owing to the limited efficiency and reliability of information delivery and the high cognitive load associated with their use. The expected rise in the prevalence of blindness and visual impairment due to global population ageing drives an urgent need for assistive devices that can deliver information effectively and non-visually, and thereby overcome the challenges faced by this community. This Perspective presents three potential directions in assistive device design: multisensory learning and integration; gestural interaction control; and the synchronization of tactile feedback with large-scale visual language models. Future trends in assistive devices for use by blind and partially sighted people are also explored, focusing on metrics for text delivery efficiency and the enhancement of image content delivery. Such devices promise to greatly enrich the lives of blind and partially sighted individuals in the digital age. Blind and partially sighted individuals face considerable challenges when interacting with digital information. This Perspective highlights important bottlenecks in information accessibility, design considerations for future wearable assistive devices and the outlook for improved access to and delivery of information in text and images.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 6","pages":"425-439"},"PeriodicalIF":0.0,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123183","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-12 | DOI: 10.1038/s44287-025-00166-6
Zhe Min, Jiewen Lai, Hongliang Ren
The rapid development of generative artificial intelligence and large models, including large vision models (LVMs), has accelerated their wide applications in medicine. Robot-assisted surgery (RAS) or surgical robotics, in which vision has a vital role, typically combines medical images for diagnostic or navigation abilities with robots that have precise operative capabilities. In this context, LVMs could serve as a revolutionary paradigm towards surgical autonomy, accomplishing surgical representations with high fidelity and physical intelligence and enabling high-quality data use and long-term learning. In this Perspective, vision-related tasks in RAS are divided into fundamental upstream tasks and advanced downstream counterparts, elucidating their shared technical foundations with state-of-the-art research that could catalyse a paradigm shift in surgical robotics research for the next decade. LVMs have already been extensively explored to tackle upstream tasks in RAS, exhibiting promising performances. Developing vision foundation models for downstream RAS tasks, which is based on upstream counterparts but necessitates further investigations, will directly enhance surgical autonomy. Here, we outline research trends that could accelerate this paradigm shift and highlight major challenges that could impede progress on the way to the ultimate transformation from ‘surgical robots’ to ‘robotic surgeons’. Robot-assisted surgery relies heavily on vision and generally integrates medical imaging for diagnostic and/or navigation purposes with robots that offer accurate surgical functions. This Perspective discusses how large vision models can enhance vision-related tasks in robot-assisted surgery, transforming ‘surgical robots’ into ‘robotic surgeons’.
{"title":"Innovating robot-assisted surgery through large vision models","authors":"Zhe Min \u0000 (, ), Jiewen Lai \u0000 (, ), Hongliang Ren \u0000 (, )","doi":"10.1038/s44287-025-00166-6","DOIUrl":"10.1038/s44287-025-00166-6","url":null,"abstract":"The rapid development of generative artificial intelligence and large models, including large vision models (LVMs), has accelerated their wide applications in medicine. Robot-assisted surgery (RAS) or surgical robotics, in which vision has a vital role, typically combines medical images for diagnostic or navigation abilities with robots with precise operative capabilities. In this context, LVMs could serve as a revolutionary paradigm towards surgical autonomy, accomplishing surgical representations with high fidelity and physical intelligence and enabling high-quality data use and long-term learning. In this Perspective, vision-related tasks in RAS are divided into fundamental upstream tasks and advanced downstream counterparts, elucidating their shared technical foundations with state-of-the-art research that could catalyse a paradigm shift in surgical robotics research for the next decade. LVMs have already been extensively explored to tackle upstream tasks in RAS, exhibiting promising performances. Developing vision foundation models for downstream RAS tasks, which is based on upstream counterparts but necessitates further investigations, will directly enhance surgical autonomy. Here, we outline research trends that could accelerate this paradigm shift and highlight major challenges that could impede progress in the way to the ultimate transformation from ‘surgical robots’ to ‘robotic surgeons’. Robot-assisted surgery relies heavily on vision and generally integrates medical imaging for diagnostic and/or navigation purposes with robots that offer accurate surgical functions. This Perspective discusses how large vision models can enhance vision-related tasks in robot-assisted surgery transforming ‘surgical robots’ into ‘robotic surgeons’.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 5","pages":"350-363"},"PeriodicalIF":0.0,"publicationDate":"2025-05-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123147","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-09 | DOI: 10.1038/s44287-025-00180-8
Miranda L. Vinay
An article in IEEE Transactions on Mobile Computing presents Wi-Fi-based virtual reality (VR) motion tracking technology with high resolution and millisecond processing.
{"title":"Leveraging Wi-Fi networks for better motion detection","authors":"Miranda L. Vinay","doi":"10.1038/s44287-025-00180-8","DOIUrl":"10.1038/s44287-025-00180-8","url":null,"abstract":"An article in IEEE Transactions on Mobile Computing presents Wi-Fi-based virtual reality (VR) motion tracking technology with high resolution and millisecond processing.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 5","pages":"296-296"},"PeriodicalIF":0.0,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123148","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-09 | DOI: 10.1038/s44287-025-00173-7
Arnaud Verdant, Pierre L. Joly
Optical wavefront shaping compensates for distortions caused by scattering, aberrations or inhomogeneities in the optical medium, enabling precise phase control to enhance light penetration in turbid environments. The integrated phase measurement sensor combines light sensing and modulation at the pixel level within a single device, thereby reducing alignment constraints and bandwidth limitations.
{"title":"Unlocking wavefront control potential with stacked technologies that jointly sense and shape light at pixel level","authors":"Arnaud Verdant, Pierre L. Joly","doi":"10.1038/s44287-025-00173-7","DOIUrl":"10.1038/s44287-025-00173-7","url":null,"abstract":"Optical wavefront shaping compensates distortions caused by scattering, aberrations or inhomogeneities in optical medium, enabling precise phase control to enhance light penetration in turbid environments. The integrated phase measurement sensor combines light sensing and modulation at pixel level within a single device, thereby reducing alignment constraints and bandwidth limitations.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 9","pages":"586-587"},"PeriodicalIF":0.0,"publicationDate":"2025-05-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-05-08 | DOI: 10.1038/s44287-025-00171-9
Kiana Aran, Jiawen Li, Amanda Randles, Yating Wan
Technology research is the driving force of the innovations that shape the world. Sony Group Corporation (Sony) and Nature partnered to launch the Sony Women in Technology Award to recognize three outstanding early- to mid-career researchers from the field of technology. Here, we interviewed the winners of the inaugural 2024 award about the inspirations behind their outstanding research.
{"title":"Meet the winners of the 2024 Sony Women in Technology Award","authors":"Kiana Aran, Jiawen Li, Amanda Randles, Yating Wan","doi":"10.1038/s44287-025-00171-9","DOIUrl":"10.1038/s44287-025-00171-9","url":null,"abstract":"Technology research is the driving force of the innovations that shape the world. Sony Group Corporation (Sony) and Nature partnered together to launch the Sony Women in Technology Award to recognize three outstanding early to mid-career researchers from the field of technology. Here, we interviewed the winners of the inaugural 2024 award on the inspirations behind their outstanding research.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 5","pages":"297-301"},"PeriodicalIF":0.0,"publicationDate":"2025-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123173","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-04-29 | DOI: 10.1038/s44287-025-00172-8
Silvia Conti
An article in Nature Electronics demonstrates how electrodermal activity can serve as a proxy for sweat rate to monitor both physical and mental activities.
{"title":"Electrodermal activity sensors for monitoring mental and physical activity","authors":"Silvia Conti","doi":"10.1038/s44287-025-00172-8","DOIUrl":"10.1038/s44287-025-00172-8","url":null,"abstract":"An article in Nature Electronics demonstrates how electrodermal activity can serve as a proxy for sweat rate to monitor both physical and mental activities.","PeriodicalId":501701,"journal":{"name":"Nature Reviews Electrical Engineering","volume":"2 5","pages":"295-295"},"PeriodicalIF":0.0,"publicationDate":"2025-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145123171","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}