Pub Date: 2025-11-13 | eCollection Date: 2025-12-01 | DOI: 10.1016/j.mex.2025.103723
Dongdong Zhu, Su Mei Liu, Aude Leynaert, Paul Tréguer, Morgane Gallinari, Heting Zhou, Jill N Sutton
This study describes a wet chemical extraction protocol for measuring biogenic silica (bSi) in sediments from diverse marine environments. The protocol lists the reagents, materials, equipment, and sample preparation procedures, and provides a detailed explanation of the methods for measuring alkaline-leachable silicon (Si) and calculating bSi content. Although the protocol was primarily developed for measuring bSi in sediments from the Chinese marginal seas, it was also validated using sediments from the Chesapeake Bay, the Atlantic Ocean, and the Southern Ocean. The protocol can be used to quantify bSi in recently deposited sediments and in aged sediments from the Holocene period. It contributes to ongoing efforts to minimize the methodological biases that exist in bSi quantification and in the evaluation of bSi burial flux, thereby improving our understanding of Si cycling in the modern ocean. • This protocol provides a step-by-step wet chemical extraction procedure and the measurement of dissolved Si in an alkaline solution using a spectrophotometer. • This protocol is easy to set up and reproduce, and determines bSi content with high precision. • The protocol can be used to determine bSi in sediments of marginal seas and the open ocean.
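The abstract does not reproduce the calculation itself. In the widely used time-course approach for alkaline digestions, dissolved Si is measured in subsamples taken over several hours and the late, mineral-dominated linear phase is extrapolated back to t = 0, with the intercept taken as the bSi content. A minimal sketch of that intercept calculation (the function name and all example values are hypothetical, not from this protocol):

```python
def bsi_intercept(times_h, si_per_g):
    """Fit a least-squares line through the late, mineral-dominated phase
    of an alkaline time-course digestion (dissolved Si vs. time) and
    return the t = 0 intercept, taken as the bSi content."""
    n = len(times_h)
    mx = sum(times_h) / n
    my = sum(si_per_g) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times_h, si_per_g))
             / sum((x - mx) ** 2 for x in times_h))
    return my - slope * mx

# Hypothetical subsamples at 2-5 h: bSi has fully dissolved by 2 h, so
# the later rise reflects slow leaching of lithogenic silica.
print(bsi_intercept([2.0, 3.0, 4.0, 5.0], [105.0, 110.0, 115.0, 120.0]))  # 95.0
```

The intercept (95 µmol Si/g here) separates the rapidly dissolving biogenic pool from the slow lithogenic background that the slope represents.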
Title: A wet chemical extraction protocol for measuring biogenic silica in sediments of marginal seas and open ocean. MethodsX 15 (2025): 103723. DOI: 10.1016/j.mex.2025.103723. PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12670883/pdf/
Radiographs are essential in clinical dentistry because they provide information that is invisible during an oral inspection. These images, however, suffer from excess noise, low resolution, and poor contrast, which impair diagnostic accuracy. This study presents a two-stage pipeline combining an Enhanced Super-Resolution GAN (ESRGAN) for radiograph enhancement with YOLOv8 for multi-class dental anomaly detection. The ESRGAN (enhanced super-resolution generative adversarial network with adaptive dual perceptual loss) improves image detail and sharpens resolution. A customized YOLOv8 object detection model is then trained to detect and classify findings into six important dental conditions: Caries, Crown, Root Canal Treated (RCT), Restoration, Normal, and Badly Decayed teeth. The ESRGAN-enhanced images demonstrated high visual fidelity, achieving a Peak Signal-to-Noise Ratio (PSNR) of 28.7 dB and a Structural Similarity Index (SSIM) of 0.91. The YOLOv8 model analyzes the images after they have been enhanced by ESRGAN. Evaluated on 100 test images, it achieved an overall mean Average Precision (mAP@0.5) of 56.9 % and mAP@0.5:0.95 of 41.6 %. For Crown detection, the model achieved a Sensitivity (Recall) of 0.942 and a Specificity of 0.919. Detection of Caries and Badly Decayed teeth remained challenging, with lower sensitivity scores of 0.174 and 0.355, respectively. Specificity across classes ranged from 0.361 (RCT) to 0.887 (Caries), indicating variable false-positive rates. The proposed pipeline demonstrated clinical potential by improving subtle structural visibility and supporting automated dental assessment. Future work will explore class-specific augmentation and explainability tools to increase clinical utility. ESRGAN significantly improved the resolution and clarity of dental X-rays, enabling better visualization of fine details for accurate diagnosis. YOLOv8 effectively identified six dental conditions, achieving high accuracy for distinct classes such as crowns and restorations.
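The reported PSNR of 28.7 dB summarizes pixel-level fidelity between the enhanced output and a reference image. As a reminder of what that number measures, here is a minimal PSNR sketch over flat pixel lists (the sample values are illustrative, not from the paper):

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB between two equal-size images,
    given as flat lists of pixel intensities."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / mse)

# Small illustrative patch: an MSE of 2.5 against an 8-bit range
# gives roughly 44 dB; larger errors push the value down.
print(round(psnr([50, 60, 70, 80], [52, 58, 71, 79]), 2))
```

Higher PSNR means the enhanced image deviates less, on average, from the reference; SSIM complements it by comparing local structure rather than raw pixel error.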
Title: Classification of periapical dental X-ray using the YOLOv8 deep learning model. Authors: Archana Y Chaudhari, Prajwal Birwadkar, Sagar Joshi, Yash Verma, Rutuja Sindgi. MethodsX 15 (2025-11-13): 103721. DOI: 10.1016/j.mex.2025.103721. PMC PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12681648/pdf/
Mobile Edge Computing (MEC) is an advanced technology with the potential to decentralize and transform how mobile phone networks operate. MEC servers are embedded in cell phone base stations, where they process workloads offloaded from mobile applications. However, the user experience and Quality of Service (QoS) are affected by an unavoidable optimization problem. MEC servers attached to small or large base stations offer a practical and efficient option for transferring workloads. By shifting tasks from mobile devices to edge servers, MEC can provide low-latency computing services and high throughput. This work addresses the scheduling of security-critical workflow tasks in the MEC environment, significantly improving the effective computing power of devices by moving service workflows from mobile devices to the edges of the mobile network. The main objective during the scheduling of critical tasks is the minimization of workflow execution time and total energy consumption. The major contributions of the proposed security-critical task scheduling approach in resource-limited MEC are listed here.
• To present a security-critical task scheduling method in resource-limited MEC that improves offloading performance and energy efficiency and reduces latency.
• To protect the security-critical features of tasks while scheduling them in resource-limited MEC, enhancing the user experience and quality of service.
• To schedule security-critical tasks in resource-limited MEC using the developed HGR-GJOS, which minimizes workflow execution time and total device energy consumption by optimizing which tasks are assigned to which machine.
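HGR-GJOS itself is a metaheuristic whose internals are not given in this summary. To make the optimization objective concrete, here is a hypothetical greedy baseline that assigns each task to the machine minimizing a weighted sum of added completion time and an energy term (all names and numbers are illustrative, not the authors' algorithm):

```python
def greedy_schedule(task_loads, machine_speeds, energy_rates):
    """Assign each task to the machine that currently minimizes the sum of
    its completion time and an energy cost (a toy baseline, not HGR-GJOS)."""
    finish = [0.0] * len(machine_speeds)  # current finish time per machine
    assignment = []
    for load in task_loads:
        def cost(m):
            runtime = load / machine_speeds[m]
            return (finish[m] + runtime) + energy_rates[m] * runtime
        best = min(range(len(machine_speeds)), key=cost)
        assignment.append(best)
        finish[best] += load / machine_speeds[best]
    return assignment, max(finish)  # task-to-machine map and makespan

# Two equal tasks, machine 1 twice as fast: the scheduler spreads them.
print(greedy_schedule([4.0, 4.0], [1.0, 2.0], [0.0, 0.0]))
```

A metaheuristic such as HGR-GJOS searches over many candidate assignments of this kind instead of committing greedily, trading computation for a lower combined time-plus-energy objective.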
Title: An efficient framework for scheduling security-critical tasks in resource-limited mobile edge computing using hybridized gold rush with golden jackal optimization strategy. Authors: Kapil Vhatkar, Shweta Koparde, Neeta Deshpande, Sonali Kothari, Sonali Patil, Ranjana Kale, Madhavi Nimkar (Darokar), Pooja Bagane. MethodsX 15 (2025-11-13): Article 103720. DOI: 10.1016/j.mex.2025.103720.
Pub Date: 2025-11-13 | DOI: 10.1016/j.mex.2025.103724
Tin Zar Oo, Usa Wannasingha Humphries
Land use and land cover (LULC) change is a major anthropogenic factor influencing flood behavior and hydrological processes. This systematic review synthesizes two decades (2005–2025) of research on hydrological modeling approaches used to assess flood responses under LULC transitions. A total of 114 publications were retrieved from the Scopus database, and after PRISMA-based screening, 78 peer-reviewed studies were analyzed using bibliometric and content mapping. The review categorizes hydrological models by spatial scale, process representation, and sensitivity to LULC dynamics. Findings consistently indicate that urban expansion, deforestation, and vegetation loss intensify surface runoff, peak flow, and flood frequency. Despite advancements, significant challenges remain, particularly data scarcity, model calibration, and the limited integration of socio-economic variables. Emerging tools such as Remote Sensing (RS), Geographic Information Systems (GIS), and machine learning, especially within platforms like Google Earth Engine (GEE), enhance LULC detection accuracy and flood prediction capability. The study proposes an integrated decision framework linking bibliometric trends with model selection strategies, enabling researchers to align model choice with data availability and landscape characteristics. Overall, this review emphasizes the importance of interdisciplinary, data-driven modeling to strengthen flood resilience in rapidly transforming land systems.
Title: Hydrological modeling of flood impacts under land use and land cover change: A systematic review of tools, trends, and challenges. MethodsX 16 (2025-11-13): Article 103724. DOI: 10.1016/j.mex.2025.103724.
Kinesin-1 is a dimeric motor protein that moves toward the microtubule plus-end. However, its minimal motor domain—a single catalytic head—is sufficient to support directional motility in vitro, raising fundamental questions about how directionality and force generation are encoded within the motor domain. Here, we describe a method for tether-scanning the kinesin motor domain using an in vitro microtubule gliding assay. A cysteine-light kinesin-1 motor domain is covalently tethered to a glass surface through linkers differing in length and flexibility, such as PEG or DNA, attached at defined positions including the C-terminus or surface-exposed loops. Fluorescently-labelled microtubules glide over the kinesin-coated surface, allowing direct observation under fluorescence microscopy. By systematically altering tether geometry and mechanical properties, this method enables precise analysis of how spatial constraints affect motility parameters such as velocity and direction. The protocol has the potential to be adapted to other motor proteins, although such applications may require careful optimisation of labelling sites to preserve motor function. This approach provides a platform for studying the intrinsic motility of the motor domain.
• In vitro method to study how tether geometry affects kinesin-1 motility.
• Platform for analysing motor function; adaptable with careful optimisation.
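Motility parameters such as gliding velocity are typically extracted from the tracked positions of fluorescently labelled microtubules across frames. A minimal sketch of that velocity calculation (the tracking step and all values are assumptions, not part of the published protocol):

```python
import math

def gliding_velocity(track, dt):
    """Mean frame-to-frame speed of a microtubule centroid track.
    track: list of (x, y) positions in micrometres; dt: frame interval in s."""
    steps = [math.dist(a, b) for a, b in zip(track, track[1:])]
    return sum(steps) / (dt * len(steps))

# A microtubule advancing 0.5 um per 1 s frame glides at 0.5 um/s.
print(gliding_velocity([(0.0, 0.0), (0.5, 0.0), (1.0, 0.0)], 1.0))
```

Comparing this velocity across tether lengths, flexibilities, and attachment positions is what turns the gliding assay into a tether-scanning experiment.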
Title: In vitro motility-based tether-scanning of the kinesin motor domain. Authors: Rieko Sumiyoshi, Masahiko Yamagishi, Junichiro Yajima. MethodsX 15 (2025-11-11): Article 103719. DOI: 10.1016/j.mex.2025.103719.
Pub Date: 2025-11-10 | DOI: 10.1016/j.mex.2025.103713
Adwitiya Mukhopadhyay, Divyashree D P, Ramya C A, Hijaz Ahmad, Taha Radwan, Soumik Das
The present study addresses the rising importance of mental health by developing a novel healthcare plan. We integrate physiological data from sensors, such as Heart Rate (HR) and Galvanic Skin Response (GSR), to predict and manage anxiety. These sensors provide non-invasive insights into the complex relationship between physiological reactions and mental well-being. To analyze the collected data, we developed a novel algorithm, Regression Based Random Forest (RBRF). Using a large-scale dataset, we empirically validated the effectiveness of our approach, achieving 95 % accuracy in identifying anxiety. Our findings demonstrate the potential of sensor-based technologies and advanced algorithms to empower individuals to proactively monitor and manage their mental health. This approach holds significant promise for improving the precision and effectiveness of mental health care.
• The study aims to improve mental healthcare by incorporating physiological data (Heart Rate and Galvanic Skin Response) to detect and potentially treat anxiety.
• Employs a novel algorithm, Regression Based Random Forest (RBRF), to analyze the collected data and identify anxiety.
• Achieved high accuracy (95 %) in identifying anxiety using the RBRF algorithm on a large dataset.
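The RBRF algorithm itself is not detailed in this summary. Its final step, turning a continuous regression output into binary anxiety labels and scoring accuracy, can be sketched as follows (the threshold and scores are hypothetical, and a real forest would produce the scores):

```python
def classify_anxiety(scores, threshold=0.5):
    """Binarize continuous regression outputs: 1 = anxious, 0 = calm."""
    return [1 if s >= threshold else 0 for s in scores]

def accuracy(predicted, actual):
    """Fraction of labels where the prediction matches ground truth."""
    return sum(p == a for p, a in zip(predicted, actual)) / len(actual)

# Four hypothetical samples: three of four labels recovered -> 0.75.
pred = classify_anxiety([0.9, 0.2, 0.7, 0.4])
print(accuracy(pred, [1, 0, 1, 1]))
```

Sweeping the threshold trades sensitivity against specificity, which is why a regression-then-threshold design can be tuned to the clinical cost of missed anxiety versus false alarms.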
Title: Bio-signal induced emotion monitoring and detection of anxiety: A sensor-driven approach with regression based random forest. MethodsX 15 (2025-11-10): Article 103713. DOI: 10.1016/j.mex.2025.103713.
Pub Date: 2025-11-10 | DOI: 10.1016/j.mex.2025.103718
Wisnowan Hendy Saputra, Rinda Nariswari, Matthew Owen
Recurrent Neural Networks (RNNs), particularly their Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) variants, are standard methods for modeling sequential data. However, their robustness is often limited when faced with non-stationary and heterogeneous time series data. This limitation is largely due to their reliance on symmetric loss functions such as mean squared error, which implicitly assume homogeneous data patterns. To address this, we propose a new framework, the Expectile-based Recurrent Neural Network (E-RNN), which integrates expectile regression into the RNN architecture. We implement and compare two E-RNN variants, E-LSTM and E-GRU, to obtain the best forecast. By leveraging the asymmetric least squares loss function, the E-RNN model can represent various parts of the conditional data distribution, not just its central tendency. This allows forecasting across scenarios, ranging from pessimistic to optimistic, by adjusting the asymmetry parameter (τ), a value in the range (0, 1) where τ < 0.5 yields pessimistic and τ > 0.5 yields optimistic forecasts. We demonstrate this methodology by forecasting Indonesia's quarterly economic growth data from 2001 to 2025. Empirical results show that the E-RNN model consistently exhibits superior performance, evidenced by lower Expectile-based Generalized Approximate Cross Validation (EGACV) scores for model selection and higher forecast accuracy. This superiority is particularly pronounced on the more volatile quarter-to-quarter (qtq) data, highlighting the framework's ability to adapt to complex data dynamics and improve forecast reliability under uncertain conditions.
• Integrates expectile properties into RNN architectures to create models that adapt to changes in the data distribution and are not tied to the homogeneity assumption.
• Introduces a robust model selection criterion, Expectile-based Generalized Approximate Cross Validation (EGACV), which effectively balances model fit with complexity within an expectile framework.
• Generates a set of forecasts for various outcome scenarios (e.g., pessimistic, optimistic) by adjusting a single asymmetry parameter (τ), moving beyond single-point estimation.
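The asymmetric least squares (expectile) loss at the core of this framework weights squared residuals by τ when the model under-predicts and by 1 − τ when it over-predicts. A minimal sketch of that loss (the function name and sample values are assumptions):

```python
def expectile_loss(y_true, y_pred, tau):
    """Asymmetric least squares: a residual u = y_true - y_pred is
    weighted tau when positive (under-prediction) and 1 - tau when
    negative (over-prediction)."""
    total = 0.0
    for yt, yp in zip(y_true, y_pred):
        u = yt - yp
        w = tau if u >= 0 else 1.0 - tau
        total += w * u * u
    return total / len(y_true)

# With tau = 0.8, the under-shoot (u = 2) is penalized four times as
# heavily as the over-shoot (u = -1), pushing fits toward optimism.
print(expectile_loss([3.0, 2.0], [1.0, 3.0], 0.8))
```

Training the same recurrent network under several τ values yields the family of pessimistic-to-optimistic forecasts the abstract describes; τ = 0.5 recovers ordinary mean squared error up to a constant factor.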
Title: On the recurrent neural network model with robust expectile-based loss function in economic data forecasting. MethodsX 15 (2025-11-10): Article 103718. DOI: 10.1016/j.mex.2025.103718.
Pub Date: 2025-11-09 | DOI: 10.1016/j.mex.2025.103714
Brianna J. Jamison, Rekha Pandey, Matheus Morais, Amanda A. Cardoso, Kevin Garcia
Upland cotton (Gossypium hirsutum L.) is a major crop in the United States. Understanding how cotton roots develop and respond to abiotic and biotic factors is crucial for improving nutrient acquisition, enhancing crop resilience under stress, and optimizing overall crop production. Split-root techniques have been developed for numerous plant species, providing a controlled framework for monitoring root development and investigating systemic and local plant responses to various environmental factors. However, a standardized cotton-specific protocol optimized for laboratory studies has yet to be established. This protocol enables the rapid establishment of split-root systems in eight upland cotton varieties within four weeks after germination. This is accomplished by cutting the primary root and immediately transplanting the seedlings into hydroponic conditions to promote lateral root growth, after which the root system can be divided equally into separate compartments. Once established, each compartment can be subjected to different, independent treatments. The method was validated across all eight varieties by quantifying the difference in root dry weight between the two halves of each plant's root system and analyzing those differences across varieties. Kruskal-Wallis and Wilcoxon signed-rank tests confirmed no significant difference between the roots of the two sides for any cultivar, confirming the method's reliability.
-
We developed a standardized split-root protocol tailored for upland cotton using hydroponics.
-
This protocol was performed on eight varieties within four weeks after germination.
-
We validated the method by comparing root biomass distribution between compartments to confirm reliability.
{"title":"Establishment of a rapid split-root assay in hydroponic conditions for eight upland cotton varieties","authors":"Brianna J. Jamison, Rekha Pandey, Matheus Morais, Amanda A. Cardoso, Kevin Garcia","doi":"10.1016/j.mex.2025.103714","DOIUrl":"10.1016/j.mex.2025.103714","url":null,"abstract":"<div><div>Upland cotton (<em>Gossypium hirsutum</em> L.) is a major crop in the United States. Understanding how cotton roots develop and respond to abiotic and biotic factors is crucial for improving nutrient acquisition, enhancing crop resilience under stress, and optimizing overall crop production. Split-root techniques have been developed for numerous plant species, providing a controlled framework for monitoring root development, and investigating systemic and local plant responses to various environmental factors. However, a standardized cotton-specific protocol optimized for laboratory studies has yet to be established. This protocol facilitates the rapid establishment of split-root systems in eight upland cotton varieties within four weeks after germination. This is accomplished by cutting the primary root and immediately transplanting the seedlings into hydroponic conditions to promote lateral root growth, after which the root system can be divided equally into separate compartments. Once established, each compartment can be subjected to different, independent treatments. This method was validated across all eight varieties by quantifying the difference in root dry weight between the two halves of each plant's root system and analyzing those differences across varieties. 
Statistical analysis was performed and Kruskal-Wallis and Wilcoxon signed-rank tests confirmed no significant difference between the roots of the two sides for any cultivar, thus confirming this method's reliability.<ul><li><span>-</span><span><div>We developed a standardized split-root protocol tailored for upland cotton using hydroponics.</div></span></li><li><span>-</span><span><div>This protocol was performed on eight varieties within four weeks after germination.</div></span></li><li><span>-</span><span><div>We validated the method by comparing root biomass distribution between compartments to confirm reliability.</div></span></li></ul></div></div>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"Article 103714"},"PeriodicalIF":1.9,"publicationDate":"2025-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145516619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
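The validation step described above — testing whether the two halves of a plant's root system differ systematically in dry weight — can be sketched with a Wilcoxon signed-rank test. The data below are simulated for illustration only; the weights, sample size, and variance are assumptions, not the study's measurements:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(42)
# Hypothetical root dry weights (g) for the two compartments of ten
# plants of a single variety; side_b differs from side_a only by noise,
# i.e. no systematic offset between the halves.
side_a = rng.normal(0.50, 0.05, size=10)
side_b = side_a + rng.normal(0.0, 0.02, size=10)

# Paired, non-parametric test on the per-plant differences.
stat, p = wilcoxon(side_a, side_b)
print(f"Wilcoxon signed-rank p = {p:.3f}")  # p > 0.05 suggests comparable halves
```

In the protocol's logic, a non-significant result for every cultivar is what supports the claim that the split produces two equivalent root compartments.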
Pub Date : 2025-11-09eCollection Date: 2025-12-01DOI: 10.1016/j.mex.2025.103717
Kaiyin Kelly Zhang, Dominic Locurto, Matthew Belden, Mark Price, Cathal J Kearney, Meghan E Huber
Temperature treatment is commonly used to manipulate circadian rhythms in cell and tissue cultures. However, it is often laborious and error-prone in prolonged studies. We present the ThermoClock, an Arduino-based temperature regulation system designed for precise, automated temperature control in ex vivo and in vitro studies, particularly circadian rhythm research. Built with off-the-shelf components and open-source software, ThermoClock is easy to fabricate, costing approximately $450 and requiring under 10 h to assemble. Its modular design enables simultaneous control of multiple conditions, reducing manual intervention and user error. Individual ThermoClock modules use a Proportional-Integral-Derivative (PID) controller and off-the-shelf electronics to provide real-time, precise temperature control while remaining low-cost and simple to construct and operate. A fully assembled ThermoClock can operate up to five temperature modules, greatly enhancing experimental versatility and throughput. An Arduino script is provided to automate temperature control based on user-input setpoint schedules. ThermoClock is designed to function inside an incubator and shows significantly faster heating and cooling (p < 0.001) than a programmable incubator. It reaches the target temperature within five minutes after a setpoint change.
{"title":"Around the <i>ThermoClock</i>: A precision automated temperature control system for Ex Vivo circadian studies.","authors":"Kaiyin Kelly Zhang, Dominic Locurto, Matthew Belden, Mark Price, Cathal J Kearney, Meghan E Huber","doi":"10.1016/j.mex.2025.103717","DOIUrl":"10.1016/j.mex.2025.103717","url":null,"abstract":"<p><p>Temperature treatment is commonly used to manipulate circadian rhythms in cell and tissue cultures. However, it is often laborious and error-prone in prolonged studies. We present the <i>ThermoClock</i>, an Arduino-based temperature regulation system designed for precise, automated temperature control in ex vivo and in vitro studies, particularly circadian rhythm research. Built with off-the-shelf components and open-source software, ThermoClock is easy to fabricate, costing approximately $450 and requiring under 10 h to assemble. Its modular design enables simultaneous control of multiple conditions, reducing manual intervention and user error. Individual ThermoClock modules use a Proportional-Integral-Derivative (PID) controller and off-the-shelf electronics to provide real-time, precise temperature control, while being low-cost and accessible to construct and operate. An assembled ThermoClock can operate up to five temperature modules, greatly enhancing experimental versatility and throughput. An Arduino script is provided to automate temperature control based on user-input setpoint schedules. ThermoClock is designed to function in an incubator and shows significantly faster heating and cooling (<i>p</i> < 0.001) compared to a programmable incubator. 
It reaches the target temperature within five minutes after a setpoint change.</p>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"103717"},"PeriodicalIF":1.9,"publicationDate":"2025-11-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12670885/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145668847","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
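The PID control loop that each ThermoClock module runs can be sketched in simulation. The following is a generic discrete PID controller with integral anti-windup driving a toy first-order thermal plant; the gains, plant constants, and 37 °C setpoint are illustrative assumptions, not the published Arduino firmware or tuning:

```python
class PID:
    """Minimal discrete PID controller with a clamped integral
    term (anti-windup) to avoid overshoot after long saturation."""
    def __init__(self, kp, ki, kd, dt, i_max=2.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.i_max = i_max
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral = max(-self.i_max,
                            min(self.i_max, self.integral + error * self.dt))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order plant: heater power warms the chamber, which leaks
# heat back to ambient. One loop iteration = one second of simulated time.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=1.0)
temp, ambient, setpoint = 25.0, 25.0, 37.0
for _ in range(300):
    power = max(0.0, min(1.0, pid.update(setpoint, temp)))  # duty cycle 0..1
    temp += 0.5 * power - 0.02 * (temp - ambient)
print(round(temp, 1))
```

The clamp on the controller output mirrors a heater that can only be driven between 0 and 100 % duty cycle; swapping the setpoint mid-run is how a setpoint schedule like the one in the provided Arduino script would be exercised.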
Smoking remains a persistent global health concern with complex behavioral dynamics influenced by memory and past experiences. This study formulates and analyses a fractional-order mathematical model of smoking behavior using the Caputo derivative to capture memory effects and non-local interactions. The well-posedness of the model is ensured through rigorous proofs of existence and uniqueness of solutions. To assess the system's resilience, Hyers-Ulam-Rassias stability is investigated under small perturbations. To address potential chaotic behavior, we implement chaos control techniques, stabilizing the system for reliable long-term predictions. A novel Newton polynomial-based numerical scheme is developed to efficiently approximate solutions, validated through extensive simulations. Our results demonstrate that fractional-order modeling provides deeper insights into smoking dynamics compared to classical approaches. Some key features of the proposed method include: • Investigating Hyers-Ulam-Rassias stability to analyze robustness against perturbations. • Applying chaos control techniques to manage and stabilize chaotic system behavior. • Developing and implementing a Newton polynomial-based numerical scheme for efficient solution approximation.
{"title":"Smoking dynamics with media awareness to control the prevalence of bad effect through fractional operator study.","authors":"Muhammad Farman, Cicik Alfiniyah, Khadija Jamil, Aceng Sambas, Nashrul Millah, Ahmadin","doi":"10.1016/j.mex.2025.103710","DOIUrl":"10.1016/j.mex.2025.103710","url":null,"abstract":"<p><p>Smoking remains a persistent global health concern with complex behavioral dynamics influenced by memory and past experiences. This study formulates and analyses a fractional-order mathematical model of smoking behavior using the Caputo derivative to capture memory effects and non-local interactions. The well-posedness of the model is ensured through rigorous proofs of existence and uniqueness of solutions. To assess the system's resilience, Hyers-Ulam-Rassias stability is investigated under small perturbations. To address potential chaotic behavior, we implement chaos control techniques, stabilizing the system for reliable long-term predictions. A novel Newton polynomial-based numerical scheme is developed to efficiently approximate solutions, validated through extensive simulations. Our results demonstrate that fractional-order modeling provides deeper insights into smoking dynamics compared to classical approaches. 
Some key features of the proposed method include:•Investigating Hyers-Ulam-Rassias stability to analyze robustness against perturbations.•Applying chaos control techniques to manage and stabilize chaotic system behavior.•Developing and implementing a Newton polynomial-based numerical scheme for efficient solution approximation.</p>","PeriodicalId":18446,"journal":{"name":"MethodsX","volume":"15 ","pages":"103710"},"PeriodicalIF":1.9,"publicationDate":"2025-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12664066/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145648926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
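The memory effect that motivates the Caputo operator can be illustrated numerically. The sketch below uses the classical L1 discretization of the Caputo derivative (a standard textbook scheme, not the paper's Newton polynomial-based method) applied to the fractional relaxation equation D^α y = −y: every step sums over the entire solution history, which is exactly the non-local behavior the abstract describes. Step size, α, and horizon are illustrative:

```python
import math

def caputo_l1(f, y0, alpha, dt, steps):
    """Explicit L1 finite-difference scheme for the Caputo equation
    D^alpha y = f(y), 0 < alpha < 1. Each new value depends on a
    weighted sum over all past increments (the memory term)."""
    c = dt**alpha * math.gamma(2 - alpha)
    y = [y0]
    for n in range(1, steps + 1):
        memory = sum(
            ((j + 1)**(1 - alpha) - j**(1 - alpha)) * (y[n - j] - y[n - j - 1])
            for j in range(1, n)
        )
        y.append(y[n - 1] - memory + c * f(y[n - 1]))
    return y

# Fractional relaxation D^0.8 y = -y, y(0) = 1: decays like the
# Mittag-Leffler function, i.e. slower than the classical exponential.
ys = caputo_l1(lambda y: -y, 1.0, 0.8, 0.05, 200)
print(ys[-1])
```

As α → 1 the memory weights collapse onto the most recent increment and the scheme reduces to ordinary explicit Euler, which is why fractional-order models generalize, rather than replace, the classical ones.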