Citizen participation in climate politics. Drivers and barriers of Climate Assemblies in Europe.
Pub Date: 2026-01-02 · eCollection Date: 2026-01-01 · DOI: 10.12688/openreseurope.21452.1 · Open Research Europe 6:3
Erich Griessler, Maria Alonso Raposo, Lucia Cristea, Floridea Di Ciommo, Elisabeth Frankus, Liliana Denisa Andrei, Shauna Stack
Background: Different forms of participation have been employed to engage citizens in the planning of climate change mitigation and adaptation strategies. Arguments in favor of citizen participation highlight the limitations of traditional democratic practices in addressing climate change. Climate Assemblies (CAs), a form of deliberative democracy, have become an increasingly popular way for citizens and politicians to collaborate on climate decision-making.
Research questions: Using a mixed methods approach, this paper poses three questions. (1) To what extent do European cities and regions engage in CAs, and how are they embedded in policymaking? (2) What drives and impedes CAs? (3) To what extent are policymakers in European cities and regions ready and able to incorporate CAs and their results into policies?
Results: Findings reveal an increase in CAs in Europe at different levels, primarily commissioned by public authorities. However, the connection between CAs and policymaking differs across countries. The research revealed the significance of political culture, the specific roles of citizens, policymakers, and administrations within it, and the importance of political backing for CAs. Important drivers of CAs include measures that safeguard relevance to citizens, equality, inclusive access, and impact. Barriers include limited knowledge about climate change and deliberative democracy, a lack of inclusiveness in CAs, and asymmetries in political power. Survey data show that climate policies have become established practice in many European cities and regions and that various engagement approaches are used to develop them. However, only 9.4% of respondents stated that city officials developed climate change policies with stakeholder input, including citizens. Citizen participation is infrequent, and involvement in policy development and implementation is unequally distributed, favoring some groups over others. While some results of stakeholder and citizen engagement activities were adopted, recommendations were not always translated into policies.
Conclusions: Currently, CAs are the exception rather than the norm across Europe.
{"title":"Citizen participation in climate politics. Drivers and barriers of Climate Assemblies in Europe.","authors":"Erich Griessler, Maria Alonso Raposo, Lucia Cristea, Floridea Di Ciommo, Elisabeth Frankus, Liliana Denisa Andrei, Shauna Stack","doi":"10.12688/openreseurope.21452.1","DOIUrl":"https://doi.org/10.12688/openreseurope.21452.1","url":null,"abstract":"<p><strong>Background: </strong>Different forms of participation have been employed to engage citizens in the planning of climate change mitigation and adaptation strategies. Arguments in favor of citizen participation highlight the limitations of traditional democratic practices to address climate change. Climate Assemblies (CAs), a form of deliberative democracy, have become an increasingly popular way for citizens and politicians to collaborate on climate decision-making.</p><p><strong>Research questions: </strong>Using a mixed methods approach, this paper poses three questions. (1) To what extent do European cities and regions engage in CAs, and how are they embedded in policymaking? (2) What drives and impedes CAs? (3) To what extent are policymakers in European cities and regions ready and able to incorporate CAs and their results into policies?</p><p><strong>Results: </strong>Findings reveal an increase in CAs in Europe on different levels, primarily commissioned by public authorities. However, the connection between CAs and policymaking differs across countries. Research revealed the significance of political culture, the specific roles of citizens, policymakers and administration therein, and the importance of political backing of CAs. Important drivers of CAs include measures that safeguard relevance to citizens, equality, inclusive access, and impact. Barriers include knowledge about climate change and deliberative democracy, lacking inclusiveness of CAs and asymmetry in political power. Survey data shows that climate policies have become established practices in many European cities and regions and that various engagement approaches are used to develop them. However, only 9.4% of respondents stated that city officials developed climate change policies with stakeholder input, including citizens. Citizen participation is infrequent, and involvement in policy development and implementation is unequally distributed, favoring some groups over others. While some results of stakeholder and citizen engagement activities were adopted, recommendations were not always translated into policies.</p><p><strong>Conclusions: </strong>Currently, CAs are rather an exception than the norm across Europe.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"6 ","pages":"3"},"PeriodicalIF":0.0,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873538/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146144859","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Drift detection on feature attributions for monitoring visual reinforcement learning models in maritime port surveillance.
Pub Date: 2026-01-02 · eCollection Date: 2026-01-01 · DOI: 10.12688/openreseurope.22116.1 · Open Research Europe 6:2
Francisco Javier Iriarte, Beatrice Azoubel, Adrián Carrizo-Pérez, Andrés Chica Linares, Luis Unzueta, Ignacio Arganda-Carreras
Background: Maritime activity is expanding globally, increasing the demand for robust port security systems capable of detecting illegal trafficking. Given the growing sophistication of smuggling methods, law enforcement agencies require advanced surveillance and prevention technologies. In this context, initiatives such as the SMAUG project aim to deliver integrated surveillance capabilities coordinated by a high-level deep reinforcement learning (DRL) decision-making system that operates on image-based environmental representations. Despite their effectiveness, DRL models are closed boxes, which complicates continuous model monitoring (CMM). Conventional drift detection captures shifts in input or output distributions yet often fails to explain the underlying problem. Explainable AI (XAI) techniques offer a complementary approach, providing insights into the agent's inner workings and enabling monitoring of the concept rather than just the data.
Methods: We propose FADMON, an XAI-driven concept drift detection method for image-based models. FADMON performs statistical drift tests on feature attributions to detect deviations in learned policies. We demonstrate how FADMON can enhance CMM with a three-stage model monitoring architecture that enables semi-supervised explainable model monitoring. We validate our approach with SMAUG's decision-making DRL model on a simulated maritime port surveillance environment under multiple unforeseen scenarios.
Results: FADMON consistently flags drift in all drifted scenarios, with mean p-values of 0.000 and no variance across 30 repetitions. On non-drifted scenarios, its mean p-values (0.553 ± 0.215) are lower than those of established drift detection methodologies such as prior probability shift detection (0.65 ± 0.000), but remain well above the standard 0.05 threshold.
Conclusions: FADMON adds an explainability layer to the monitoring system and supports detection of changes in how the model interprets its input data, monitoring the concept rather than the data, while matching established drift detection methods on standard metrics.
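The mechanism described in the Methods above, statistical drift tests on feature attributions, lends itself to a compact illustration. The sketch below is a minimal, hypothetical rendering of the idea rather than the published FADMON code: attributions from a reference window and a live window are pooled into coarse per-region features and compared with per-feature Kolmogorov-Smirnov tests. The attribution function, the pooling scheme, and the Bonferroni aggregation are all assumptions.

```python
# Minimal sketch of attribution-based concept drift detection in the
# spirit of FADMON. The attribution function, the 8x8 pooling, and the
# Bonferroni aggregation are illustrative assumptions, not the paper's.
import numpy as np
from scipy.stats import ks_2samp

def pooled_attributions(attr_fn, images):
    """Compute per-image attributions and pool them into an 8x8 grid.

    attr_fn is assumed to map a batch of images (n, H, W) to per-pixel
    attributions of the same shape (saliency, SHAP, IG, ... all fit).
    H and W must be divisible by 8.
    """
    a = np.abs(attr_fn(images))                       # (n, H, W)
    n, h, w = a.shape
    pooled = a.reshape(n, 8, h // 8, 8, w // 8).mean(axis=(2, 4))
    return pooled.reshape(n, -1)                      # (n, 64)

def attribution_drift_pvalue(ref, new):
    """Two-sample KS test per pooled feature; return the smallest
    Bonferroni-corrected p-value. Values below 0.05 flag drift."""
    n_feat = ref.shape[1]
    pvals = [ks_2samp(ref[:, j], new[:, j]).pvalue for j in range(n_feat)]
    return min(min(pvals) * n_feat, 1.0)

# Toy usage: identity "attributions" on random images, with the live
# window shifted to simulate a change in what the model attends to.
rng = np.random.default_rng(0)
ref = pooled_attributions(lambda x: x, rng.random((200, 64, 64)))
new = pooled_attributions(lambda x: x + 0.5, rng.random((200, 64, 64)))
print(attribution_drift_pvalue(ref, new))             # ~0.0 -> drift
```

Testing the attributions rather than the raw inputs is what lets such a monitor register changes in what the model attends to, even when the input statistics remain stable.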
{"title":"Drift detection on feature attributions for monitoring visual reinforcement learning models in maritime port surveillance.","authors":"Francisco Javier Iriarte, Beatrice Azoubel, Adrián Carrizo-Pérez, Andrés Chica Linares, Luis Unzueta, Ignacio Arganda-Carreras","doi":"10.12688/openreseurope.22116.1","DOIUrl":"10.12688/openreseurope.22116.1","url":null,"abstract":"<p><strong>Background: </strong>Maritime activity is expanding globally, increasing the demand for robust port security systems capable of detecting illegal trafficking. Due to the growing sophistication of smuggling methods, law enforcement agencies require advanced surveillance and prevention technologies such as those developed in the SMAUG project. In this context, initiatives such as the SMAUG project aim to deliver integrated surveillance capabilities coordinated by a high-level deep reinforcement learning (DRL) decision-making system that operates on image-based environmental representations. Despite their effectiveness, DRL models are closed-boxes, complicating continuous model monitoring (CMM). Conventional drift detection captures shifts in input or output distributions yet often fails to explain underlying problems. Explainable AI (XAI) techniques can provide a complementary approach with insights into the agent's inner workings, enabling monitoring of the concept rather than just the data.</p><p><strong>Methods: </strong>We propose FADMON, an XAI-driven concept drift detection method for image-based models. FADMON performs statistical drift tests on feature attributions to detect deviations in learned policies. We demonstrate how FADMON can enhance CMM with a three-stage model monitoring architecture that enables semi-supervised explainable model monitoring. We validate our approach with SMAUG's decision-making DRL model on a simulated maritime port surveillance environment under multiple unforeseen scenarios.</p><p><strong>Results: </strong>FADMON consistently flags drift on all drifted scenarios with mean p-values of 0.000 with no variance trough 30 repetitions, with lower mean p-values (0.553±0.215) on non-drifted scenarios with respect to other established drift detection methodologies such as prior probability shift detection (0.65 ± 0.000), though well above the standard 0.05 threshold.</p><p><strong>Conclusions: </strong>FADMON can add an explainability layer to the monitoring system while also supporting detection of changes in the underlying interpretation of the input data by the model, monitoring the concept rather than the data, while matching established drift detection methods metrics-wise.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"6 ","pages":"2"},"PeriodicalIF":0.0,"publicationDate":"2026-01-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12859422/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146108982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
scikit-matter: A Suite of Generalisable Machine Learning Methods Born out of Chemistry and Materials Science.
Pub Date: 2025-12-26 · eCollection Date: 2023-01-01 · DOI: 10.12688/openreseurope.15789.3 · Open Research Europe 3:81
Alexander Goscinski, Christian A Jorgensen, Victor Paul Principe, Guillaume Fraux, Sergei Kliavinek, Benjamin Aaron Helfrecht, Rhushil Vasavada, Philip Loche, Michele Ceriotti, Rose Kathleen Cersonsky
Easy-to-use libraries such as scikit-learn have accelerated the adoption and application of machine learning (ML) workflows and data-driven methods. While many of the algorithms implemented in these libraries originated in specific scientific fields, they have gained popularity in part because of their generalisability across multiple domains. Over the past two decades, researchers in the chemical and materials science community have put forward general-purpose machine learning methods. Deploying these methods in workflows of other domains, however, is often burdensome because they are entangled with domain-specific functionalities. We present the Python library scikit-matter, which targets domain-agnostic implementations of methods developed in the computational chemical and materials science community, following the scikit-learn API and coding guidelines to promote usability and interoperability with existing workflows.
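Because scikit-matter follows the scikit-learn API, its estimators can drop straight into existing pipelines. The sketch below illustrates this interoperability using farthest-point sampling, one of the selection methods the library provides; the import path (skmatter), the class name (FPS), its n_to_select parameter, and the selected_idx_ attribute are assumptions based on the scikit-learn-style conventions described above, so check them against the installed version.

```python
# Illustrative sketch of scikit-matter in a scikit-learn workflow.
# Names marked "assumed" should be verified against the installed package.
import numpy as np
from sklearn.preprocessing import StandardScaler
from skmatter.sample_selection import FPS   # import path assumed

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))              # e.g. descriptors of structures

X_scaled = StandardScaler().fit_transform(X)

# Pick 50 maximally diverse samples, a common step when building training
# sets for machine-learning models of molecules and materials.
selector = FPS(n_to_select=50)              # parameter name assumed
selector.fit(X_scaled)
subset = X_scaled[selector.selected_idx_]   # attribute name assumed
print(subset.shape)                         # (50, 20)
```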
{"title":"scikit-matter : A Suite of Generalisable Machine Learning Methods Born out of Chemistry and Materials Science.","authors":"Alexander Goscinski, Christian A Jorgensen, Victor Paul Principe, Guillaume Fraux, Sergei Kliavinek, Benjamin Aaron Helfrecht, Rhushil Vasavada, Philip Loche, Michele Ceriotti, Rose Kathleen Cersonsky","doi":"10.12688/openreseurope.15789.3","DOIUrl":"10.12688/openreseurope.15789.3","url":null,"abstract":"<p><p>Easy-to-use libraries such as scikit-learn have accelerated the adoption and application of machine learning (ML) workflows and data-driven methods. While many of the algorithms implemented in these libraries originated in specific scientific fields, they have gained in popularity in part because of their generalisability across multiple domains. Over the past two decades, researchers in the chemical and materials science community have put forward general-purpose machine learning methods. The deployment of these methods into workflows of other domains, however, is often burdensome due to the entanglement with domain-specific functionalities. We present the python library scikit-matter that targets domain-agnostic implementations of methods developed in the computational chemical and materials science community, following the scikit-learn API and coding guidelines to promote usability and interoperability with existing workflows.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"3 ","pages":"81"},"PeriodicalIF":0.0,"publicationDate":"2025-12-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10792272/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145851717","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Development of a systematic coarse-grained model for poly(ε-caprolactone) in melt.
Pub Date: 2025-12-24 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.21354.2 · Open Research Europe 5:296
Petra Bačová, Gonzalo González Huarte, Vagelis Harmandaris, Sergio I Molina
Background: This study introduces a systematic coarse-graining approach to model poly(ε-caprolactone) (PCL) in its melt state. The primary goal is to provide a simple and adaptable method for creating computational models of biodegradable polymers, which can then be used to study materials with a wide range of molecular weights and compositions that are relevant to industry. This research addresses the growing need for sustainable materials across various industrial applications.
Methods: To study long polymer chains, the L-OPLS force field, an adapted version of the OPLS-AA force field, was used for atomistic simulations. The data from these simulations were first thoroughly checked against existing literature and theoretical predictions to ensure their validity. These validated atomistic configurations then became the foundation for developing the coarse-grained model.
Results: The research measured both the structural and dynamic properties of PCL at the atomistic and coarse-grained levels. The findings show that the model accurately reproduces key characteristics across these levels of resolution.
Conclusions: The methodology presented in this work aims to facilitate the development of computational studies that can help optimize the properties of PCL-based materials. By doing so, it has the potential to reduce the environmental and economic impact of developing new sustainable materials.
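As a concrete illustration of the mapping step in systematic coarse-graining, the sketch below collapses fixed-size groups of atoms into centre-of-mass beads. The one-bead-per-monomer grouping, the atom count per repeat unit, and all array shapes are illustrative assumptions, not the paper's actual PCL mapping scheme.

```python
# Minimal sketch of the coarse-graining mapping step: groups of atoms
# are replaced by beads at each group's centre of mass. The grouping
# below (one bead per monomer) is an illustrative assumption.
import numpy as np

def map_to_beads(positions, masses, atoms_per_bead):
    """positions: (n_atoms, 3); masses: (n_atoms,).
    Returns (n_beads, 3) centre-of-mass bead coordinates."""
    n_beads = len(masses) // atoms_per_bead
    n_used = n_beads * atoms_per_bead
    pos = positions[:n_used].reshape(n_beads, atoms_per_bead, 3)
    m = masses[:n_used].reshape(n_beads, atoms_per_bead, 1)
    return (pos * m).sum(axis=1) / m.sum(axis=1)

# Toy 50-monomer chain; a caprolactone repeat unit (C6H10O2) has 18 atoms,
# though the real count per bead depends on end groups and the mapping.
rng = np.random.default_rng(0)
positions = rng.random((18 * 50, 3))
masses = np.ones(18 * 50)                    # unit masses for illustration
beads = map_to_beads(positions, masses, atoms_per_bead=18)
print(beads.shape)                           # (50, 3)
```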
{"title":"Development of a systematic coarse-grained model for poly(ε-caprolactone) in melt.","authors":"Petra Bačová, Gonzalo González Huarte, Vagelis Harmandaris, Sergio I Molina","doi":"10.12688/openreseurope.21354.2","DOIUrl":"10.12688/openreseurope.21354.2","url":null,"abstract":"<p><strong>Background: </strong>This study introduces a systematic coarse-graining approach to model poly(ε-caprolactone) (PCL) in its melt state. The primary goal is to provide a simple and adaptable method for creating computational models of biodegradable polymers, which can then be used to study materials with a wide range of molecular weights and compositions that are relevant to industry. This research addresses the growing need for sustainable materials across various industrial applications.</p><p><strong>Methods: </strong>To study long polymer chains, the L-OPLS force field, an adapted version of the OPLS-AA force field, was used for atomistic simulations. The data from these simulations were first thoroughly checked against existing literature and theoretical predictions to ensure their validity. These validated atomistic configurations then became the foundation for developing the coarse-grained model.</p><p><strong>Results: </strong>The research meticulously measured both the structural and dynamic properties of the PCL at the atomistic and coarse-grained levels. The findings show that the model is successful at accurately reproducing key characteristics across these different levels of resolution.</p><p><strong>Conclusions: </strong>The methodology presented in this work aims to facilitate the development of computational studies that can help optimize the properties of PCL-based materials. By doing so, it has the potential to reduce the environmental and economic impact of developing new sustainable materials.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"296"},"PeriodicalIF":0.0,"publicationDate":"2025-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12521900/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145310316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Quaternary aminostratigraphies for the eastern North European Plain.
Pub Date: 2025-12-22 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.21815.1 · Open Research Europe 5:396
Ellie Nelson, Dustin White, Lucy Wheeler, Stefan Meng, Marcin Szymanek, Jaqueline Strahl, Michael Hein, Witold P Alexandrowicz, Brigitte Urban, Samantha Greeves, Mareike Stahlschmidt, Ralf-Dietrich Kahlke, Tobias Lauer, David Colin Tanner, Kirsty E H Penkman
The eastern North European Plain is an important area for studying Quaternary climate change and archaeology; however, providing chronological constraints for deposits can be challenging. Amino acid geochronology (AAG) is a relative dating technique that has been useful in correlating isolated Quaternary deposits. The intra-crystalline protein decomposition (IcPD) approach to AAG using the opercula of Bithynia snails has previously been used to provide relative dating frameworks across northern and central Europe in areas where the integrated diagenetic temperature can be assumed to be similar. Here, the first aminostratigraphies for the eastern North European Plain are presented, incorporating deposits from at least the last ~1 Ma, which are used to assess the current age attributions to Middle and Late Pleistocene interglacials. These aminostratigraphies are then used to explore expected differences in the extent of IcPD due to differing temperature histories across the study area. Correlations of opercula to regional pollen assemblages representative of the Holsteinian, Eemian and Holocene are used to evaluate the temporal resolution achievable by IcPD within a given interglacial. This work has produced four new aminostratigraphies that can now be used as reference datasets for relative age estimation for the late Middle Pleistocene to the Holocene in the eastern North European Plain.
{"title":"Quaternary aminostratigraphies for the eastern North European Plain.","authors":"Ellie Nelson, Dustin White, Lucy Wheeler, Stefan Meng, Marcin Szymanek, Jaqueline Strahl, Michael Hein, Witold P Alexandrowicz, Brigitte Urban, Samantha Greeves, Mareike Stahlschmidt, Ralf-Dietrich Kahlke, Tobias Lauer, David Colin Tanner, Kirsty E H Penkman","doi":"10.12688/openreseurope.21815.1","DOIUrl":"10.12688/openreseurope.21815.1","url":null,"abstract":"<p><p>The eastern North European Plain is an important area for studying Quaternary climate change and archaeology; however, providing chronological constraints for deposits can be challenging. Amino acid geochronology (AAG) is a relative dating technique that has been useful in correlating isolated Quaternary deposits. The intra-crystalline protein decomposition (IcPD) approach to AAG using the opercula of <i>Bithynia</i> snails has previously been used to provide relative dating frameworks across northern and central Europe in areas where the integrated diagenetic temperature can be assumed to be similar. Here, the first aminostratigraphies for the eastern North European Plain are presented, incorporating deposits from at least the last ~1 Ma, which are used to assess the current age attributions to Middle and Late Pleistocene interglacials. These aminostratigraphies are then used to explore expected differences in the extent of IcPD due to differing temperature histories across the study area. Correlations of opercula to regional pollen assemblages representative of the Holsteinian, Eemian and Holocene are used to evaluate the temporal resolution achievable by IcPD within a given interglacial. This work has produced four new aminostratigraphies that can now be used as reference datasets for relative age estimation for the late Middle Pleistocene to the Holocene in the eastern North European Plain.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"396"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12811728/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145999781","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ERGA-BGE reference genome of Hirudo verbana, a once neglected freshwater haematophagous European medicinal leech.
Pub Date: 2025-12-22 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.21672.1 · Open Research Europe 5:395
Alejandro Manzano-Marín, Astrid Böhne, Rita Monteiro, Thomas Marcussen, Torsten H Struck, Rebekah A Oomen, Caroline Howard, Kerstin Howe, Mark Blaxter, Shane McCarthy, Jonathan M D Wood, Fergal Martin, Anna Lazar, Leanne Haggerty, Chiara Bortoluzzi
Hirudo verbana Carena, 1820, commonly known as the southern medicinal leech, is one of several European medicinal leeches whose full diversity has only recently started to be uncovered. Historically, it was widely used as a medicinal leech, and for centuries it was treated erroneously under the specific name Hirudo medicinalis L. 1758. Recent molecular and taxonomic analyses have revealed subspecific diversity within the morphospecies H. verbana. Hirudo verbana is a blood-feeding species that feeds on amphibians, fish, and mammals. It occupies freshwater habitats, typically shallow ponds and lakes. Studies show that this leech species has a "naturally limited microbiome", suggesting it may serve as a powerful model system for the study of gut microbiota. We expect this chromosome-level assembly of H. verbana to serve as a high-quality genomic resource for this most famous leech genus and as a foundation for the study of the diversification and biodiversity of European medicinal leeches, as well as their gut-associated symbionts. The genome of H. verbana was assembled into two haplotypes through a phased assembly approach; however, only the primary haplotype was designated as the reference genome for annotation and downstream analyses. The entirety of the primary haplotype was assembled into 14 contiguous chromosomal pseudomolecules, including the mitogenome. This chromosome-level assembly encompasses 0.18 Gb, composed of 277 contigs and 27 scaffolds, with contig and scaffold N50 values of 1.3 Mb and 13.4 Mb, respectively.
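For context on the assembly statistics quoted above: the N50 is the length L such that contigs (or scaffolds) of length at least L together cover at least half of the total assembly. A minimal computation, with toy lengths rather than the actual H. verbana contig sizes:

```python
# N50: the largest length L such that sequences of length >= L account
# for at least half of the total assembly size.
def n50(lengths):
    total = sum(lengths)
    running = 0
    for length in sorted(lengths, reverse=True):
        running += length
        if 2 * running >= total:
            return length

# Toy contig lengths (in bp), not the real assembly's values:
print(n50([1_300_000, 900_000, 400_000, 150_000, 50_000]))  # 900000
```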
{"title":"ERGA-BGE reference genome of <i>Hirudo verbana,</i> a once neglected freshwater haematophagous European medicinal leech.","authors":"Alejandro Manzano-Marín, Astrid Böhne, Rita Monteiro, Thomas Marcussen, Torsten H Struck, Rebekah A Oomen, Caroline Howard, Kerstin Howe, Mark Blaxter, Shane McCarthy, Jonathan M D Wood, Fergal Martin, Anna Lazar, Leanne Haggerty, Chiara Bortoluzzi","doi":"10.12688/openreseurope.21672.1","DOIUrl":"10.12688/openreseurope.21672.1","url":null,"abstract":"<p><p><i>Hirudo verbana</i> Carena, 1820, commonly known as the southern medicinal leech, is one of several European medicinal leeches, whose full diversity has just recently started to be uncovered. Historically, it has been widely used as a medicinal leech and for centuries it was treated erroneously under the specific name of <i>Hirudo medicinalis</i> L. 1758. Recent molecular and taxonomic analyses have revealed subspecific diversity within the morphospecies <i>H. verbana</i>. <i>Hirudo verbana</i> is a blood-feeding species sucking blood from amphibians, fish, and mammals. It occupies freshwater habitats, typically shallow ponds and lakes. Studies show that this leech species has a \"naturally limited microbiome\", suggesting it may serve as a powerful model system for the study of gut microbiota. We expect this chromosome-level assembly of <i>H. verbana</i> to serve as a high-quality genomic resource for this most famous leech genus and to serve as a foundation to the study of the diversification and biodiversity of European medicinal leeches, as well as their gut-associated symbionts. The genome of <i>H. verbana</i> was assembled into two haplotypes through a phased assembly approach; however, only the primary haplotype was designated as the reference genome for annotation and downstream analyses. The entirety of the primary haplotype was assembled into 14 contiguous chromosomal pseudomolecules, including the mitogenome. This chromosome-level assembly encompasses 0.18 Gb, composed of 277 contigs and 27 scaffolds, with contig and scaffold N50 values of 1.3 Mb and 13.4 Mb, respectively.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"395"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12820476/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146031989","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Less is more. Exploring opportunities and challenges of digital crowdsourcing for political parties.
Pub Date: 2025-12-22 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.22121.1 · Open Research Europe 5:397
Francesco Nasi
Background: Political parties across liberal democracies face a persistent crisis of legitimacy, representation, and membership. In response, scholars and practitioners have proposed a range of deliberative reforms aimed at making parties more internally democratic. Yet such innovations have proven difficult to implement due to structural features inherent to political parties, including hierarchical organization and electoral imperatives. Similarly, digital platforms promised to revolutionize internal democracy but have largely disappointed expectations. This impasse highlights the need for lighter forms of democratic engagement that may better align with the operational realities of parties. Among these alternatives, digital crowdsourcing emerges as a possible path forward. Digital crowdsourcing refers to processes in which organizations use technology to tap into people's distributed knowledge, combining bottom-up input with top-down coordination to solve problems, carry out tasks, or generate ideas.
Methods: This theoretical paper develops an analytical framework tailored to the organizational and democratic specificities of political parties. I propose a typology of digital crowdsourcing for parties based on two dimensions (policy impact and power structure) yielding four forms: vertical, performative, expressive, and democratic crowdsourcing.
Results: Using this typology, I identify three core opportunities associated with the adoption of these tools: enhanced democratic participation, increased flexibility, and improved accessibility for members and supporters. Conversely, I outline three central challenges: tensions between inclusion and exclusion, risks of elite capture, and conflicts between competing sources of democratic legitimacy. Finally, I present a set of strategies for making democratic crowdsourcing feasible in political parties.
Conclusion: Integrating digital democratic innovations into political parties (especially long-established ones) remains particularly challenging. However, lighter forms of participation, such as digital crowdsourcing, may be more feasible to implement.
{"title":"Less is more. Exploring opportunities and challenges of digital crowdsourcing for political parties.","authors":"Francseco Nasi","doi":"10.12688/openreseurope.22121.1","DOIUrl":"10.12688/openreseurope.22121.1","url":null,"abstract":"<p><strong>Background: </strong>Political parties across liberal democracies face a persistent crisis of legitimacy, representation, and membership. In response, scholars and practitioners proposed a range of deliberative reforms aimed at making parties more internally democratic. Yet such innovations have proven difficult to implement due to structural features inherent to political parties, including hierarchical organization and electoral imperatives. Similarly, digital platforms promised to revolutionize internal democracy but largely disappointed expectations. This impasse highlights the need for lighter forms of democratic engagement that may better align with the operational realities of parties. Among these alternatives, digital crowdsourcing emerges as a possible path forward. Digital crowdsourcing refers to processes in which organizations use technology to tap into people's distributed knowledge, combining bottom-up input with top-down coordination to solve problems, carry out tasks, or generate ideas.</p><p><strong>Methods: </strong>This theoretical paper develops an analytical framework tailored to the organizational and democratic specificities of political parties. I propose a typology of digital crowdsourcing for parties based on two dimensions (policy impact and power structure) yielding four forms: vertical, performative, expressive, and democratic crowdsourcing.</p><p><strong>Results: </strong>Thanks to this typology, I identify three core opportunities associated with the adoption of these tools: enhanced democratic participation, increased flexibility, and improved accessibility for members and supporters. Conversely, I outline three central challenges: tensions between inclusion and exclusion, risks of elite capture, and conflicts between competing sources of democratic legitimacy. Finally, I present a set of strategies for achieving a feasible democratic crowdsourcing in political parties.</p><p><strong>Conclusion: </strong>Integrating digital democratic innovations into political parties (especially long-established ones) remains particularly challenging. However, lighter forms of participation, such as digital crowdsourcing, may be more feasible to implement.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"397"},"PeriodicalIF":0.0,"publicationDate":"2025-12-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12873537/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146144792","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
How credible is REACH regulation without transparency, quality criteria, assurance, and control?
Pub Date: 2025-12-21 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.19510.3 · Open Research Europe 5:100
Antti Joonas Koivisto, Michael Jayjock
Background: The European Chemicals Agency (ECHA) was established to act as an independent body in implementing the Regulation on the registration, evaluation, authorisation and restriction of chemicals (REACH) (Regulation (EC) 1907/2006). Quantitative exposure estimates, based on exposure measurements or exposure models, are required for all exposure scenarios in which hazardous emissions occur. The REACH regulation specifies that exposure models must be appropriate and quantitative. Here, we evaluated ECHA's criteria for regulatory exposure models.
Methods: The evaluation was performed by asking ECHA to state its criteria for exposure models.
Results: ECHA does not specify any quality criteria for regulatory exposure models, nor does it impose transparency requirements. Without quality criteria and transparency, there can be no quality assurance or control; thus, an appropriate model cannot be defined. ECHA also does not define the term quantitative, even though the fundamental requirement for a quantitative exposure assessment is a quantitative uncertainty assessment.
Conclusions: As a result of these shortcomings, the ECHA R.14 Guidance for occupational exposure assessment allows the use of non-physical models containing qualitative parameters based on inaccessible calibration databases and statistical evaluations. Because of the lack of transparency, the non-physical model construct, and subjective input parameters, model results cannot be associated with real-world operational conditions, and quantitative uncertainty assessment is not feasible. This makes the models qualitative by definition and therefore unsuitable for regulatory exposure modelling. It raises the question of whether ECHA has followed its regulatory mandate in implementing the REACH legislation.
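To make concrete what a quantitative uncertainty assessment looks like for a physical exposure model, here is a minimal Monte Carlo sketch for the standard well-mixed-room model C = G/Q at steady state; the lognormal parameter distributions are hypothetical and purely illustrative.

```python
# Minimal sketch: Monte Carlo uncertainty assessment for the well-mixed
# room model C = G / Q, with G the contaminant generation rate (mg/min)
# and Q the room ventilation rate (m^3/min). Distributions are made up.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
G = rng.lognormal(mean=np.log(10.0), sigma=0.5, size=n)   # mg/min
Q = rng.lognormal(mean=np.log(2.0), sigma=0.3, size=n)    # m^3/min

C = G / Q                                                  # mg/m^3

lo, med, hi = np.percentile(C, [2.5, 50, 97.5])
print(f"median {med:.2f} mg/m^3, 95% interval [{lo:.2f}, {hi:.2f}]")
```

Reporting an interval rather than a single point estimate is the kind of quantitative uncertainty assessment that, as the abstract argues, transparent physical models make possible.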
{"title":"How credible is REACH regulation without transparency, quality criteria, assurance, and control?","authors":"Antti Joonas Koivisto, Michael Jayjock","doi":"10.12688/openreseurope.19510.3","DOIUrl":"10.12688/openreseurope.19510.3","url":null,"abstract":"<p><strong>Background: </strong>The European Chemicals Agency (ECHA) has been established to act as an independent body in the context of the implementation of the Regulation on the registration, evaluation, authorisation and restriction of chemicals (REACH) (Regulation (EC) 1907/2006). Quantitative exposure estimates are required for all exposure scenarios where hazardous emissions occur using exposure measurements or exposure models. REACH regulation specifies that exposure models need to be <i>appropriate</i> and <i>quantitative</i>. Here, we evaluated the criteria for regulatory exposure models by ECHA.</p><p><strong>Methods: </strong>The evaluation was performed by asking ECHA the criteria for exposure models.</p><p><strong>Results: </strong>ECHA does not specify any quality criteria for regulatory exposure models or have transparency requirements. Without quality criteria and transparency, there cannot be quality assurance or control. Thus, an <i>appropriate</i> model cannot be defined. ECHA does not recognize the <i>quantitative</i> term even though the fundamental requirement for quantitative exposure assessment is quantitative uncertainty assessment.</p><p><strong>Conclusions: </strong>As a result of these shortcomings, ECHA R.14 Guidance for occupational exposure assessment allows the use of non-physical models containing qualitative parameters based on non-accessible calibration databases and statistical evaluations. Because of the lack of transparency, non-physical model construct, and subjective input parameters, model results cannot be associated with real-world operational conditions, and quantitative uncertainty assessment is not feasible. This makes the models qualitative by definition and is not applicable to regulatory exposure modelling. This raises questions about whether ECHA has followed its regulatory mandates in implementing the REACH legislation.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"100"},"PeriodicalIF":0.0,"publicationDate":"2025-12-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12790599/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145960899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The internal democracy of the crisis parties in Western Europe: a quantitative analysis of the role of digitalization, ideology and populism.
Pub Date: 2025-12-16 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.21115.2 · Open Research Europe 5:288
Jorge Bronet, Rosa Borge
Background: Previous studies on the internal democracy of new digital parties in Western Europe suggest a plebiscitary tendency, but most focus on a limited number of cases. This paper aims to empirically analyze the intra-party democracy of electorally successful new parties in Western Europe and identify the main factors that may influence it.
Methods: Drawing on data from the second round of the Political Parties Database (PPDB) and the first wave of the Populism and Political Parties Expert Survey (POPPA), this study covers more than 100 parties across 13 countries. Adopting a generational approach, we define a cohort of "crisis parties" (founded between the economic crisis and the pandemic) and examine their internal democracy in comparison with older parties, using Von dem Berge and Poguntke's IPD model and Böhmelt et al.'s (2022) framework, with ideology, digitalization, and populism treated as explanatory variables.
Results: Our findings show that being a crisis party, even a highly digitalized one on the left, does not entail more plebiscitary forms of intra-party democracy.
Conclusions: Digitalization emerges as the most consistent predictor shaping intra-party democracy, while the cohort effect matters only insofar as crisis parties are more populist than older parties, which ultimately reduces their internal democracy.
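The design described in the Methods, an intra-party democracy score explained by cohort membership, digitalization, populism, and ideology, can be illustrated with a minimal regression sketch. All data and variable names below are hypothetical stand-ins, not PPDB or POPPA data.

```python
# Illustrative sketch of the regression design: an intra-party democracy
# (IPD) score regressed on cohort membership and the explanatory
# variables named in the abstract. All values are synthetic stand-ins.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 120
df = pd.DataFrame({
    "ipd_score": rng.normal(0, 1, n),        # hypothetical IPD index
    "crisis_party": rng.integers(0, 2, n),   # founded between crisis and pandemic?
    "digitalization": rng.normal(0, 1, n),
    "populism": rng.normal(0, 1, n),
    "left_right": rng.normal(0, 1, n),       # ideology placement
})

model = smf.ols(
    "ipd_score ~ crisis_party + digitalization + populism + left_right",
    data=df,
).fit()
print(model.summary())
```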
{"title":"The internal democracy of the crisis parties in Western Europe: a quantitative analysis of the role of digitalization, ideology and populism.","authors":"Jorge Bronet, Rosa Borge","doi":"10.12688/openreseurope.21115.2","DOIUrl":"10.12688/openreseurope.21115.2","url":null,"abstract":"<p><strong>Background: </strong>Previous studies on the internal democracy of new digital parties in Western Europe suggest a plebiscitary tendency, but most focus on a limited number of cases. This paper aims to empirically analyze the intra-party democracy of electorally successful new parties in Western Europe and identify the main factors that may influence it.</p><p><strong>Methods: </strong>Drawing on data from the second round of the Political Parties Database (PPDB) and the first wave of the Populism and Political Parties Expert Survey (POPPA), this study covers more than 100 parties across 13 countries. Adopting a generational approach, we define a cohort of \"crisis parties\"-founded between the economic crisis and the pandemic-and examine their internal democracy in comparison to older parties, using Von dem Berge and Poguntke's IPD model and Böhmelt <i>et al.</i>,'s (2022) framework, with ideology, digitalization, and populism treated as explanatory variables.</p><p><strong>Results: </strong>Our findings show that being a crisis party-even a highly digitalized one on the left-does not entail more plebiscitary forms of intra-party democracy.</p><p><strong>Conclusions: </strong>Digitalization emerges as the most consistent predictor shaping intra-party democracy, while the cohort effect matters only insofar as crisis parties are more populist than older parties, which ultimately reduces their internal democracy.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"288"},"PeriodicalIF":0.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12576320/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145433270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
On solving coordinate problems in climate model output and other geospatial datasets.
Pub Date: 2025-12-16 · eCollection Date: 2025-01-01 · DOI: 10.12688/openreseurope.20467.2 · Open Research Europe 5:269
Clément Cherblanc, Jeppe Peder Grejs Petersen, Fredrick Bunt, José Abraham Torres-Alavez, Ruth Mottram
The output from Regional Climate Models (RCMs) can be difficult for non-specialists to handle, especially when metadata describing coordinate systems is incomplete or absent. Standard geospatial analysis tools expect coordinate reference systems to be encoded in file metadata. In addition to differing metadata conventions, RCMs run over limited domains in the Arctic and Antarctic frequently use rotated longitude-latitude grids, which add complexity compared to geographic datasets. In this article, we describe two post-processing methods that make RCM outputs easier to use for applications in the climate and related sciences. We demonstrate two approaches that allow output from RCMs to be 1) read on the correct grid without interpolating or reprojecting the dataset, or 2) resampled onto a regular grid that includes geographic coordinates. These approaches use the widely available, free software tools Python and Climate Data Operators (CDO). The transformations make outputs simple to use in Geographic Information Systems (GIS) and allow the full use of Python libraries, such as xarray, for plotting and analysis.
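As a sketch of the first approach (keeping the data on its native rotated grid and merely telling the plotting library what that grid is), assume a CF-style file whose grid-mapping variable is named rotated_pole and carries the standard grid_north_pole_longitude/latitude attributes; the file and variable names here are hypothetical.

```python
# Sketch of approach 1: plot RCM output on its native rotated grid by
# declaring the coordinate system instead of reprojecting the data.
# File name, variable names, and pole attributes are illustrative.
import xarray as xr
import cartopy.crs as ccrs
import matplotlib.pyplot as plt

ds = xr.open_dataset("rcm_output.nc")             # hypothetical file
gm = ds["rotated_pole"]                           # CF grid-mapping variable
native = ccrs.RotatedPole(
    pole_longitude=float(gm.grid_north_pole_longitude),
    pole_latitude=float(gm.grid_north_pole_latitude),
)

ax = plt.axes(projection=ccrs.PlateCarree())      # geographic map axes
ds["tas"].isel(time=0).plot(ax=ax, transform=native)  # data stays native
ax.coastlines()
plt.savefig("tas_native_grid.png")
```

The second approach can be a CDO one-liner, e.g. cdo remapbil,global_0.5 in.nc out.nc, which bilinearly resamples the field onto a regular 0.5-degree longitude-latitude grid with ordinary geographic coordinates.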
{"title":"On solving coordinate problems in climate model output and other geospatial datasets.","authors":"Clément Cherblanc, Jeppe Peder Grejs Petersen, Fredrick Bunt, José Abraham Torres-Alavez, Ruth Mottram","doi":"10.12688/openreseurope.20467.2","DOIUrl":"10.12688/openreseurope.20467.2","url":null,"abstract":"<p><p>The output from Regional Climate Models (RCMs) can be difficult for non-specialists to handle, especially in cases where metadata describing coordinate systems is incomplete or absent. Standard geospatial analysis tools expect coordinate reference systems to be encoded inside file metadata. In addition to different metadata conventions, RCMs that are run over limited domains in the Arctic and Antarctic frequently have rotated longitude and latitude grids that add additional complexity compared to geographic datasets. In this article, we describe two post-processing methods that make RCM outputs easier to use for applications in the climate and related sciences. We demonstrate two different approaches that allow output from RCMs to be 1) read on the correct grid without interpolating or reprojecting the dataset, or 2) resampled onto a regular grid that includes geographic coordinates. These two approaches use the widely available and free software tools Python and Climate Data Operators (CDO). These transformations make outputs simple to use in Geographic Information Systems (GIS) and allow the full use of Python libraries, such as xarray, for plotting and analysis.</p>","PeriodicalId":74359,"journal":{"name":"Open research Europe","volume":"5 ","pages":"269"},"PeriodicalIF":0.0,"publicationDate":"2025-12-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12770884/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145919416","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}