The growing integration of AI into educational and professional settings raises urgent questions about how human creativity evolves when intelligent systems guide, constrain, or accelerate the design process. Generative AI offers structured suggestions and rapid access to ideas, but its role in fostering genuine innovation remains contested. This paper investigates the dynamics of human-AI collaboration in challenge-based design experiments, applying established creativity metrics (fluency, flexibility, originality, and elaboration) to evaluate outcomes and implications in an engineering education context. Through an exploratory quasi-experimental study, AI-assisted and human-only teams were compared across four dimensions of creative performance: quantity, variety, uniqueness, and quality of design solutions. Findings point to a layered outcome: although AI accelerated idea generation, it also encouraged premature convergence, narrowed exploration, and compromised functional refinement. Human-only teams engaged in more iterative experimentation and produced designs of higher functional quality and greater ideational diversity. Participants' self-perceptions of creativity remained stable across both conditions, highlighting the risk of cognitive offloading, where reliance on AI may reduce genuine creative engagement while masking deficits through inflated confidence. Importantly, cognitive offloading is not directly measured in this study; rather, it is introduced here as a theoretically grounded interpretive explanation that helps contextualize the observed disconnect between performance outcomes and self-perceived creativity. These results reveal both opportunities and risks. On the one hand, AI can support ideation and broaden access to concepts; on the other, overreliance risks weakening iterative learning and the development of durable creative capacities. The ethical implications are significant, raising questions about accountability and educational integrity when outcomes emerge from human-AI co-creation. The study argues for process-aware and ethically grounded frameworks that balance augmentation with human agency, supporting exploration without eroding the foundations of creative problem-solving. The study consolidates empirical findings with conceptual analysis, advancing the discussion on when and how AI should guide the creative process and providing insights for the broader debate on the future of human-AI collaboration.
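As a toy illustration of the four creativity metrics named in the abstract above, the following Python sketch scores a hypothetical team's idea list; the idea texts, category labels, the invented rarity table standing in for a reference pool, and the word-count proxy for elaboration are illustrative assumptions, not the study's scoring rubric.

```python
# Toy scoring of the four classical creativity metrics over an idea list.
from collections import Counter

ideas = [
    ("foldable solar charger", "energy"),
    ("kinetic floor tiles that harvest footsteps", "energy"),
    ("modular desk organizer", "ergonomics"),
    ("app that schedules focused work blocks", "software"),
]
# Hypothetical frequency of each category in a larger reference pool of ideas
corpus_frequency = Counter({"energy": 40, "ergonomics": 25, "software": 60})

fluency = len(ideas)                              # quantity of ideas produced
flexibility = len({cat for _, cat in ideas})      # number of distinct categories used
originality = sum(1 / corpus_frequency[cat] for _, cat in ideas) / len(ideas)  # rarity-weighted
elaboration = sum(len(text.split()) for text, _ in ideas) / len(ideas)         # detail proxy

print(f"fluency={fluency}, flexibility={flexibility}, "
      f"originality={originality:.3f}, elaboration={elaboration:.1f}")
```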
Rationale and objectives: To address the challenges in detecting anterior cruciate ligament (ACL) lesions in knee MRI examinations, including difficulties in identifying tiny lesions, insufficient extraction of low-contrast features, and poor modeling of irregular lesion morphologies, and to provide a precise and efficient auxiliary diagnostic tool for clinical practice.
Materials and methods: An enhanced framework based on YOLOv10 is constructed. The backbone network is optimized using the C2f-SimAM module to enhance multi-scale feature extraction and spatial attention; an Adaptive Spatial Fusion (ASF) module is introduced in the neck to better fuse multi-scale spatial features; and a novel hybrid loss function combining Focal-EIoU and KPT Loss is employed. To ensure rigorous statistical evaluation, we utilized a five-fold cross-validation strategy on a dataset of 917 cases.
Results: Evaluation on the KneeMRI dataset demonstrates that the proposed model achieves statistically significant improvements over standard YOLOv10, Faster R-CNN, and Transformer-based detectors (RT-DETR). Specifically, mAP@0.5 is increased by 1.3% (p < 0.05) compared to the standard YOLOv10, and mAP@0.5:0.95 is improved by 2.5%. Qualitative analysis further confirms the model's ability to reduce false negatives in small, low-contrast tears.
Conclusion: This framework effectively bridges the gap between general object detection models and the specific requirements of medical imaging, providing a precise and efficient solution for diagnosing ACL injuries in routine clinical workflows.
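For readers unfamiliar with the attention mechanism underlying the C2f-SimAM module mentioned in the Methods above, the following PyTorch sketch shows the standard parameter-free SimAM formulation; it is a minimal reference implementation of published SimAM under a commonly used lambda default, not the authors' C2f-SimAM block.

```python
import torch
import torch.nn as nn

class SimAM(nn.Module):
    """Parameter-free SimAM attention: each activation is re-weighted by an
    energy-based importance score (sketch of the standard formulation)."""
    def __init__(self, e_lambda: float = 1e-4):
        super().__init__()
        self.e_lambda = e_lambda
        self.act = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        b, c, h, w = x.shape
        n = h * w - 1
        # squared deviation of each activation from its per-channel mean
        d = (x - x.mean(dim=[2, 3], keepdim=True)).pow(2)
        # per-channel variance estimate
        v = d.sum(dim=[2, 3], keepdim=True) / n
        # inverse energy: more distinctive neurons receive larger weights
        e_inv = d / (4 * (v + self.e_lambda)) + 0.5
        return x * self.act(e_inv)

if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)
    print(SimAM()(feat).shape)  # torch.Size([2, 64, 32, 32])
```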
Introduction: The business model of multi-sided digital labor platforms relies on maintaining a balance between workers and customers or clients to sustain operations. These platforms initially leveraged venture capital to attract workers by providing them with incentives and the promise of flexibility, creating lock-in effects to consolidate their market power and enable monopolistic practices. As platforms mature, they increasingly implement algorithmic management and control mechanisms, such as rating systems, which restrict worker autonomy, access to work and flexibility. Despite limited bargaining power, workers have developed both individual and collective strategies to counteract these algorithmic restrictions.
Methods: This article employs a structured synthesis, drawing on existing academic literature as well as surveys conducted by the International Labour Office (ILO) between 2017 and 2023, to examine how platform workers utilize a combination of informal and formal forms of resistance to build resilience against algorithmic disruptions.
Results: The analysis covers different sectors (freelance and microtask work, taxi and delivery services, and domestic work and beauty care platforms), offering insights into the changing dynamics of worker agency and the ways workers build resilience on digital labour platforms. Facing significant barriers to formal acts of resistance, workers on digital labour platforms often turn to informal acts of resistance, frequently mediated by social media, to adapt to changes in the platforms' algorithms and maintain their well-being.
Discussion: Platform workers increasingly have a diverse array of tools to exercise their agency, both physically and virtually. However, the process of building resilience in such conditions is often not straightforward. As platforms counteract workers' acts of resistance, workers must continue to develop new and innovative strategies to strengthen their resilience. Such a complex and nuanced landscape merits continued research and analysis.
Introduction: Cardiovascular diseases (CVDs) are a major cause of morbidity and mortality across all global regions, creating a pressing need for early detection and effective management approaches. Traditional cardiovascular monitoring systems often lack real-time analysis and individualized insight, which leads to delayed interventions. Moreover, data privacy and security remain among the greatest challenges in digital healthcare applications.
Methods: This study presents a CVD detection framework that combines Internet of Things (IoT)-based wearable devices, electronic clinical records, and blockchain-based access control. The system begins by registering patients and medical personnel and then collects physiological and clinical data. Kalman filtering improves data reliability during the pre-processing stage. Shallow and deep feature extraction methods are used to capture complex data patterns, and a Refracted Sand Cat Swarm Optimization (SCSO) algorithm is applied for feature selection. A new TriBoostCardio Ensemble model (CatBoost, AdaBoost, and LogitBoost) performs the classification task to enhance predictive accuracy. Smart contracts provide secure and transparent access to health information.
Results: Experimental results show that the proposed framework achieves high predictive accuracy and detects cardiovascular diseases earlier than traditional approaches. The combination of SCSO feature selection and the TriBoostCardio Ensemble model improves the robustness of the model and the precision of classification.
Discussion: In addition to improving the accuracy and timeliness of CVD detection, the presented framework also addresses important problems related to data privacy and integrity through blockchain-based access control. By combining intelligent feature optimization, ensemble learning, and secure data management, it offers a stable and trustworthy solution for current healthcare systems.
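As a rough sketch of the ensemble idea behind the TriBoostCardio model described in the Methods above, the scikit-learn snippet below soft-votes three boosting classifiers; CatBoost and LogitBoost are approximated here by histogram and log-loss gradient boosting stand-ins, and the SCSO feature selection, IoT data pipeline, and blockchain access control are out of scope for this sketch.

```python
# Soft-voting ensemble of three boosting classifiers on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, GradientBoostingClassifier,
                              HistGradientBoostingClassifier, VotingClassifier)
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("ada", AdaBoostClassifier(random_state=0)),                # AdaBoost member
        ("gb", GradientBoostingClassifier(random_state=0)),         # LogitBoost-style stand-in
        ("hgb", HistGradientBoostingClassifier(random_state=0)),    # CatBoost-style stand-in
    ],
    voting="soft",  # average predicted probabilities across the three boosters
)
ensemble.fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, ensemble.predict(X_te)))
```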
Background: Accurate drug dosing in pediatrics is complicated by age-related physiological variability. Standard weight-based dosing may result in either subtherapeutic exposure or toxicity. Machine learning (ML) models can capture complex relationships among clinical variables and support individualized therapy.
Methods: We analyzed clinical and pharmacokinetic data from 20 pediatric patients enrolled in the PUERI study (January 2020-November 2021, ASST Grande Ospedale Metropolitano Niguarda, Milan, Italy). Eight ML models, namely linear regression (LR), ridge regression (RR), lasso regression (LaR), Huber regression (HR), random forest (RF), XGBoost, LightGBM, and a neural network (MLP), were trained to predict ceftaroline doses that would achieve plasma concentrations close to the therapeutic target of 10 mg/L. Model performance was evaluated using mean absolute error (MAE), root mean squared error (RMSE), and the coefficient of determination (R2). To ensure interpretability, we applied local interpretable model-agnostic explanations (LIME) to identify the most influential predictors of dose.
Results: MLP (MAE 1.53 mg, R2 0.94) and XGBoost (MAE 2.04 mg, R2 0.89) outperformed linear models. Predicted doses were more frequently aligned with therapeutic concentrations than those clinically administered. Model-based simulated concentrations fell within the therapeutic range in approximately 85% of cases, and the best ML models showed over 90% patient-level clinical alignment. RF, LightGBM, and XGBoost achieved the highest clinical alignment, with 94.2%, 92.4%, and 91.5% of patients reaching therapeutic levels, respectively. Renal function markers, such as serum creatinine and azotemia, together with anthropometric parameters including weight, height, and body mass index, were consistently the most influential features.
Conclusion: Advanced ML models can optimize ceftaroline dosing in pediatric patients and outperform traditional dosing strategies. Combining predictive accuracy with interpretability (via LIME) supports clinical trust and may enhance precision antibiotic therapy while reducing the risks of antimicrobial resistance and toxicity.
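The evaluation loop described in the Methods and Results above can be sketched as follows; the synthetic patient features, the single gradient-boosted regressor standing in for the eight models, and the LIME call are illustrative assumptions rather than the PUERI analysis.

```python
# Train a dose regressor on synthetic features, score with MAE/RMSE/R2,
# and explain one prediction locally with LIME.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

rng = np.random.default_rng(0)
feature_names = ["weight_kg", "height_cm", "bmi", "creatinine", "azotemia"]
X = rng.normal(size=(200, len(feature_names)))
dose = 20 + 3 * X[:, 0] + 1.5 * X[:, 3] + rng.normal(scale=1.0, size=200)  # synthetic dose (mg)

X_tr, X_te, y_tr, y_te = train_test_split(X, dose, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_tr, y_tr)
pred = model.predict(X_te)

print("MAE :", mean_absolute_error(y_te, pred))
print("RMSE:", mean_squared_error(y_te, pred) ** 0.5)
print("R2  :", r2_score(y_te, pred))

# Local explanation of a single predicted dose
explainer = LimeTabularExplainer(X_tr, feature_names=feature_names, mode="regression")
explanation = explainer.explain_instance(X_te[0], model.predict, num_features=3)
print(explanation.as_list())  # top local contributors to this prediction
```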
Introduction: Technostress is an essential factor in predicting middle school teachers' willingness to adopt artificial intelligence (AI) in future educational practices and their actual use of such technologies. This study examines technostress among middle school teachers in the context of AI integration and explores how personal competence (including digital awareness, digital technology knowledge and skills, and digital application competence), role conflict, organizational support, and technological features influence technostress.
Methods: The Technology Acceptance Model (TAM) is employed as the theoretical underpinning for the present research. Using survey data from 301 middle school teachers, a path model was constructed to analyze these relationships.
Results: The results indicate that the overall level of technostress is relatively low; however, different teacher groups experience distinct sources of stress. Specifically, appropriate technological features and strong digital awareness effectively alleviate technostress, while role conflict intensifies it. Furthermore, these factors play a significant mediating role between organizational support and technostress.
Discussion: Based on these findings, the study proposes several strategies to mitigate technostress among middle school teachers. First, a tiered and category-based approach should be adopted to provide targeted support according to teachers' actual needs. Second, it is important to balance the relationship between technological supply and educational demand to ensure sustainable implementation. Third, showcasing typical successful cases can help enhance teachers' digital awareness and confidence in using AI. Finally, strengthening role positioning and work flexibility can ease teachers' role conflict. These strategies offer practical guidance for educational administrators seeking to promote the effective integration of AI technologies in middle school education.
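The mediating relationship reported in the Results above (organizational support acting on technostress through factors such as digital awareness) can be approximated, in its simplest form, as a pair of OLS regressions; the statsmodels sketch below uses simulated variables and is not the study's full path model or survey instrument.

```python
# Minimal mediation-style path estimate: support -> awareness -> technostress.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 301  # same sample size as the survey
support = rng.normal(size=n)
awareness = 0.5 * support + rng.normal(scale=0.8, size=n)               # mediator
technostress = -0.4 * awareness - 0.1 * support + rng.normal(scale=0.9, size=n)
df = pd.DataFrame({"support": support, "awareness": awareness,
                   "technostress": technostress})

# Path a: organizational support -> digital awareness
a = smf.ols("awareness ~ support", df).fit().params["support"]
# Path b and direct effect c': awareness and support -> technostress
m = smf.ols("technostress ~ awareness + support", df).fit()
b, c_prime = m.params["awareness"], m.params["support"]

print(f"indirect effect a*b = {a * b:.3f}, direct effect c' = {c_prime:.3f}")
```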
Causal reasoning is essential for understanding relationships and guiding decision-making across diverse applications, as it allows for the identification of cause-and-effect relationships between variables. By uncovering the underlying processes that drive these relationships, causal reasoning enables more accurate predictions, controlled interventions, and the ability to distinguish genuine causal effects from mere correlations in complex systems. In oil field management, where interactions between injector and producer wells are inherently dynamic, it is vital to uncover causal connections to optimize recovery and minimize waste. Since controlled experiments are impractical in this setting, we must rely solely on observed data. In this paper, we develop an innovative causality-inspired framework that leverages domain expertise for causal feature learning, enabling robust connectivity estimation. We address the challenges posed by confounding factors, latency in system responses, and the complexity of inter-well interactions, all of which complicate causal analysis. First, we frame the problem through a causal lens and propose a novel framework that generates pairwise features driven by causal theory. This method captures meaningful representations of relationships within the oil field system. By constructing independent pairwise feature representations, our method implicitly accounts for confounding signals and enhances the reliability of connectivity estimation. Furthermore, our approach requires only limited context data to train machine learning models that estimate the connectivity probability between injectors and producers. We first validate our methodology through experiments on synthetic and semi-synthetic datasets, ensuring its robustness across varied scenarios. We then apply it to the complex Brazilian Pre-Salt oil fields using public synthetic and real-world data. Our results show that the proposed method effectively identifies injector-producer connectivity while maintaining rapid training times, enabling scalability and providing an interpretable approach for complex dynamic systems grounded in causal theory. While previous projects have employed causal methods in the oil field context, to the best of our knowledge, this is the first work to systematically formulate the problem through causal reasoning that explicitly accounts for relevant confounders and to develop an approach that effectively addresses these challenges and facilitates the discovery of inter-well connections within an oil field.
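As a minimal sketch of the pairwise-feature idea described above, the snippet below summarizes each injector-producer pair with lagged cross-correlations and trains a classifier that outputs a connectivity probability; the features, lags, synthetic signals, and labels are illustrative and do not reproduce the paper's causal feature construction or confounder handling.

```python
# Pairwise lagged features for injector-producer connectivity estimation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def pair_features(injector: np.ndarray, producer: np.ndarray, max_lag: int = 5) -> np.ndarray:
    """Lagged cross-correlations between an injector and a producer series."""
    inj = (injector - injector.mean()) / (injector.std() + 1e-9)
    prod = (producer - producer.mean()) / (producer.std() + 1e-9)
    feats = []
    for lag in range(1, max_lag + 1):
        # correlation when the injector leads the producer by `lag` steps
        feats.append(np.corrcoef(inj[:-lag], prod[lag:])[0, 1])
    return np.array(feats)

# Synthetic pairs: connected producers respond to the injector with a delay
rng = np.random.default_rng(0)
X, y = [], []
for _ in range(200):
    inj = rng.normal(size=300)
    connected = int(rng.integers(0, 2))
    noise = rng.normal(scale=0.5, size=300)
    prod = np.roll(inj, 3) * 0.8 + noise if connected else rng.normal(size=300)
    X.append(pair_features(inj, prod))
    y.append(connected)

clf = RandomForestClassifier(random_state=0).fit(np.array(X), np.array(y))
print("connectivity probability:", clf.predict_proba(np.array(X[:1]))[0, 1])
```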
Camouflaged object detection (COD) aims to identify objects that are visually indistinguishable from their surrounding background, making it challenging to precisely delineate the boundaries between objects and backgrounds in camouflaged environments. In recent years, numerous studies have leveraged frequency-domain information to aid camouflaged target detection. However, current frequency-domain methods cannot effectively capture the boundary information between camouflaged objects and the background. To address this limitation, we propose a Laplace transform-guided camouflaged object detection network called the Self-Correlation Cross Relation Network (SeCoCR). In this framework, the Laplace-transformed camouflaged target is treated as high-frequency information, while the original image serves as low-frequency information. These are then separately input into our proposed Self-Relation Attention module to extract both local and global features. Within the Self-Relation Attention module, key semantic information is retained in the low-frequency data, and crucial boundary information is preserved in the high-frequency data. Furthermore, we design a multi-scale attention mechanism for low- and high-frequency information, Low-High Mix Fusion, to effectively integrate essential information from both frequencies for camouflaged object detection. Comprehensive experiments on three COD benchmark datasets demonstrate that our approach significantly surpasses existing state-of-the-art frequency-domain-assisted methods.
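A schematic of the low/high-frequency split described above is sketched below, using a fixed Laplacian kernel as a stand-in for the Laplace-based high-frequency branch; the kernel choice and the use of the raw image as the low-frequency input are assumptions for illustration, not the SeCoCR implementation.

```python
# Split an image batch into a low-frequency (semantic) branch and a
# high-frequency (boundary) branch via depthwise Laplacian filtering.
import torch
import torch.nn.functional as F

def split_frequencies(img: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """img: (B, C, H, W) -> (low_freq, high_freq)."""
    lap = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)
    c = img.shape[1]
    # depthwise Laplacian filtering extracts edge/boundary responses per channel
    high = F.conv2d(img, lap.repeat(c, 1, 1, 1), padding=1, groups=c)
    low = img  # the original image acts as the low-frequency (semantic) input
    return low, high

if __name__ == "__main__":
    x = torch.rand(1, 3, 64, 64)
    low, high = split_frequencies(x)
    print(low.shape, high.shape)  # both torch.Size([1, 3, 64, 64])
```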
Introduction: Privacy has become a significant concern in the digital world, especially regarding the personal data collected by websites and other service providers on the World Wide Web. One of the principal approaches to enabling individuals to control their privacy is the privacy policy document, which contains vital information on this matter. Publishing a privacy policy is required by regulation in most Western countries. However, the privacy policy document is an unstructured natural-language document, usually phrased in legal language, that changes rapidly, making it relatively hard to understand and almost always neglected by humans.
Methods: This research proposes a novel methodology that receives an unstructured privacy policy text and automatically structures it into predefined parameters. The methodology is based on a two-layer artificial intelligence (AI) process.
Results: In an empirical study that included 49 actual privacy policies from different websites, we demonstrated an average F1-score above 0.8, with five of the six parameters achieving very high classification accuracy.
Discussion: This methodology can serve both humans and AI agents by addressing issues such as cognitive burden, non-standard formalizations, cognitive laziness, and the evolution of the document over time, all of which deter the use of the privacy policy as a resource. The study addresses a critical gap between current regulations, which aim to enhance privacy, and humans' ability to benefit from the mandatorily published privacy policy.
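The classification step implied by the Methods above (mapping privacy policy text to predefined parameters and scoring it with F1) can be sketched as a plain TF-IDF plus logistic-regression pipeline; the toy segments, parameter labels, and single-layer setup are illustrative and do not reproduce the paper's two-layer AI process.

```python
# Classify policy text segments into parameter categories and report macro F1.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

segments = [
    "We share your personal data with third-party advertising partners.",
    "Data is retained for twelve months and then deleted.",
    "You may request deletion of your account information at any time.",
    "Cookies are used to track browsing behaviour across our sites.",
] * 25  # toy corpus; real policies would be segmented automatically
labels = ["sharing", "retention", "user_rights", "tracking"] * 25

X_tr, X_te, y_tr, y_te = train_test_split(segments, labels, test_size=0.3, random_state=0)
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(X_tr, y_tr)
print("macro F1:", f1_score(y_te, clf.predict(X_te), average="macro"))
```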

