Distributed energy systems increasingly consist of heterogeneous assets and organizations that must exchange operational data while preserving interoperability, security, and regulatory compliance. Existing integration solutions often rely on syntactic adapters or centralized data hubs, which scale poorly and offer limited transparency or governance. This paper presents a metadata-driven federated monitoring architecture that integrates ontology-based metadata federation, event-driven microservices, and governance-aware provenance tracking to enable secure, scalable, and auditable data sharing across distributed energy infrastructures. The proposed system models all assets and data streams through a unified semantic graph, aligning heterogeneous schemas via automated ontology matching and combined lexical-structural similarity scoring. A microservices pipeline ingests multi-protocol data (OPC-UA, MQTT, REST), applies stream analytics for anomaly detection, and enforces access and compliance policies at the metadata layer. A Web-based interface allows operators to issue GraphQL queries, visualize distributed assets, and monitor real-time alerts linked to provenance records. A prototype implementation demonstrates operational-scale efficiency, achieving low-latency response ($\leq 540$ ms for hybrid metadata-telemetry queries over 10,000 assets), near-linear scalability (~4.5% CPU growth per added node), and high governance accuracy (precision 0.90, recall 0.95, median detection time 1.6 s) while maintaining minimal overhead (<8% added latency). These results indicate that the proposed metadata-driven federation delivers technical performance and governance reliability beyond existing Web-based integration frameworks, and that metadata federation can be deployed at operational scale while providing explainable compliance and trustworthy data sharing across organizational boundaries.
This research advances the state of the art in Web-based system engineering by combining semantic modeling, distributed processing, and security governance into a single deployable framework. Beyond energy systems, the approach offers a foundation for interoperable and auditable monitoring in other critical cyber-physical domains such as industrial IoT, urban infrastructure, and healthcare telemetry.
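The combined lexical-structural similarity scoring used for schema alignment can be illustrated with a minimal sketch. The specific similarity functions, the neighborhood-overlap measure, and the 0.6/0.4 weighting below are illustrative assumptions, not the paper's exact scoring model:

```python
from difflib import SequenceMatcher

def lexical_sim(a: str, b: str) -> float:
    # Normalized edit-based similarity of two element names.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def structural_sim(neigh_a: set, neigh_b: set) -> float:
    # Jaccard overlap of the elements' graph neighborhoods.
    if not neigh_a and not neigh_b:
        return 0.0
    return len(neigh_a & neigh_b) / len(neigh_a | neigh_b)

def combined_score(a, b, neigh_a, neigh_b, w_lex=0.6, w_struct=0.4):
    # Weighted combination of lexical and structural evidence.
    return w_lex * lexical_sim(a, b) + w_struct * structural_sim(neigh_a, neigh_b)

# Example: aligning a field across two vendor schemas (hypothetical names).
score = combined_score("ActivePower", "active_power_kw",
                       {"Meter", "Phase"}, {"Meter", "Phase", "Unit"})
```

A match would then be accepted when the combined score exceeds a calibrated threshold.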
Title: A Metadata-Driven Architecture for Federated Data Asset Management and Visualization in Energy Monitoring Networks. Authors: Qing Rao; Jianxia Wu; Shihong Chen; Zhongkai Pan; Qing Lei; Yinfeng Liu; Yangjinglan Feng; Xianping Jia. DOI: 10.13052/jwe1540-9589.2522. Journal of Web Engineering, 25(2), pp. 153-186. Pub Date: 2026-03-06.
Pub Date: 2026-03-06. DOI: 10.13052/jwe1540-9589.2525
Qing Rao;Yunhao Yu;Yizhou Fu;Boda Zhang;Shihong Chen;Jianxia Wu;Zhongkai Pan;Qing Lei
Ensuring data-flow integrity and rapid threat containment in renewable-integrated, distributed energy systems requires monitoring solutions that are technically rigorous yet lightweight in operation. This paper presents a service-oriented web framework for real-time data-flow tracing and threat propagation analysis in heterogeneous industrial control and energy networks. The framework integrates lightweight provenance tokens embedded in event streams, an incrementally maintained lineage graph with probability-weighted edges, and propagation-aware risk indicators that drive adaptive response orchestration through open web APIs. A progressive web dashboard provides sub-second visualization of dynamic topologies, risk heat maps, and operator controls. Implemented on a Kafka/Flink streaming backbone with a graph database and deployed in an eight-node Kubernetes testbed emulating substations, gateways, and adversarial nodes using OPC UA, MQTT, and REST, the system achieved tracing coverage of $0.96\pm 0.02$ and fidelity of $0.92\pm 0.03$, with forward propagation prediction reaching precision 0.91 and recall 0.88, outperforming static-topology baselines. Adaptive containment reduced the flow reproduction factor from 1.42 to 0.64, achieved a median containment efficacy of 0.71, and stabilized risk trajectories within two minutes, while operational cost remained low with payload expansion under 12%, CPU overhead below 4%, and service availability above 0.99 for critical assets. User studies showed 38% faster incident response and higher comprehension and confidence compared with static log viewers. These results demonstrate that modern web-engineering practices such as microservices, event-driven streaming, and progressive web interfaces can enable practical, real-time cyber defense for distributed energy infrastructures by bridging static security guidelines with deployable, adaptive situational awareness and containment.
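An incrementally maintained lineage graph with probability-weighted edges can be sketched as follows. The class name, the frequency-based edge probabilities, and the bounded-depth forward-risk indicator are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

class LineageGraph:
    """Incrementally maintained lineage graph with probability-weighted edges."""

    def __init__(self):
        self.attempts = defaultdict(int)   # (src, dst) -> observed transfers
        self.tainted = defaultdict(int)    # (src, dst) -> transfers carrying tainted data

    def observe(self, src, dst, tainted: bool):
        # Update the graph incrementally as each provenance token is seen.
        self.attempts[(src, dst)] += 1
        if tainted:
            self.tainted[(src, dst)] += 1

    def edge_prob(self, src, dst) -> float:
        # Empirical probability that data flowing src -> dst is tainted.
        a = self.attempts[(src, dst)]
        return self.tainted[(src, dst)] / a if a else 0.0

    def forward_risk(self, node, depth=3) -> float:
        # Propagation-aware risk: max-probability path over a bounded depth.
        if depth == 0:
            return 0.0
        best = 0.0
        for (s, d) in list(self.attempts):
            if s == node:
                p = self.edge_prob(s, d)
                best = max(best, p, p * self.forward_risk(d, depth - 1))
        return best
```

A containment policy could then throttle or isolate nodes whose forward risk exceeds a threshold.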
Title: Service-Oriented Web Framework for Real-time Data Flow Tracing and Threat Propagation Analysis in Distributed Energy Systems. Journal of Web Engineering, 25(2), pp. 249-282.
Pub Date: 2026-03-06. DOI: 10.13052/jwe1540-9589.2526
Jose Garcia-Alonso;Majid Haghparast;Tommi Mikkonen;Juan Manuel Murillo Rodríguez;Vlad Stirbu
Quantum software engineering has recently gained considerable attention. Multiple traditional software engineering events have introduced a quantum software track, a co-located quantum-related workshop, or another side event, indicating that quantum software is becoming a popular research topic, with more and more software engineering researchers contributing to its evolution. In this paper, we address software engineering research that aims to solve problems that emerge when quantum programs are used in industry domains. The paper is based on the keynote at the IEEE Symposium on Quantum Software: Quantum Software Engineering 2025, held in Helsinki, Finland, in the summer of 2025. In particular, we address the state of research in quantum software engineering, its novel aspects, and its connections to other branches of software engineering. Furthermore, in light of this research, we assess the maturity of quantum software engineering against industry expectations.
Title: Quantum Software Engineering: Something Old, Something New; Something Borrowed, Something Blue. Journal of Web Engineering, 25(2), pp. 283-298.
Pub Date: 2026-03-06. DOI: 10.13052/jwe1540-9589.2524
Zhao Na;Mao Yanying
Web service traffic forecasting is vital for dynamic resource scaling, load balancing, and anomaly detection, but remains challenging due to frequent large-scale fluctuations caused by heterogeneous user behaviors. Traditional time-series models and recent deep neural networks have made progress by capturing temporal patterns, yet they largely overlook latent causal relationships between services that can significantly influence traffic dynamics. In this paper, we propose a novel causal cross-embedded spatio-temporal LSTM (CEST-LSTM) architecture that integrates spatio-temporal modelling with a causal inference mechanism to improve web traffic prediction. The model consists of a spatio-temporal LSTM branch for capturing temporal dependencies across services and a causal branch that leverages convergent cross mapping-based cross-embedding to uncover and incorporate latent inter-service causal influences. A cross-embedding fusion mechanism seamlessly combines these causal features with spatio-temporal representations. On real-world datasets (e.g., Microsoft Azure and Alibaba Cloud), CEST-LSTM achieves a variance-explained prediction accuracy of approximately 93%, surpassing state-of-the-art baselines such as temporal graph convolutional networks (T-GCN) and spatio-temporal attention GCNs (STA-GCN). Comparative experiments and ablation studies confirm that the causal branch consistently improves forecasting accuracy; for example, removing the causal module reduces accuracy by several percentage points. These results demonstrate that integrating latent causal relationship modelling into spatio-temporal neural networks yields substantial improvements in web traffic prediction, offering a promising direction for robust and interpretable forecasting in complex web systems.
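Convergent cross mapping (CCM), the basis of the causal branch, can be sketched in a few lines: a delay embedding of one series forms a shadow manifold, and nearest neighbors on that manifold are used to cross-map the other series. The embedding parameters, the exponential neighbor weighting, and the `ccm_skill` helper are simplified assumptions for illustration, not the CEST-LSTM code:

```python
import numpy as np

def delay_embed(series, E=3, tau=1):
    # Shadow manifold: row t is [y_t, y_{t-tau}, ..., y_{t-(E-1)tau}].
    series = np.asarray(series)
    n = len(series) - (E - 1) * tau
    cols = [series[(E - 1) * tau - i * tau : (E - 1) * tau - i * tau + n]
            for i in range(E)]
    return np.column_stack(cols)

def ccm_skill(x, y, E=3, tau=1):
    # Cross-map x from the shadow manifold of y; high skill suggests
    # that x causally influences y (Sugihara-style CCM).
    x = np.asarray(x)
    My = delay_embed(y, E, tau)
    xs = x[(E - 1) * tau:]              # x values aligned with manifold rows
    preds = np.empty(len(My))
    for i in range(len(My)):
        d = np.linalg.norm(My - My[i], axis=1)
        d[i] = np.inf                   # exclude the query point itself
        nn = np.argsort(d)[: E + 1]     # E+1 nearest neighbours
        w = np.exp(-d[nn] / max(d[nn][0], 1e-12))
        preds[i] = np.dot(w, xs[nn]) / np.sum(w)
    return float(np.corrcoef(xs, preds)[0, 1])
```

In the full model, such cross-mapping scores would gate which inter-service embeddings are fused with the spatio-temporal LSTM features.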
Title: Causal Cross-Embedded Spatio-Temporal LSTM for Web Traffic Prediction. Journal of Web Engineering, 25(2), pp. 215-248.
Pub Date: 2026-03-06. DOI: 10.13052/jwe1540-9589.2521
OkHwan Bae;Chung-Pyo Hong
The proliferation of immersive 3D web applications, from e-commerce product viewers to virtual real estate tours, has created a critical need for high-quality, real-time rendering directly within the browser. Neural radiance fields (NeRF) offer unprecedented photorealism but are hamstrung by immense computational demands, making their deployment on resource-constrained web platforms a significant web engineering challenge. The core bottleneck is NeRF's reliance on dense point sampling for volume rendering. This paper introduces a novel framework that directly tackles this challenge through a pioneering adaptive sampling technique powered by reinforcement learning. We name this framework PPO-NeRF. It integrates the rapid training capabilities of Instant-NGP's hash encoding with an agent trained via proximal policy optimization (PPO). This agent learns to adaptively predict the minimal set of crucial sample points along each camera ray, dynamically pruning computationally redundant samples to optimize rendering specifically for web-based, real-time scenarios. Experimental results demonstrate that PPO-NeRF significantly lowers the barrier to web deployment. Compared to the original NeRF, it reduces training time by approximately 73.63%, enabling faster content iteration for web developers. More critically, our adaptive sampling slashes rendering time by approximately 44.7% and VRAM usage by approximately 29.9%, while maintaining comparable visual fidelity. These gains directly translate to faster load times, smoother user interaction, and broader device compatibility. In conclusion, PPO-NeRF provides a practical solution to NeRF's long-standing performance bottlenecks, establishing a viable pathway for deploying high-fidelity, interactive 3D experiences at scale across the modern web.
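The effect of pruning redundant samples can be illustrated with standard volume-rendering weights along a single ray. Here a simple top-k weight heuristic stands in for the learned PPO policy; the function names and the keep fraction are illustrative assumptions:

```python
import numpy as np

def composite(rgbs, sigmas, deltas):
    # Standard NeRF volume rendering: alpha compositing along one ray.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans
    return weights @ rgbs, weights

def prune_samples(sigmas, deltas, keep_frac=0.5):
    # Stand-in for the learned policy: keep the samples carrying the most
    # rendering weight and drop the computationally redundant ones.
    _, weights = composite(np.zeros((len(sigmas), 3)), sigmas, deltas)
    k = max(1, int(len(sigmas) * keep_frac))
    return np.sort(np.argsort(weights)[-k:])
```

On a ray whose density is concentrated in a thin surface region, the pruned index set collapses onto that region, which is what lets the agent skip most of the network evaluations.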
Title: Adaptive Sampling for Real-Time Neural View Synthesis on the Web with Reinforcement Learning. Journal of Web Engineering, 25(2), pp. 135-152.
Pub Date: 2026-01-01. DOI: 10.13052/jwe1540-9589.2514
Joyeon Park;Jinah Seo;Do. KyoungHwa;Soo Yong Park
In this study, we propose an on-chain-based ML ownership proof system (PK-PoMLO), which combines a digital signature and a blockchain timestamp value to generate a certificate of ownership that is publicly disclosed on-chain, enabling a strong claim of ML ownership. First, the owner creates a certificate signed with their private key using the hash value of the ML model and a structured message, and includes a timestamp. This is then used to generate an ML ownership certificate and registered on-chain. The owner also uses their private key to create a standard signature value as a 128-bit mark and embeds it in the ML model. Anyone wishing to verify ML ownership then uses the owner's public key to compare the hash value of the on-chain ML ownership certificate with the timestamp value. In other words, the authenticity of the owner can be verified by testing whether the bit error rate (BER) between the mark extracted from the ML ownership certificate and the internally stored mark string satisfies BER $\leq \tau$, and by checking it against the signature value of the ML ownership certificate. To verify the results of this study, we implement and evaluate a prototype on the MNIST MLP and the Ethereum Sepolia test network.
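The BER acceptance test on the 128-bit mark can be sketched directly. The helper names and the default threshold value are illustrative; in the full scheme the signature over the on-chain certificate must also verify:

```python
def bit_error_rate(mark_a: bytes, mark_b: bytes) -> float:
    # Fraction of differing bits between the extracted and stored marks.
    assert len(mark_a) == len(mark_b)
    diff = sum(bin(a ^ b).count("1") for a, b in zip(mark_a, mark_b))
    return diff / (8 * len(mark_a))

def ownership_matches(extracted: bytes, stored: bytes, tau: float = 0.05) -> bool:
    # Accept the mark if BER <= tau; signature verification against the
    # on-chain ML ownership certificate (omitted here) must also pass.
    return bit_error_rate(extracted, stored) <= tau
```

For a 128-bit (16-byte) mark, a single flipped bit gives a BER of 1/128, comfortably inside a 0.05 tolerance.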
Title: PK-PoMLO: Public Key Proof of ML Ownership System. Journal of Web Engineering, 25(1), pp. 51-66.
Pub Date: 2026-01-01. DOI: 10.13052/jwe1540-9589.2511
Dong Bin Choi;Yunhee Kang;Young B. Park
Service-Oriented Architecture (SOA) structures applications into collections of modular, independent, and reusable services. We propose an SOA-based intelligent service agent framework for building AI applications that decomposes complex tasks into independent functional units. In the framework, the agent operates as an intelligent executor that dynamically orchestrates and invokes diverse services and tools to achieve its goals. The agent is exposed as a self-contained service with a well-defined API, allowing external applications to invoke it directly. By instrumenting requests and responses at both the service and agent layers, the framework enables tracing of the agent's capabilities, performance, and decision-making. We present the design of an operational scheme for the agent with DID handling, verifiable credentials (VC), and verifiable presentations (VP). The agents collaborate through a shared blackboard-based workspace to handle tasks and reach a goal. Finally, we demonstrate its feasibility through a proof-of-concept (PoC) for an Agentic AI service architecture. This proof-of-concept, structured across Phase 1 (discovery, verification, and scoped authorization) and Phase 2 (problem posting and blackboard-mediated collaboration), demonstrates that DID-backed credentialing can securely support multi-agent execution under a least-privilege operational model.
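Blackboard-mediated collaboration can be sketched as a shared workspace that agents poll for problems matching their skill. The class names and entry schema below are illustrative assumptions, not the framework's API:

```python
class Blackboard:
    """Shared workspace: agents post problems and solutions; others react."""

    def __init__(self):
        self.entries = []

    def post(self, author, kind, payload):
        self.entries.append({"author": author, "kind": kind, "payload": payload})

    def pending(self, kind):
        # Entries of the given kind whose task has no posted solution yet.
        solved = {e["payload"]["task"] for e in self.entries
                  if e["kind"] == "solution"}
        return [e for e in self.entries
                if e["kind"] == kind and e["payload"]["task"] not in solved]

class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill  # skill: task type this agent handles

    def step(self, board):
        # Pick up any unsolved problem matching this agent's skill.
        for entry in board.pending("problem"):
            if entry["payload"]["type"] == self.skill:
                board.post(self.name, "solution",
                           {"task": entry["payload"]["task"], "result": "done"})
```

In the PoC, the Phase 1 credential checks (DID, VC, VP) would gate which agents may post to or read from the board.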
Title: Agentic AI Service Architecture Based on SOA. Journal of Web Engineering, 25(1), pp. 1-18.
Pub Date: 2026-01-01. DOI: 10.13052/jwe1540-9589.2515
Ziyang Ji;Jie Zhang;Yuji Dong;Ka Lok Man;Steven Guan;Mucheol Kim
Effective management of private keys is crucial to ensure the security and ownership of users' data and digital assets in the Web3 environment. However, existing solutions often fail to adequately address private key management from the user's perspective. Private key leakage and loss incidents occur frequently, resulting in significant losses of digital assets. Moreover, the conventional approach of revoking both the private and public keys after a leakage or loss incident is inconvenient in Web3, where the public key serves as the user's wallet address or digital identity. To tackle the issue of user-side private key management in Web3, this paper presents KeyShield, a leakage-and-loss-resilient private key protection scheme. KeyShield divides the user's private key into three shares, securely stored across a primary device and a secondary device owned by the user, and a third storage module owned by the user or a semi-trusted service provider. For daily use of the private key, the user only needs to connect the primary and secondary devices. In the event of a leakage or loss, such as device theft or attack, an update process is triggered to update the three shares, immediately invalidating the leaked or lost share while leaving the public key unchanged. As a demonstration of KeyShield, we developed KeyShieldECC, accessible on both Android and iOS platforms, for managing Elliptic Curve Cryptography (ECC) private keys. The testing results show that, for a 256-bit ECC private key, daily use takes only 0.05 seconds and an update takes 0.25 to 0.3 seconds on an ordinary smartphone.
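A generic 2-of-3 threshold scheme with proactive share refresh illustrates how two shares suffice for daily use while an update invalidates old shares without changing the secret (and hence the public key). Shamir sharing over a prime field is used purely as an illustration; KeyShield's actual share construction may differ:

```python
import secrets

P = 2**255 - 19  # prime field modulus (the Curve25519 prime, used here for illustration)

def split_2_of_3(key: int):
    # Shamir 2-of-3: shares are points on a random degree-1 polynomial
    # f(x) = key + a*x mod P; any two shares reconstruct, one reveals nothing.
    a = secrets.randbelow(P)
    return [(x, (key + a * x) % P) for x in (1, 2, 3)]

def reconstruct(s1, s2) -> int:
    # Lagrange interpolation of f at x = 0 from two shares.
    (x1, y1), (x2, y2) = s1, s2
    l1 = (-x2) * pow(x1 - x2, -1, P)
    l2 = (-x1) * pow(x2 - x1, -1, P)
    return (y1 * l1 + y2 * l2) % P

def refresh(shares):
    # Proactive update: add a fresh zero-secret polynomial r*x, so previously
    # leaked shares become useless while f(0) -- the key -- is unchanged.
    r = secrets.randbelow(P)
    return [(x, (y + r * x) % P) for x, y in shares]
```

After a refresh, mixing one old share with one new share no longer yields the key, which matches the described invalidation of a leaked or lost share.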
"KeyShield: Leakage-and-Loss-Resilient Private Key Protection for Web3." Journal of Web Engineering, vol. 25, no. 1, pp. 67-102, 2026. DOI: https://doi.org/10.13052/jwe1540-9589.2515. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11370645
Pub Date: 2026-01-01. DOI: 10.13052/jwe1540-9589.2516
Sundara Srivathsan M.;Lighittha P. R.;Prithivraj S.;Suganya Ramamoorthy;Vijayan Sugumaran
Web3 platforms face a critical challenge: once unsafe content is minted on-chain, it becomes immutable and irrevocable. Traditional NSFW classifiers operate off-chain without cryptographic guarantees, leaving blockchain ecosystems vulnerable to harmful content. We present VisionGuard, a unified moderation framework that integrates cost-sensitive AI decision-making with blockchain-based enforcement. Our system combines calibrated NSFW classification, abstention-based triage for uncertain cases, perceptual hashing for near-duplicate detection, and on-chain k-of-n quorum attestation using EIP-712 signatures. We establish formal guarantees for: (i) Bayes-optimal cost-sensitive thresholds minimizing asymmetric error costs, (ii) optimal abstention intervals for human review, (iii) monotone false-negative reduction under classifier-pHash fusion, (iv) quorum compromise bounds, and (v) end-to-end unsafe-mint probability. Empirical validation on a zero-shot NSFW task demonstrates 82% accuracy (AUC = 0.88), with the Bayes-optimal threshold $\tau^{*}=0.1$ reducing expected cost to 27,520 versus 54,942 at the F1-optimal threshold, a 50% improvement. Calibrated abstention further lowers harm (cost = 10,649.5), while a 3-of-5 quorum with oracle compromise probability $p=0.1$ yields break probability $P_{\text{break}} < 1\%$. Together, VisionGuard bridges decision theory, adversarial robustness, and cryptographic enforcement, providing the first provably safe AI moderation pathway for blockchain content.
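Two of the reported quantities can be checked against standard textbook formulas (the paper's exact cost model may differ; the function names below are illustrative): the Bayes-optimal threshold for asymmetric error costs, and the probability that a k-of-n quorum breaks when each oracle is independently compromised with probability p.

```python
# Sketch: Bayes-optimal cost-sensitive threshold and k-of-n quorum
# break probability, under standard independence assumptions.
from math import comb

def bayes_threshold(c_fp: float, c_fn: float) -> float:
    """Flag content as unsafe when P(unsafe | x) >= c_fp / (c_fp + c_fn)."""
    return c_fp / (c_fp + c_fn)

def quorum_break_probability(k: int, n: int, p: float) -> float:
    """P(at least k of n independent oracles are compromised) --
    the binomial upper tail."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))
```

Under this reading, the reported $\tau^{*}=0.1$ corresponds to a false negative costing nine times a false positive ($0.1 = 1/(1+9)$), and a 3-of-5 quorum at $p=0.1$ gives a break probability of about 0.86%, consistent with the abstract's $P_{\text{break}} < 1\%$.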
"VisionGuard: Cost-Sensitive AI Attestation with Quorum-Verified Blockchain Enforcement." Journal of Web Engineering, vol. 25, no. 1, pp. 103-134, 2026. DOI: https://doi.org/10.13052/jwe1540-9589.2516. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11370663
Literary Sinitic text from the Joseon dynasty is challenging to understand because it lacks explicit word separators, which creates significant semantic ambiguity. To address this, both sentence segmentation and named entity recognition (NER) are essential. We propose a Transformer-based analyzer that performs these two tasks simultaneously. Trained on a labeled corpus from the Seungjeongwon Ilgi, our model effectively segments sentences and identifies named entities, thereby significantly improving the understanding of sentence structure and overall context.
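The abstract does not describe the model's output scheme; one common way to perform segmentation and NER "simultaneously" is to predict a joint label per character that pairs a sentence-boundary tag with a BIO entity tag, so a single output layer covers both tasks. The tag inventory below is a hypothetical illustration, not the paper's label set.

```python
# Sketch: joint per-character labels combining sentence segmentation
# with BIO-style NER (illustrative tag sets, not the paper's).
SEG_TAGS = ("B", "I")  # B = sentence start, I = inside sentence
NER_TAGS = ("O", "B-PER", "I-PER", "B-LOC", "I-LOC")  # illustrative entities

JOINT_LABELS = [f"{s}|{e}" for s in SEG_TAGS for e in NER_TAGS]

def decode_sentences(chars, labels):
    """Split an unsegmented character stream into sentences using the
    boundary component of each joint label."""
    sentences, current = [], []
    for ch, lab in zip(chars, labels):
        seg = lab.split("|")[0]
        if seg == "B" and current:       # a new sentence begins: flush
            sentences.append("".join(current))
            current = []
        current.append(ch)
    if current:
        sentences.append("".join(current))
    return sentences
```

With two boundary tags and five entity tags this yields a 10-way classification per character; a single softmax over `JOINT_LABELS` lets the two tasks share all Transformer parameters.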
"Joint Models for Sentence Segmentation and Named Entity Recognition in Literary Sinitic Text." DongNyeong Heo;Yunhee Kang;Chul Heo;Heeyoul Choi;Kyounghun Jung. Journal of Web Engineering, vol. 25, no. 1, pp. 19-32, 2026. DOI: https://doi.org/10.13052/jwe1540-9589.2512. Open access: https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=11370662