
Latest publications from Int. J. Web Eng. Technol.

A framework for implementing micro frontend architecture
Pub Date : 2021-12-01 DOI: 10.7753/ijcatr1012.1002
Sylvester Timona Wanjala
Web applications are an indispensable part of any enterprise information system. In the recent past, we have seen maturity in technologies that enable the separation of frontend and backend, with the backend adopting the microservices architecture style. The frontend has maintained the traditional monolithic architecture. Micro frontends have emerged as a solution to the conventional monolithic frontend and have received much attention. Still, so far, there is no straightforward approach to implementation that satisfies the different practical requirements of a modern web application. This paper proposes an architectural pattern for implementing micro frontends to address challenges experienced in earlier implementations, such as inconsistent user experience, managing security, and complexity. We developed two simple web applications, one using the proposed architectural pattern and another using the monolithic architecture, and compared their performance using Google Lighthouse. The landing page of the application developed using micro frontend architecture showed a higher performance score of 99, against 86 for a similar page in the application developed using the monolithic architecture. The proposed framework handled layout consistency particularly well, with a Cumulative Layout Shift of 0. Breaking down the frontend with lazy loading of micro frontends improved the web application's performance, while the proposed framework reduced development complexity. However, more research is needed to provide seamless integration of micro frontends into the main application with support for loading shared libraries in the main application; this would significantly reduce the payload size.
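For readers who want to reproduce this kind of comparison, the sketch below shows how the Lighthouse CLI could be scripted to collect the performance score and Cumulative Layout Shift for two deployments of the same page. It is a minimal, hedged example: the URLs are placeholders rather than the applications from the study, the CLI must be installed separately (e.g. via npm), and the JSON field names reflect the common Lighthouse report format rather than anything specified in the paper.

```python
"""Minimal sketch (not the paper's tooling): compare Lighthouse performance
scores and Cumulative Layout Shift for two deployments of the same page.
Assumes the Lighthouse CLI and Chrome are installed; the URLs below are
placeholders, not the applications from the study."""
import json
import subprocess

PAGES = {
    "micro-frontend": "http://localhost:3000/",   # hypothetical deployment URLs
    "monolith": "http://localhost:4000/",
}

def audit(name: str, url: str) -> None:
    report_path = f"{name}.report.json"
    # Run a headless, performance-only Lighthouse audit and write a JSON report.
    subprocess.run(
        ["lighthouse", url,
         "--only-categories=performance",
         "--output=json",
         f"--output-path={report_path}",
         "--chrome-flags=--headless"],
        check=True,
    )
    with open(report_path) as f:
        report = json.load(f)
    score = report["categories"]["performance"]["score"] * 100   # 0-100 scale
    cls = report["audits"]["cumulative-layout-shift"]["numericValue"]
    print(f"{name}: performance={score:.0f}, CLS={cls:.3f}")

if __name__ == "__main__":
    for name, url in PAGES.items():
        audit(name, url)
```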
Citations: 1
A knowledge governance framework for open innovation projects
Pub Date : 2020-09-16 DOI: 10.1504/IJWET.2020.109731
R. Bernsteiner, Thomas Dilger, Christian Ploder
Markets tend to develop ever faster, with ever-growing requirements on products and services. This forces enterprises to cooperate across organisational borders with partners such as suppliers, customers, or even competitors to cope with these challenges. Such collaboration leads to knowledge flows between all partners. Being too open and sharing too much information can cause knowledge leakage. The central aim of this research is to provide a framework for integrating knowledge governance mechanisms into open innovation projects and to ensure its applicability in practice. Insights from the field have been integrated into the framework by interviewing ten experts with practical experience of open innovation projects. The interviews were conducted in 2019 and analysed afterwards. Based on the scientific literature and insights from the field, a knowledge governance framework to guide open innovation projects has been developed.
Citations: 1
A model-driven approach for the verification of an adaptive service composition
Pub Date : 2020-06-08 DOI: 10.1504/ijwet.2020.107678
S. Zatout, Mahmoud Boufaïda, M. Benabdelhafid, M. Berkane
The development of web service compositions is a complex task that needs coherent mechanisms in order to maintain the quality of the provided business process and to satisfy user needs. This paper proposes a development process for an adaptable composed web service and focuses mainly on the reliability and performance properties. It explores the model-driven architecture transformation technique in order to formally model the whole service orchestration using the timed coloured Petri net formalism. The CPN Tools software offers, among others, the ASK-CTL computational tree logic, the model checking technique and several monitors, which will be exploited to describe and verify different properties at design time. They will also be used via the Access/CPN library in order to reason about the reconfiguration technique at runtime. An example of an identity card management process is given to prove the feasibility of the proposed solution.
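The paper's model is a timed coloured Petri net verified with CPN Tools and ASK-CTL; as a much smaller illustration of the underlying idea, the sketch below simulates an ordinary place/transition net (no colours, no time) and exhaustively explores its markings to detect dead states. The three-place net is invented for illustration and is not the authors' orchestration model.

```python
"""Illustrative sketch only: an ordinary place/transition net (no colours, no
time) with a brute-force state-space search, to show the kind of reachability
and deadlock check that CPN Tools performs on a far richer timed coloured model.
The tiny net below (request -> invoke -> reply) is invented for illustration."""
from collections import deque

# Each transition: (name, {place: tokens consumed}, {place: tokens produced})
TRANSITIONS = [
    ("invoke", {"request": 1}, {"pending": 1}),
    ("reply",  {"pending": 1}, {"done": 1}),
]
INITIAL_MARKING = {"request": 1, "pending": 0, "done": 0}

def enabled(marking, pre):
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def explore(initial):
    """Breadth-first search over reachable markings; collect dead markings."""
    seen, queue, dead = set(), deque([initial]), []
    while queue:
        marking = queue.popleft()
        key = tuple(sorted(marking.items()))
        if key in seen:
            continue
        seen.add(key)
        successors = [fire(marking, pre, post)
                      for _, pre, post in TRANSITIONS if enabled(marking, pre)]
        if not successors:
            dead.append(marking)   # no transition enabled: terminal/dead marking
        queue.extend(successors)
    return seen, dead

if __name__ == "__main__":
    reachable, dead = explore(INITIAL_MARKING)
    print(f"{len(reachable)} reachable markings; dead markings: {dead}")
```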
Citations: 3
A secure VM allocation scheme to preserve against co-resident threat
Pub Date : 2020-06-03 DOI: 10.1504/ijwet.2020.107686
S. Chhabra, Ashutosh Kumar Singh
Preserving secrecy in cloud systems is one of the biggest concerns for cloud customers, who face security risks in the context of load balancing. Co-resident attacks are widely used by attackers, where malicious users build side channels and extract private information from VMs. The proposed model evaluates the possibility of VM co-residency and the success rate of an attack. The emphasis of this paper is to reduce the possibility of co-resident attacks among different users. When cloud data centres receive requests for task deployment, the proposed system finds a secure physical machine under the VM allocation policies while avoiding the threats. The performance is calculated with these metrics: makespan, resource utilisation, co-residency probability and co-resident success rate. The results show that the most virtual machine allocation policy (MVMP) effectively reduces the risk under safe states. The framework significantly improves security by reducing the number of shared servers by up to 32.2% and enhances resource utilisation by up to 44.14% over the least VM allocation policy (LVMP) and round robin VM allocation policy (RRVMP) schemes.
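The abstract does not give the MVMP algorithm itself, so the sketch below is only a generic co-residency-aware placement heuristic in the same spirit: prefer hosts already dedicated to the requesting tenant, then empty hosts, and share a host with other tenants only as a last resort, reporting the number of shared servers as a simple metric. The hosts, capacities and requests are invented.

```python
"""Hedged sketch of a co-residency-aware placement heuristic (not the paper's
MVMP policy): place each VM on a host already used only by the same tenant if
possible, otherwise on an empty host, and only then accept sharing."""
from collections import defaultdict

def place(requests, hosts, capacity):
    """requests: list of (tenant, vm_id); hosts: list of host names."""
    load = defaultdict(int)              # host -> number of VMs placed
    tenants_on = defaultdict(set)        # host -> set of tenants on that host
    placement = {}
    for tenant, vm in requests:
        def free(h):
            return capacity - load[h]
        # Preference order: 1) host used exclusively by this tenant,
        # 2) empty host, 3) any host with spare capacity (least shared first).
        own = [h for h in hosts if tenants_on[h] == {tenant} and free(h) > 0]
        empty = [h for h in hosts if not tenants_on[h] and free(h) > 0]
        any_h = [h for h in hosts if free(h) > 0]
        pool = own or empty or any_h
        if not pool:
            raise RuntimeError("no capacity left")
        host = min(pool, key=lambda h: (len(tenants_on[h] - {tenant}), load[h]))
        placement[vm] = host
        load[host] += 1
        tenants_on[host].add(tenant)
    shared = sum(1 for h in hosts if len(tenants_on[h]) > 1)   # servers shared by tenants
    return placement, shared

if __name__ == "__main__":
    reqs = [("A", "a1"), ("B", "b1"), ("A", "a2"), ("C", "c1"), ("B", "b2")]
    plan, shared_servers = place(reqs, hosts=["h1", "h2", "h3"], capacity=2)
    print(plan, "shared servers:", shared_servers)
```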
Citations: 12
The research on two phase pickup vehicle routing based on the K-means++ and genetic algorithms
Pub Date : 2020-06-03 DOI: 10.1504/ijwet.2020.10029813
Huan Zhao, Yiping Yang
A popular topic of interest is the development of an efficient vehicle routing plan, which needs to meet customer requirements and ensure delivery at the lowest cost. This paper establishes an optimisation model of the vehicle routing problem with a time window and a static network, considering vehicle type, type of goods, and customer satisfaction requirements. By combining the K-means++ and genetic algorithms, the problem is transformed into a two-stage solution: supplier clustering is performed using the K-means++ algorithm, and the vehicle path is determined using the genetic algorithm within each cluster. The optimisation results are compared with the actual delivery data, which demonstrates that they are superior to the current vehicle arrangement in terms of vehicle utilisation and cost. Finally, an example is presented to illustrate the feasibility of the proposed algorithm.
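A minimal sketch of the two-phase idea follows: phase one clusters supplier coordinates with k-means++ initialisation (via scikit-learn), and phase two orders the pickups inside each cluster with a small permutation genetic algorithm. Time windows, vehicle types and the customer-satisfaction terms of the paper's model are deliberately omitted, and the coordinates are random, so this illustrates the decomposition rather than the authors' formulation.

```python
"""Hedged sketch of the two-phase decomposition only: k-means++ clustering of
supplier locations (phase 1) followed by a tiny permutation GA per cluster
(phase 2). Requires numpy and scikit-learn; all data is invented."""
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
suppliers = rng.uniform(0, 100, size=(30, 2))    # 30 pickup points, invented
depot = np.array([50.0, 50.0])

def route_length(order, points):
    path = np.vstack([depot, points[order], depot])
    return float(np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1)))

def genetic_route(points, pop_size=40, generations=150, mutation=0.2):
    """Order-based GA: tournament selection, order-preserving crossover, swap mutation."""
    n = len(points)
    if n < 3:                                     # trivial clusters need no search
        order = np.arange(n)
        return order, route_length(order, points)
    pop = [rng.permutation(n) for _ in range(pop_size)]
    def fitness(ind):
        return route_length(ind, points)
    def crossover(a, b):
        i, j = sorted(rng.choice(n, size=2, replace=False))
        child = -np.ones(n, dtype=int)
        child[i:j] = a[i:j]                        # copy a slice from parent a
        rest = [g for g in b if g not in child[i:j]]
        child[[k for k in range(n) if child[k] < 0]] = rest   # fill with b's order
        return child
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        nxt = scored[:2]                           # elitism: keep two best routes
        while len(nxt) < pop_size:
            a, b = (min(rng.choice(pop_size, size=3), key=lambda k: fitness(scored[k]))
                    for _ in range(2))             # two 3-way tournaments
            child = crossover(scored[a], scored[b])
            if rng.random() < mutation:
                i, j = rng.choice(n, size=2, replace=False)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    best = min(pop, key=fitness)
    return best, fitness(best)

# Phase 1: cluster suppliers with k-means++ initialisation (one cluster per vehicle).
labels = KMeans(n_clusters=3, init="k-means++", n_init=10, random_state=0).fit_predict(suppliers)
# Phase 2: solve each cluster's pickup order with the GA.
for c in range(3):
    pts = suppliers[labels == c]
    order, dist = genetic_route(pts)
    print(f"cluster {c}: {len(pts)} stops, route length {dist:.1f}")
```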
Citations: 0
Short text classification using feature enrichment from credible texts
Pub Date : 2020-06-03 DOI: 10.1504/ijwet.2020.107689
Issa Alsmadi, Gan Keng Hoon
Classifying tweets' contents can become a useful feature for other application tasks. However, such classification can be quite challenging due to the short length and sparsity of tweet contents. Although individual tweets have limited length, their contents delve into different topics. Therefore, due to such diverse contents, achieving good coverage of content features remains a challenge. We adopt the keyword-expansion technique in this research and study the enrichment of tweet contents using text from credible sources, such as news sites. For evaluation, we conduct experiments on two Twitter datasets using four standard classifiers. The proposed approach enhances the performance of the classification task, with improvements in accuracy ranging from +0.05% to +3.54% across both datasets. The experimental results demonstrate that the proposed feature enrichment method can overcome the sparseness limitation of short text, with improved classification performance across various classifiers.
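As an illustration of the general keyword-expansion idea (not the authors' exact pipeline), the sketch below appends the top TF-IDF terms of the most similar "credible" document to each short text before training an ordinary classifier. The corpus is a toy example and scikit-learn is assumed to be available.

```python
"""Hedged sketch of the general idea (not the authors' exact pipeline): enrich
each short text with top terms from its most similar credible document before
training a standard classifier. Toy corpus invented; requires scikit-learn."""
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

credible_docs = [                      # e.g. snippets from news sites (invented)
    "central bank raises interest rates to curb inflation in the economy",
    "team wins championship final after dramatic overtime goal by striker",
]
tweets = ["rates up again?!", "what a goal last night", "inflation is brutal",
          "that final was wild"]
labels = ["economy", "sports", "economy", "sports"]

def enrich(texts, credible, top_k=3):
    """Append the top-k TF-IDF terms of the most similar credible doc to each text."""
    vec = TfidfVectorizer()
    cred_matrix = vec.fit_transform(credible)
    text_matrix = vec.transform(texts)          # project texts into the credible vocabulary
    vocab = np.array(vec.get_feature_names_out())
    enriched = []
    for i, text in enumerate(texts):
        best = cosine_similarity(text_matrix[i], cred_matrix).argmax()
        weights = cred_matrix[best].toarray().ravel()
        top_terms = vocab[weights.argsort()[::-1][:top_k]]
        enriched.append(text + " " + " ".join(top_terms))
    return enriched

# Train on enriched tweets with a standard TF-IDF + logistic regression pipeline.
enriched_tweets = enrich(tweets, credible_docs)
clf_vec = TfidfVectorizer()
X = clf_vec.fit_transform(enriched_tweets)
model = LogisticRegression().fit(X, labels)
print(model.predict(clf_vec.transform(enrich(["striker scores"], credible_docs))))
```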
Citations: 3
Improvement of TCP Vegas algorithm based on forward direction delay
Pub Date : 2020-06-03 DOI: 10.1504/ijwet.2020.107690
Shijie Guan, Yueqiu Jiang, Qixue Guan
Satellite networks transmit data through the space communications protocol specification transport protocol and use transmission control protocol (TCP) Vegas as the congestion control algorithm. However, TCP Vegas does not have a suitable solution for the asymmetric bandwidth of satellite networks, so the constrained reverse link frequently becomes congested. Standard Vegas responds to this reverse-link congestion by reducing the congestion window, thereby simultaneously reducing the forward link throughput of the satellite network. In this study, a forward congestion control algorithm for TCP Vegas based on time delay, called Vegas_FDD (forward direction delay), is proposed to mitigate congestion by distinguishing between its different types (forward and backward) and to improve network bandwidth utilisation. The suitability and effectiveness of the proposed algorithm are verified through simulation in the OPNET software.
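The core idea, driving the Vegas-style window update with the forward one-way delay instead of the round-trip time, can be sketched as below. The thresholds and update structure are simplified assumptions and not the Vegas_FDD specification; the point is only that reverse-link queueing no longer shrinks the window.

```python
"""Hedged sketch of the core idea: a Vegas-style congestion-window update driven
by the forward one-way delay instead of the round-trip time, so queueing on the
reverse link does not shrink the window. Thresholds and structure are simplified
assumptions, not the Vegas_FDD specification."""

ALPHA, BETA = 2.0, 4.0        # Vegas-style thresholds (packets of estimated backlog)

def vegas_fdd_update(cwnd, base_fwd_delay, fwd_delay):
    """One per-interval window update.

    cwnd            current congestion window (packets)
    base_fwd_delay  smallest forward one-way delay observed (propagation estimate)
    fwd_delay       forward one-way delay measured in the last interval
    """
    expected = cwnd / base_fwd_delay                 # throughput with an empty forward queue
    actual = cwnd / fwd_delay                        # throughput actually achieved
    backlog = (expected - actual) * base_fwd_delay   # estimated packets queued on the forward path
    if backlog < ALPHA:
        cwnd += 1          # forward path underused: grow linearly
    elif backlog > BETA:
        cwnd -= 1          # forward queue building up: back off
    # ALPHA <= backlog <= BETA: hold the window
    return cwnd

if __name__ == "__main__":
    cwnd = 10.0
    # Forward delay grows only slightly; any (unmodelled) reverse-link queueing is
    # invisible here, whereas a classic RTT-based Vegas update would keep shrinking.
    for fwd in [0.252, 0.255, 0.260, 0.300, 0.320]:
        cwnd = vegas_fdd_update(cwnd, base_fwd_delay=0.25, fwd_delay=fwd)
        print(f"forward delay {fwd:.3f}s -> cwnd {cwnd:.0f}")
```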
Citations: 6
Anomaly detection in the web logs using user-behaviour networks
Pub Date : 2019-10-03 DOI: 10.1504/ijwet.2019.102871
J. You, Xiaojuan Wang, Lei Jin, Yong Zhang
With the rapid growth of web attacks, anomaly detection has become a necessary part of managing modern large-scale distributed web applications. As the record of user behaviour, web logs naturally become the research object for anomaly detection. Many anomaly detection methods based on automated log analysis have been proposed. However, most research focuses on the content of single logs while ignoring the connection between the user and the path. To address this problem, we introduce graph theory into anomaly detection and establish a user-behaviour network model. Integrating the network structure and the characteristics of anomalous users, we propose five indicators to identify anomalous users and anomalous logs. Results show that the method achieves better performance on four real web application log datasets, with a total of about 4 million log messages and 1 million anomalous instances. In addition, this paper integrates and improves a state-of-the-art anomaly detection method to further analyse the composition of the anomalous logs. We believe that our work will bring a new angle to the research field of anomaly detection.
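A minimal sketch of the basic construction follows: log records are turned into a bipartite user-path graph, and two simple per-user indicators (distinct-path count and mean path rarity) are computed from it. The paper's five indicators are richer; the log lines below are invented and networkx is assumed to be available.

```python
"""Hedged sketch of the basic construction only: build a bipartite user-path
network from access-log records and compute two simple per-user indicators.
The five indicators in the paper are richer; the records below are invented.
Requires networkx."""
import networkx as nx

# (user, requested path) pairs as they might be parsed from a web access log
records = [
    ("alice", "/home"), ("alice", "/cart"), ("alice", "/checkout"),
    ("bob", "/home"), ("bob", "/cart"),
    ("mallory", "/home"), ("mallory", "/admin"), ("mallory", "/etc/passwd"),
    ("mallory", "/wp-login.php"), ("mallory", "/phpmyadmin"),
]

G = nx.Graph()
for user, path in records:
    G.add_node(user, kind="user")
    G.add_node(path, kind="path")
    G.add_edge(user, path)

users = [n for n, d in G.nodes(data=True) if d["kind"] == "user"]
n_users = len(users)

def indicators(user):
    paths = list(G.neighbors(user))
    distinct_paths = len(paths)
    # A path visited by few users is "rare"; anomalous users tend to touch many rare paths.
    mean_rarity = sum(1 - (G.degree(p) / n_users) for p in paths) / distinct_paths
    return distinct_paths, mean_rarity

for u in users:
    d, r = indicators(u)
    print(f"{u:8s} distinct paths={d}  mean rarity={r:.2f}")
```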
Citations: 2
DWSpyder: a new schema extraction method for a deep web integration system
Pub Date : 2019-10-03 DOI: 10.1504/ijwet.2019.102872
Yasser Saissi, A. Zellou, Ali Adri
The deep web is a huge part of the web that is not indexed by search engines. Deep web sources are accessible only through their associated access forms. We wish to use a web integration system to access the deep web sources and all of their information. To implement this web integration system, we need to know the schema description of each web source. The problem addressed in this paper is how to extract the schema describing an otherwise inaccessible deep web source. We propose the DWSpyder method, which can extract the schema describing a deep web source despite this inaccessibility. The DWSpyder method starts with a static analysis of the deep web source's access forms in order to extract the first elements of the associated schema description. The second step of our method is a dynamic analysis of these access forms, using queries to enrich the schema description. Our DWSpyder method also uses a clustering algorithm to identify the possible values of deep web form fields with undefined sets of values. All of the extracted information is used by DWSpyder to automatically generate deep web source schema descriptions.
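The static-analysis step can be illustrated with a short parser that extracts candidate schema attributes (field names and any enumerated values) from an access form. The HTML snippet is invented, and DWSpyder's dynamic querying and value-clustering steps are not shown; only the Python standard library is used.

```python
"""Hedged sketch of the static-analysis step only: extract candidate schema
attributes (field names and any enumerated values) from a deep web access form.
The HTML snippet is invented; the dynamic probing and value clustering of
DWSpyder are not shown. Uses only the standard library."""
from html.parser import HTMLParser

FORM_HTML = """
<form action="/search" method="get">
  <input type="text" name="title">
  <input type="text" name="author">
  <select name="year">
    <option value="2018">2018</option><option value="2019">2019</option>
  </select>
</form>
"""

class FormSchemaParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.schema = {}              # field name -> set of enumerated values (if any)
        self._current_select = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and "name" in attrs:
            self.schema.setdefault(attrs["name"], set())
        elif tag == "select" and "name" in attrs:
            self._current_select = attrs["name"]
            self.schema.setdefault(self._current_select, set())
        elif tag == "option" and self._current_select and "value" in attrs:
            self.schema[self._current_select].add(attrs["value"])

    def handle_endtag(self, tag):
        if tag == "select":
            self._current_select = None

parser = FormSchemaParser()
parser.feed(FORM_HTML)
for field, values in parser.schema.items():
    print(field, "->", sorted(values) if values else "free text (needs dynamic probing)")
```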
Citations: 0
Impact of replica placement-based clustering on fault tolerance in grid computing
Pub Date : 2019-10-03 DOI: 10.1504/ijwet.2019.102873
Rahma Souli-Jbali, Minyar Sassi Hidri, R. Ayed
Owing to growing demands for very high computing power and storage capacity, data grids appear to be a good solution for meeting these demands. However, the design of distributed applications for data grids remains complex, and it is necessary to take into account the dynamic nature of the grids, since nodes may disappear at any time. We focus on problems related to the impact of replica placement-based clustering on fault tolerance in grids. Between clusters, a message-logging protocol is used. Within a cluster, this inter-cluster protocol is coupled with the non-blocking coordinated checkpointing of Chandy-Lamport. This ensures that, in case of failure, the impact of the fault remains confined to the nodes of the same cluster. The experimental results show the efficiency of the proposed protocol in terms of recovery time and the numbers of processes used and messages exchanged.
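At a very high level, the hybrid rule described here can be sketched as follows: a message is logged only when it crosses a cluster boundary, while intra-cluster traffic relies on the cluster's coordinated checkpoint. This is an invented illustration; the Chandy-Lamport marker mechanics and the bookkeeping a real protocol needs (for example, which logged messages a checkpoint already covers) are not modelled.

```python
"""Very high-level, invented illustration of the hybrid rule: log a message only
when it crosses a cluster boundary (inter-cluster message logging), while
intra-cluster traffic relies on the cluster's coordinated checkpoint. Not the
authors' protocol; Chandy-Lamport marker handling is not modelled."""

CLUSTER_OF = {"n1": "c1", "n2": "c1", "n3": "c2", "n4": "c2"}   # replica placement
message_log = []          # stable-storage log of inter-cluster messages
checkpoints = {}          # cluster -> last coordinated checkpoint of node states
state = {n: 0 for n in CLUSTER_OF}

def send(src, dst, payload):
    if CLUSTER_OF[src] != CLUSTER_OF[dst]:
        # Inter-cluster: log before delivery so it can be replayed after a failure.
        message_log.append((src, dst, payload))
    state[dst] += payload          # "deliver": toy state update

def coordinated_checkpoint(cluster):
    # Stand-in for a Chandy-Lamport snapshot: record every node state in the cluster.
    checkpoints[cluster] = {n: state[n] for n, c in CLUSTER_OF.items() if c == cluster}

def recover(cluster):
    """Roll the failed cluster back to its checkpoint, then replay logged inbound messages.
    (A real protocol would also track which logged messages the checkpoint already covers.)"""
    state.update(checkpoints[cluster])
    for src, dst, payload in message_log:
        if CLUSTER_OF[dst] == cluster:
            state[dst] += payload

if __name__ == "__main__":
    send("n1", "n2", 5)            # intra-cluster: not logged
    coordinated_checkpoint("c1")
    send("n3", "n2", 7)            # inter-cluster: logged
    state["n2"] = -999             # simulate a failure corrupting cluster c1
    recover("c1")
    print(state["n2"])             # checkpointed 5 + replayed 7 = 12
```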
Citations: 1