Semantic web and machine learning techniques addressing semantic interoperability in Industry 4.0
Mohamed Hafidi, M. Djezzar, M. Hemam, Fatima Zahra Amara, M. Maimour
International Journal of Web Information Systems. Pub Date: 2023-08-23. DOI: 10.1108/ijwis-03-2023-0046

Purpose
This paper aims to offer a comprehensive examination of the solutions currently available for addressing the challenge of semantic interoperability in cyber-physical systems (CPS). CPS are a new generation of systems that combine physical assets having computation capabilities with software systems in a network, exchanging data collected from the physical asset, models (physics-based, data-driven, etc.) and services (reconfiguration, monitoring, etc.). The physical asset and its software system are connected and exchange data that must be interpreted in a given context. The heterogeneous nature of the collected data, together with the different types of models, raises interoperability problems. Modeling the digital space of the CPS and integrating information models that support cyber-physical interoperability are both required.

Design/methodology/approach
This paper identifies the most relevant points in the development of semantic models and machine learning solutions to the interoperability problem, and examines how these solutions are implemented in CPS. The research analyzes recent papers on semantic interoperability in Industry 4.0 (I4.0) systems.

Findings
Semantic models are key enabling technologies that provide a common understanding of data, and they can be used to solve interoperability problems in industry by using a common vocabulary when defining these models.

Originality/value
This paper provides an overview of the different available solutions to the semantic interoperability problem in CPS.
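The "common vocabulary" idea behind semantic models can be pictured with a minimal sketch: two devices report the same physical quantity under different field names and units, and a mapping onto shared subject-predicate-object triples makes them comparable. All names (the namespace, field names, asset identifiers) are invented for illustration and are not taken from any particular CPS ontology.

```python
# Minimal sketch (hypothetical names): mapping heterogeneous CPS sensor
# payloads onto a shared vocabulary of subject-predicate-object triples,
# the core idea behind semantic-model-based interoperability.

COMMON_VOCAB = "https://example.org/cps#"  # hypothetical namespace

def to_triples(asset_id, payload, mapping, unit_converters):
    """Translate a device-specific payload into common-vocabulary triples."""
    triples = []
    for local_key, value in payload.items():
        if local_key not in mapping:
            continue  # unmapped fields are skipped, not misinterpreted
        prop, unit = mapping[local_key]
        convert = unit_converters.get((local_key, unit), lambda v: v)
        triples.append((asset_id, COMMON_VOCAB + prop, convert(value)))
    return triples

# Two vendors report the same temperature differently.
vendor_a = {"temp_f": 212.0}
vendor_b = {"temperatureCelsius": 100.0}

map_a = {"temp_f": ("hasTemperatureC", "celsius")}
map_b = {"temperatureCelsius": ("hasTemperatureC", "celsius")}

conv_a = {("temp_f", "celsius"): lambda f: round((f - 32) * 5 / 9, 2)}

t_a = to_triples("assetA", vendor_a, map_a, conv_a)
t_b = to_triples("assetB", vendor_b, map_b, {})
print(t_a[0][2], t_b[0][2])  # both 100.0
```

Once both assets publish triples over the same predicate, a consumer no longer needs vendor-specific parsing — which is the interoperability gain the abstract describes.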
Infer the missing facts of D3FEND using knowledge graph representation learning
A. Khobragade, S. Ghumbre, V. Pachghare
International Journal of Web Information Systems. Pub Date: 2023-08-16. DOI: 10.1108/ijwis-03-2023-0042

Purpose
MITRE and the National Security Agency cooperatively developed, and continue to maintain, the D3FEND knowledge graph (KG). It provides concepts from the cybersecurity countermeasure domain as entities, such as dynamic, emulated and file analysis, linked by relationships such as analyze, may_contains and encrypt. A fundamental challenge for collaborative designers is to encode knowledge and efficiently interrelate the cyber-domain facts generated daily. Currently, the designers update the graph contents manually with new or missing facts to enrich the knowledge. This paper aims to propose an automated approach that predicts missing facts through a link prediction task, leveraging embeddings as representation learning.

Design/methodology/approach
D3FEND is available in resource description framework (RDF) format. In the preprocessing step, the facts in RDF format are converted to subject–predicate–object triplet format, yielding 5,967 entities and 98 relationship types. Distance-based, bilinear and convolutional embedding models are progressively applied to learn the embeddings of entities and relations. This study presents a link prediction task to infer missing facts using the learned embeddings.

Findings
Experimental results show that the translational model performs well on high-rank results, whereas the bilinear model is superior at capturing the latent semantics of complex relationship types. The convolutional model, however, recovers 44% of the true facts and achieves a 3% improvement in results compared to the other models.

Research limitations/implications
Despite the success of embedding models in enriching D3FEND using link prediction under a supervised learning setup, the approach has limitations, such as not capturing the diversity and hierarchies of relations. The average node degree of the D3FEND KG is 16.85, with 12% of entities having a node degree of less than 2; in particular, many entities and relations have few or no observed links. This results in sparsity and data imbalance, which affect model performance even after increasing the embedding vector size. Moreover, KG embedding models consider only existing entities and relations and may not incorporate external or contextual information such as textual descriptions, temporal dynamics or domain knowledge, which can enhance link prediction performance.

Practical implications
Link prediction in the D3FEND KG can benefit cybersecurity countermeasure strategies in several ways: it can help to identify gaps or weaknesses in existing defensive methods and suggest possible ways to improve or augment them; it can help to compare and contrast different defensive methods and understand their trade-offs and synergies; it can help to discover novel or emerging defensive methods by inferring new relations from existing data or external sources; and it can help to generate recommendations or guidance for selecting or deploying appropriate defensive methods based on the
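The distance-based (translational) family of embedding models mentioned above can be sketched in a few lines: in a TransE-style model, a triple (h, r, t) is plausible when the head vector translated by the relation vector lands near the tail vector, and link prediction ranks candidate tails by that score. The entities, relation and two-dimensional vectors below are toy values chosen by hand, not embeddings learned from D3FEND.

```python
# Toy sketch of distance-based KG embedding scoring (TransE-style):
# a triple (h, r, t) is plausible when vector(h) + vector(r) ~ vector(t).
# Entity/relation names echo the D3FEND flavor but the vectors are invented.

entity = {
    "FileAnalysis":   [1.0, 0.0],
    "Malware":        [0.0, 1.0],
    "NetworkTraffic": [0.5, 0.5],
}
relation = {"analyzes": [-1.0, 1.0]}  # chosen so FileAnalysis + analyzes ~ Malware

def score(h, r, t):
    """Negative L1 distance; higher (closer to 0) means more plausible."""
    return -sum(abs(hv + rv - tv)
                for hv, rv, tv in zip(entity[h], relation[r], entity[t]))

def predict_tail(h, r):
    """Rank candidate tails for the query (h, r, ?) -- the link prediction task."""
    return max(entity, key=lambda t: score(h, r, t))

print(predict_tail("FileAnalysis", "analyzes"))  # Malware
```

A trained model does the same thing at scale: it learns the vectors from the 5,967 entities and 98 relation types, then proposes the best-scoring tails for incomplete triples as candidate missing facts.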
Intrinsic feature extraction for unsupervised domain adaptation
Xinzhi Cao, Yinsai Guo, Wenbin Yang, Xiangfeng Luo, Shaorong Xie
International Journal of Web Information Systems. Pub Date: 2023-07-31. DOI: 10.1108/ijwis-04-2023-0062

Purpose
Unsupervised domain adaptation for object detection not only mitigates the poor model performance caused by the domain gap, but also makes it possible to apply knowledge trained on one domain to a distinct domain. However, aligning the whole feature map may confuse object and background information, making it challenging to extract discriminative features. This paper aims to propose an improved approach, called intrinsic feature extraction domain adaptation (IFEDA), to extract discriminative features effectively.

Design/methodology/approach
IFEDA consists of an intrinsic feature extraction (IFE) module and an object consistency constraint (OCC). The IFE module, designed at the instance level, mainly addresses the difficulty of extracting discriminative object features; specifically, it directs more attention to the discriminative regions of objects. Meanwhile, the OCC is deployed to determine whether category predictions in the target domain correspond with those in the source domain.

Findings
Experimental results demonstrate the validity of the approach, which achieves good outcomes on challenging data sets.

Research limitations/implications
A limitation of this research is that only one target domain is used, and model generalization may suffer when data sets are insufficient or unseen domains appear.

Originality/value
This paper addresses the problem of critical information loss by tackling the difficulty of extracting discriminative features, and it compels the categories in both domains to be consistent for better object detection.
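The object consistency constraint can be pictured as penalizing disagreement between category predictions across the two domains. The toy check below only illustrates that general idea — IFEDA's actual constraint is a differentiable training loss inside a detector, and the detections and labels here are invented.

```python
# Toy sketch of a cross-domain object-consistency check: matched detections
# should keep the same category label after adaptation. This illustrates the
# general idea only; it is not IFEDA's actual loss function.

def consistency_penalty(source_preds, target_preds):
    """Fraction of matched objects whose predicted category differs."""
    assert len(source_preds) == len(target_preds)
    disagreements = sum(s != t for s, t in zip(source_preds, target_preds))
    return disagreements / len(source_preds)

source = ["car", "person", "bicycle", "car"]
target = ["car", "person", "car", "car"]
print(consistency_penalty(source, target))  # 0.25
```

During training, minimizing a penalty of this kind pushes the adapted detector to assign the same categories in the target domain that it would in the source domain.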
Object detection and activity recognition in video surveillance using neural networks
Vishva Payghode, Ayush Goyal, Anupama Bhan, S. Iyer, Ashwani Kumar Dubey
International Journal of Web Information Systems. Pub Date: 2023-04-20. DOI: 10.1108/ijwis-01-2023-0006

Purpose
This paper aims to implement and extend the You Only Look Once (YOLO) algorithm for the detection of objects and activities. The advantage of YOLO is that it runs a neural network only once to detect the objects in an image, which makes it powerful and fast. Cameras are found at many different crossroads and locations, and processing their video feeds through an object detection algorithm makes it possible to determine and track what they capture. Video surveillance has many applications, such as car tracking and the tracking of people for crime prevention. This paper provides an exhaustive comparison between existing methods and the proposed method, which is found to have the highest object detection accuracy.

Design/methodology/approach
The goal of this research is to develop a deep learning framework that automates the analysis of video footage through object detection in images. This framework processes video feeds or image frames from CCTV, a webcam or a DroidCam, which allows the camera in a mobile phone to be used as a webcam for a laptop. The object detection algorithm, with its model trained on a large data set of images, loads each input image, processes it and determines the categories of the matching objects that it finds. As a proof of concept, this research demonstrates the algorithm on images of several different objects. The implemented algorithm, with the proposed methodology, is compared against several prior methods from the literature and is found to have the highest accuracy for object detection and activity recognition.

Findings
The results indicate that the proposed deep learning-based model can run in real time for object detection and activity recognition. The added features of car crash detection, fall detection and social distancing detection can be used to implement a real-time video surveillance system that helps save lives and protect people. Such a system could be installed at street and traffic cameras and in CCTV systems. When the system detects a car crash or a fatal human or pedestrian fall with injury, it can be programmed to send automatic messages to the nearest local police, emergency and fire stations. When the system detects a social distancing violation, it can be programmed to notify local authorities or sound an alarm reminding the public to keep their distance, so as to avoid spreading the aerosol particles that can transmit viruses such as COVID-19.

Originality/value
This paper presents an improved and enhanced version of the YOLOv3 model, extended to perform activity recognition such as car crash detection, human fall detection and social distancing detection. The model is based on a deep learning convolutional neural network for detecting objects in images and is trained on the widely used, publicly available Common Objects in Context data set. As an extension of YOLO, it achieves real-time object and activity recognition, and it attains high accuracy in both large-scale and full-scale object detection. It also surpasses all previously compared methods in extending and enhancing object detection to activity recognition, achieving the highest accuracy in car crash detection, fall detection and social distancing detection.
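Single-pass detectors of the YOLO family produce many overlapping candidate boxes per frame, which are conventionally pruned with intersection-over-union (IoU) and non-max suppression before tracking or activity rules are applied. The sketch below shows that standard post-processing step on invented box coordinates; it is not code from the paper's framework.

```python
# Sketch of IoU and non-max suppression, the standard post-processing behind
# YOLO-style single-pass detectors. Boxes are (x1, y1, x2, y2, confidence)
# tuples with illustrative values.

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])  # box area helper
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def nms(boxes, iou_thresh=0.5):
    """Keep the highest-confidence box among heavily overlapping ones."""
    kept = []
    for box in sorted(boxes, key=lambda b: b[4], reverse=True):
        if all(iou(box, k) < iou_thresh for k in kept):
            kept.append(box)
    return kept

detections = [(10, 10, 50, 50, 0.9), (12, 12, 52, 52, 0.6), (80, 80, 120, 120, 0.8)]
print(len(nms(detections)))  # 2: the two near-duplicate boxes collapse to one
```

The same pruned boxes are what downstream logic (car tracking, fall detection, distance checks between person boxes) consumes frame by frame.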
A set of parameters for automatically annotating a Sentiment Arabic Corpus
Guellil Imane, Darwish Kareem, Azouaou Faical
International Journal of Web Information Systems, vol. 15, pp. 594-615. Pub Date: 2019-12-02. DOI: 10.1108/IJWIS-03-2019-0008

Purpose
This paper aims to propose an approach for automatically annotating a large corpus in an Arabic dialect. The corpus is used to analyse the sentiments of Arabic users on social media. It focuses on the Algerian dialect, a sub-dialect of Maghrebi Arabic. Although Algerian is spoken by roughly 40 million speakers, few studies address its automated processing in general and its sentiment analysis in particular.

Design/methodology/approach
The approach is based on the construction and use of a sentiment lexicon to automatically annotate a large corpus of Algerian text extracted from Facebook. This approach significantly increases the size of the training corpus without resorting to manual annotation. The annotated corpus is then vectorized using document embeddings (doc2vec), an extension of word embeddings (word2vec). For sentiment classification, the authors used different classifiers, such as support vector machines (SVM), Naive Bayes (NB) and logistic regression (LR).

Findings
The results suggest that the NB and SVM classifiers generally led to the best results, and the multilayer perceptron (MLP) generally had the worst. Further, the threshold that the authors use in selecting messages for the training set had a noticeable impact on recall and precision, with a threshold of 0.6 producing the best results. Using PV-DBOW led to slightly higher results than using PV-DM, while combining PV-DBOW and PV-DM representations led to slightly lower results than using PV-DBOW alone. The best results were obtained by the NB classifier, with an F1 of up to 86.9 per cent.

Originality/value
The principal originality of this paper lies in determining the right parameters for automatically annotating an Algerian dialect corpus, based on a sentiment lexicon that was also constructed automatically.
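The role of the selection threshold can be sketched in a few lines: a lexicon scores each message, and only messages whose score clears the threshold are confidently auto-labeled for the training set, while ambiguous ones are dropped. The lexicon, messages and scoring rule below are invented toy examples — real lexicon scoring over Algerian dialect text (normalization, negation, dialect spelling variation) is far more involved.

```python
# Toy sketch of lexicon-based automatic annotation with a selection
# threshold, mirroring the idea of building a training corpus without
# manual labeling. Lexicon and messages are invented English stand-ins.

LEXICON = {"good": 1.0, "great": 0.8, "bad": -1.0, "awful": -0.9}

def annotate(message, threshold=0.6):
    """Return 'pos'/'neg' when the average lexicon score clears the
    threshold; otherwise None (message excluded from the training set)."""
    scores = [LEXICON[w] for w in message.lower().split() if w in LEXICON]
    if not scores:
        return None
    avg = sum(scores) / len(scores)
    if avg >= threshold:
        return "pos"
    if avg <= -threshold:
        return "neg"
    return None

corpus = ["great good service", "awful bad experience", "good awful mix"]
print([annotate(m) for m in corpus])  # ['pos', 'neg', None]
```

Raising the threshold trades corpus size for label purity, which is why the paper reports its precision/recall sensitivity and settles on 0.6.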
Pub Date : 2019-01-01DOI: 10.1108/IJWIS-04-2018-0021
Juan Camilo González-Vargas, Angela Carrillo Ramos, R. Fabregat, Lizzeth Camargo, Maria Caridad García Cepero, Jaime A. Pavlich-Mariscal
Purpose The purpose of this paper is to describe a support system to the selection of enrichment activities in educational environment called RunayaySoft, where Runayay comes from the word Quechua that means develop and Soft as it is an informatics tool that supports the educational institutions and their students, in the selection of activities that allow foster some of their skills based on their interests, learning styles, aptitudes, multiple intelligences, preferences and so on. Moreover, it suggests institutions about the activities that they should make in their building considering student´s characteristics and the agreements that they have. Design/methodology/approach It does a diagnostic for identifying which characteristics are going to be considered to students and institutions. Then, it generates adaptive profiles with the aim of generating suggestions of enrichment activities that allow to boost some of their skills. For the students were considered their preferences, learning style, aptitude, multiple intelligences and interests. In the case of institutions were the agreements, resources and activities that they develop. Based on this information, it defines the relations for the generation of suggestions of activities toward students, where it does the prioritization of which activities should be considered. Findings For validating the system, it was done as a functional prototype that generates suggestions to students, as well as educative institutions, through a satisfaction test student assess if they agree or disagree with the suggestions given. With that assessment, it is validated the relationship between student’s characteristics, activity and institution are related for generating activities suggestions. Research limitations/implications RunayaySoft generates adaptive profiles for the students, activity and institution. Each profile has information that allows adapt an advice toward students and institutions. 
Social implications RunayaySoft considers student’s characteristics, activities and educational institutions for generating suggestions for enrichment activities that allow to boost some of their skills. Many times, when activities are generated in educative institutions, they are not considered a learner’s needs and characteristics. For that reason, the system helps institutions to identify activities that should be done in their facilities or with those institutions which they have agreements when the institutions that students come from do not have the required resources. Originality/value RunayaySoft suggests enrichment activities to students as well as educative institutions. For students, it suggests disciplinary areas where they can boost their skills; for each disciplinary area are recommended activities based on their preferences. Once students select the disciplinary area and activities, the system suggests educative institutions activities that they can do. If the institutions do not have the neces
{"title":"RunayaySoft","authors":"Juan Camilo González-Vargas, Angela Carrillo Ramos, R. Fabregat, Lizzeth Camargo, Maria Caridad García Cepero, Jaime A. Pavlich-Mariscal","doi":"10.1108/IJWIS-04-2018-0021","DOIUrl":"https://doi.org/10.1108/IJWIS-04-2018-0021","url":null,"abstract":"Purpose The purpose of this paper is to describe a support system to the selection of enrichment activities in educational environment called RunayaySoft, where Runayay comes from the word Quechua that means develop and Soft as it is an informatics tool that supports the educational institutions and their students, in the selection of activities that allow foster some of their skills based on their interests, learning styles, aptitudes, multiple intelligences, preferences and so on. Moreover, it suggests institutions about the activities that they should make in their building considering student´s characteristics and the agreements that they have. Design/methodology/approach It does a diagnostic for identifying which characteristics are going to be considered to students and institutions. Then, it generates adaptive profiles with the aim of generating suggestions of enrichment activities that allow to boost some of their skills. For the students were considered their preferences, learning style, aptitude, multiple intelligences and interests. In the case of institutions were the agreements, resources and activities that they develop. Based on this information, it defines the relations for the generation of suggestions of activities toward students, where it does the prioritization of which activities should be considered. Findings For validating the system, it was done as a functional prototype that generates suggestions to students, as well as educative institutions, through a satisfaction test student assess if they agree or disagree with the suggestions given. 
With that assessment, the relationship between students’ characteristics, activities and institutions used to generate activity suggestions is validated. Research limitations/implications RunayaySoft generates adaptive profiles for the student, the activity and the institution. Each profile holds information that allows advice to be tailored to students and institutions. Social implications RunayaySoft considers students’ characteristics, activities and educational institutions when generating suggestions for enrichment activities that can boost some of their skills. Often, when activities are organized in educational institutions, learners’ needs and characteristics are not taken into account. For that reason, the system helps institutions identify activities that should be carried out in their own facilities, or at partner institutions with which they have agreements when the students’ home institutions lack the required resources. Originality/value RunayaySoft suggests enrichment activities to students as well as to educational institutions. For students, it suggests disciplinary areas where they can strengthen their skills; for each disciplinary area, activities are recommended based on their preferences. Once students select a disciplinary area and activities, the system suggests educational institutions where they can carry them out. 
If the institutions do not have the neces","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"15 1","pages":"103-131"},"PeriodicalIF":1.6,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://sci-hub-pdf.com/10.1108/IJWIS-04-2018-0021","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62040530","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
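The profile-matching idea described above — score activities against a student's interests and learning style, and keep only those some partner institution can actually host — can be sketched as follows. This is a minimal illustration; the field names, scoring weights and data shapes are hypothetical, not RunayaySoft's actual model.

```python
# Hypothetical sketch of profile-based activity suggestion in the spirit of
# RunayaySoft: activities are kept only if a partner institution has the
# resources to host them, then ranked by overlap with the student's profile.

def suggest_activities(student, activities, institutions):
    """Rank hostable activities by fit with the student's interests/style."""
    # An activity is hostable if some institution covers all its resources.
    hostable = {a["name"]
                for inst in institutions
                for a in activities
                if set(a["resources"]) <= set(inst["resources"])}
    scored = []
    for a in activities:
        if a["name"] not in hostable:
            continue
        # One point per shared interest topic, plus one for a style match.
        score = len(set(a["topics"]) & set(student["interests"]))
        if a["style"] == student["learning_style"]:
            score += 1
        scored.append((score, a["name"]))
    return [name for score, name in sorted(scored, reverse=True) if score > 0]
```

A student interested in math with a visual learning style would, under this toy scoring, see a visual math activity ranked above a music activity with a mismatched style, while activities no partner institution can host are filtered out entirely.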
Pub Date: 2013-06-24  DOI: 10.1108/IJWIS-12-2013-0037
T. K. Dang, K. T. L. Thi, Anh Tuan Dang, H. Van
XACML is an international standard for access control in distributed systems. However, XACML and its existing extensions are not sufficient to fulfil sophisticated security requirements (e.g. access control based on users’ roles, context-aware authorizations and the ability to reason). Remarkably, X-STROWL, a generalized extension of XACML, is a comprehensive model that overcomes these shortcomings. Among the many open-source implementations of XACML, HERAS-AF was chosen as the most suitable framework to extend with the X-STROWL model. This paper mainly focuses on the architecture design of the proposed framework and its comparison with other frameworks. In addition, a case study is presented to clarify the workflow of the framework. The crucial contribution of our research is a holistic, extensible and intelligent authorization decision engine.
{"title":"Towards a flexible framework to support a generalized extension of XACML for spatio-temporal RBAC model with reasoning ability","authors":"T. K. Dang, K. T. L. Thi, Anh Tuan Dang, H. Van","doi":"10.1108/IJWIS-12-2013-0037","DOIUrl":"https://doi.org/10.1108/IJWIS-12-2013-0037","url":null,"abstract":"XACML is an international standard used for access control in distributed systems. However, XACML and its existing extensions are not sufficient to fulfil sophisticated security requirements (e.g. access control based on user’s roles, context-aware authorizations, and the ability of reasoning). Remarkably, X-STROWL, a generalized extension of XACML, is a comprehensive model that overcomes these shortcomings. Among a large amount of open sources implementing XACML, HERAS-AF is chosen as the most suitable framework to be extended to implement X-STROWL model. This paper mainly focuses on the architecture design of proposed framework and the comparison with other frameworks. In addition, a case study will be presented to clarify the work-flow of this framework. This is the crucial contribution of our research to provide a holistic, extensible and intelligent authorization decision engine.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"19 1","pages":"437-451"},"PeriodicalIF":1.6,"publicationDate":"2013-06-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85719790","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
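The spatio-temporal RBAC idea behind X-STROWL — a permit depends not only on the user's role but also on where and when the request is made — can be illustrated with a toy policy decision function. The rule tuples and names below are hypothetical and stand in for real XACML policies; this is not the X-STROWL or HERAS-AF API.

```python
from datetime import time

# Hypothetical spatio-temporal RBAC rules: (role, resource, zone, time window).
# "any" matches every zone. Real X-STROWL policies are XACML documents with
# OWL-based reasoning; this flat rule list is only a conceptual sketch.
RULES = [
    ("nurse", "patient-record", "ward-3", (time(8, 0), time(20, 0))),
    ("doctor", "patient-record", "any", (time(0, 0), time(23, 59))),
]

def decide(role, resource, zone, now):
    """Return Permit if some rule matches role, resource, zone and time."""
    for r_role, r_res, r_zone, (start, end) in RULES:
        if (role == r_role and resource == r_res
                and r_zone in ("any", zone) and start <= now <= end):
            return "Permit"
    return "Deny"  # XACML-style deny when no rule applies
```

Under these sample rules a nurse reading a patient record from ward-3 at 09:00 is permitted, while the same request at 22:00 is denied because it falls outside the rule's time window.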
In this paper, we describe a load distribution technique that employs web workers. We have been implementing a web-based MORPG as an interactive, real-time web application; previously, the web server alone was responsible for manipulating the behavior of all the game characters, so as more users logged in, the load on the server increased. Hence, we have implemented a technique whereby the CPU load of the server is distributed among the clients; a performance evaluation reveals that our technique decreases the CGI latency of low-end servers and can decrease the CPU load of high-end servers when many users are logged in.
{"title":"Load distribution by using web workers for a real-time web application","authors":"S. Okamoto, Masaki Kohana","doi":"10.1145/1967486.1967577","DOIUrl":"https://doi.org/10.1145/1967486.1967577","url":null,"abstract":"In this paper, we describe a load distribution technique that employs web workers. We have been implementing a web-based MORPG as an interactive, real-time web application; previously, the web server alone was responsible for manipulating the behavior of all the game characters. As more users logged in, the workload on the server was increased. Hence, we have implemented a technique whereby the CPU load of the server is distributed among the clients; a performance evaluation reveals that our technique plays a role in decreasing the CGI latency of low-end servers and can decrease the CPU load of high-end servers when many users are logged in.","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"7 1","pages":"592-597"},"PeriodicalIF":1.6,"publicationDate":"2010-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89102018","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
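The core of the technique above is partitioning: instead of the server updating every game character itself, character-update work is handed out among connected clients (in the paper, via browser web workers). A language-neutral sketch of such a round-robin assignment, with purely hypothetical task and client shapes, might look like this:

```python
from itertools import cycle

# Conceptual sketch of server-side load distribution: character-behavior
# updates are assigned round-robin to connected clients, so the server's
# per-tick CPU work shrinks as clients share the simulation. The names here
# are illustrative; the paper's implementation uses browser web workers.

def distribute(characters, clients):
    """Assign each character's behavior update to a client, round-robin."""
    assignments = {c: [] for c in clients}
    for character, client in zip(characters, cycle(clients)):
        assignments[client].append(character)
    return assignments
```

With three characters and two clients, one client simulates two characters and the other one, so the server only merges results instead of computing every update itself.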
Pub Date: 2008-06-20  DOI: 10.1108/17440080810882342
Nikolaos Fousteris, M. Gergatsoulis, Y. Stavrakas
Purpose – In a wide spectrum of applications, it is desirable to manipulate semistructured information that may present variations according to different circumstances. Multidimensional XML (MXML) is an extension of XML suitable for representing data that assume different facets, having different value and/or structure under different contexts. The purpose of this paper is to develop techniques for updating MXML documents. Design/methodology/approach – Updating XML has been studied in the past; however, updating MXML must take into account the additional features that stem from incorporating context into MXML. This paper investigates the problem of updating MXML at two levels: at the graph level, i.e. in an implementation-independent way, and at the relational storage level. Findings – The paper introduces six basic update operations, which are capable of expressing any possible change. These operations are specified in an implementation-independent way, and their effect is explained through examples. Algorithms are gi...
{"title":"Updating multidimensional XML documents","authors":"Nikolaos Fousteris, M. Gergatsoulis, Y. Stavrakas","doi":"10.1108/17440080810882342","DOIUrl":"https://doi.org/10.1108/17440080810882342","url":null,"abstract":"Purpose – In a wide spectrum of applications, it is desirable to manipulate semistructured information that may present variations according to different circumstances. Multidimensional XML (MXML) is an extension of XML suitable for representing data that assume different facets, having different value and/or structure under different contexts. The purpose of this paper is to develop techniques for updating MXML documents.Design/methodology/approach – Updating XML has been studied in the past, however, updating MXML must take into account the additional features, which stem from incorporating context into MXML. This paper investigates the problem of updating MXML in two levels: at the graph level, i.e. in an implementation independent way; and at the relational storage level.Findings – The paper introduces six basic update operations, which are capable of any possible change. Those operations are specified in an implementation independent way, and their effect explained through examples. Algorithms are gi...","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"19 1","pages":"257-266"},"PeriodicalIF":1.6,"publicationDate":"2008-06-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85626656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
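The key MXML notion — one element holding different values under different contexts — can be sketched with a toy encoding. MXML defines six graph-level update operations; the single value-replacement operation below, and the dictionary encoding of facets, are only an illustration of the context-aware update idea, not the paper's model.

```python
# Toy encoding of a multidimensional element: "facets" maps a context
# (here a simple string) to the value the element takes in that context.
# This is a hypothetical illustration, not MXML's actual graph model.

def update_facet(element, context, new_value):
    """Replace the value held under one context, leaving other contexts
    untouched; returns a new element rather than mutating in place."""
    facets = dict(element["facets"])
    facets[context] = new_value
    return {**element, "facets": facets}

# A price element whose value depends on the currency context.
price = {"name": "price", "facets": {"currency=EUR": 30, "currency=USD": 35}}
```

Updating the EUR facet of `price` changes only the value visible under `currency=EUR`; the USD facet, and the original element, are unaffected — which is exactly the extra bookkeeping that distinguishes MXML updates from plain XML updates.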
Pub Date: 2007-12-20  DOI: 10.1108/IJWIS.2007.36203DAA.001
E. Shakshuki
{"title":"Advances in agent and non-agent software engineering methodologies on the web and software systems","authors":"E. Shakshuki","doi":"10.1108/IJWIS.2007.36203DAA.001","DOIUrl":"https://doi.org/10.1108/IJWIS.2007.36203DAA.001","url":null,"abstract":"","PeriodicalId":44153,"journal":{"name":"International Journal of Web Information Systems","volume":"3 1","pages":""},"PeriodicalIF":1.6,"publicationDate":"2007-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62040480","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}