Pub Date: 2022-07-27 DOI: 10.33965/ijcsis_2022170101
Christian Uhl, Bernd Freisleben
BigBlueButton (BBB) is a web conferencing system designed for online learning. It consists of a set of pre-configured open-source software tools that realize video conferencing functionality primarily for teaching purposes. Due to the COVID-19 pandemic, our university decided to roll out BBB for the university's educational activities during the first nationwide lockdown in early 2020. Based on our experiences in deploying, operating, and using BBB at our university for about 12 months, we present suggestions on how the services provided by BBB can be improved to meet the technical demands identified during online lecturing at our university. Our suggestions include the introduction of simulcast, improvements to the encoding and muxing of video feeds, and the 'Last-N' algorithm for video feed pagination. To demonstrate the benefits of the presented improvements, we experimentally evaluated most of them based on our own prototypical implementations.
Title: PERFORMANCE IMPROVEMENTS OF BIGBLUEBUTTON FOR DISTANCE TEACHING
Journal: IADIS International Journal on Computer Science and Information Systems
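The 'Last-N' idea mentioned in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration of the selection policy, not BBB's actual implementation: only the N most recently active speakers have their video forwarded, while the remaining feeds are paused to save bandwidth.

```python
class LastN:
    """Select the N most recently active video feeds to forward.

    Illustrative sketch of 'Last-N' video feed pagination: the class
    name, API, and recency policy are assumptions for this example.
    """

    def __init__(self, n):
        self.n = n
        self.order = []  # participants, most recently active first

    def on_speech(self, participant):
        # A speaking participant moves to the front of the recency list.
        if participant in self.order:
            self.order.remove(participant)
        self.order.insert(0, participant)

    def forwarded(self):
        # Only the first N feeds are forwarded to each viewer;
        # the rest stay paused until their owners speak again.
        return self.order[:self.n]
```

With `n = 2` and speech events from alice, bob, carol, then bob again, only bob's and carol's feeds would be forwarded.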
Pub Date: 2022-07-27 DOI: 10.33965/ijcsis_2022170105
Yusuf Bozkurt, Reiner Braun, Alexander Rossmann
Literature reviews are essential for any scientific work, whether as part of a dissertation or as a stand-alone work. Scientists benefit from the fact that more and more literature is available in electronic form, and finding and accessing relevant literature has become easier through scientific databases. However, the traditional literature review remains a highly manual process, even as technologies and methods in big data, machine learning, and text mining have advanced. Especially in areas where research streams are rapidly evolving and topics are becoming more comprehensive, complex, and heterogeneous, it is challenging to provide a holistic overview and identify research gaps manually. Therefore, we have developed a framework that supports the traditional approach of conducting a literature review with machine learning and text mining methods. The framework is particularly suitable in cases where a large amount of literature is available and a holistic understanding of the research area is needed. The framework consists of several steps in which the critical mind of the scientist is supported by machine learning. The unstructured text data is transformed into a structured form through data preparation realized with text mining, making it applicable to various machine learning techniques. A concrete example in the field of smart cities makes the framework tangible.
Title: THE APPLICATION OF MACHINE LEARNING IN LITERATURE REVIEWS: A FRAMEWORK
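The two core steps the framework describes, text mining to structure the data and machine learning to group it, can be sketched with standard tooling. The snippet below is an illustrative stand-in: the toy abstracts and the choice of TF-IDF plus k-means are ours, not prescribed by the paper.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy corpus of abstracts (invented for illustration).
abstracts = [
    "smart city traffic sensors and data platforms",
    "urban mobility data for smart city planning",
    "deep learning for image classification",
    "deep convolutional networks for image classification",
]

# Text mining step: unstructured abstracts -> structured TF-IDF matrix.
X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)

# Machine learning step: group related literature into topical clusters,
# which the reviewer then inspects and labels.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Here the two smart-city abstracts should land in one cluster and the two image-classification abstracts in the other; on a real corpus the reviewer's judgment remains in the loop, as the paper stresses.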
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160203
Douglas Omwenga, Guohua Liu
Hyperspectral imaging (HSI) classification has recently become a field of interest in the remote sensing (RS) community. However, such data contain multidimensional dynamic features that make precise identification difficult, and they exhibit structurally nonlinear affinities between the gathered spectral bands and the corresponding materials. To facilitate HSI categorization systematically, we propose a spectral-spatial classification of HSI data using a 3D-2D convolutional neural network and an inception network to extract and learn in-depth spectral-spatial feature vectors. We first apply principal component analysis (PCA) to the entire HSI image to reduce the dimensionality of the original space. Second, we exploit the contiguous spatial information of the hyperspectral input features with a 2-D CNN. In addition, we use a 3-D CNN, without relying on any preprocessing, to efficiently extract deep spectral-spatial fused features. The learned spectral-spatial characteristics are concatenated and fed to the inception network layer for joint spectral-spatial learning. The correct classification is then learned with a softmax regression classifier. Finally, we evaluate our model's performance on different training set sizes of two hyperspectral remote sensing data sets (HSRSI), namely Botswana (BT) and Kennedy Space Center (KSC), and compare the experimental results with deep learning-based and state-of-the-art (SOTA) classification methods. The experimental results show that our model provides classification results competitive with state-of-the-art techniques, demonstrating considerable potential for HSRSI classification.
Title: SPECTRAL-SPATIAL CLASSIFICATION OF HYPERSPECTRAL DATA USING 3D-2D CONVOLUTIONAL NEURAL NETWORK AND INCEPTION NETWORK
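The first step of the pipeline, reducing the spectral dimensionality of the HSI cube with PCA before extracting CNN patches, can be sketched as follows; the cube here is random toy data and the component count is an assumed hyperparameter, not a value from the paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Toy stand-in for a hyperspectral cube: height x width x spectral bands.
h, w, bands, k = 10, 10, 40, 8
rng = np.random.default_rng(0)
cube = rng.normal(size=(h, w, bands))

# Flatten the spatial dimensions so each pixel is one sample with
# `bands` features, reduce the spectral dimension with PCA, then
# restore the spatial layout for subsequent patch extraction.
flat = cube.reshape(-1, bands)
reduced = PCA(n_components=k).fit_transform(flat).reshape(h, w, k)
```

The reduced cube keeps the spatial grid intact while shrinking each pixel's spectrum from 40 bands to 8 principal components.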
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160205
Hanna Koskinen, S. Aromaa, V. Goriachev
Our transport system is currently undergoing fundamental change due to the increasing use of automation. New automation solutions are being introduced in all sectors of transportation; for example, automated metros and autonomous ships feature in the visions of technology developers. There are many reasons for this ongoing trend towards higher use of automation, such as demands for sustainability and efficiency, to mention a few. In this paper, we present a research and development effort aiming at introducing an automatic tram, the SmartTram. In particular, we concentrate on how the changing role of the human (as driver, passenger, and member of other user groups) is acknowledged in the design of a new automatic tram. For this reason, we present a human factors engineering program for automated trams. The special focus is on how the relevant user groups can be involved in design within the defined program. This approach can also be utilized in other sectors when increasing automation.
Title: HUMAN FACTORS ENGINEERING PROGRAM DEVELOPMENT AND USER INVOLVEMENT IN DESIGN OF AUTOMATIC TRAM
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160201
C. G. Silva
Analysts may use matrix-based visualizations (such as heatmaps) to reveal patterns in a dataset with the help of reordering algorithms that suitably permute matrix rows and columns. One of these algorithms is Polar Sort, a pattern-focused reordering method that uses a multidimensional projection technique, Classical MDS, to reveal Band and Circumplex patterns in reorderable matrices. Despite its good reordering results regarding the mentioned patterns, Polar Sort is not scalable due to Classical MDS' asymptotic time complexity (O(n^3) for an input matrix of size n × n). In this paper, we propose a new version of this algorithm in which we replace Classical MDS with FastMap, a method with asymptotic time complexity O(n). The new algorithm (Polar Sort with FastMap, or PSF for short) permutes rows and columns according to their two-dimensional projections and uses a barycenter-based ordering identical to Polar Sort's approach. The results of an experiment indicate that PSF maintained the output quality of Polar Sort regarding the minimal span loss function, Moore stress, and circular correlation when reordering synthetic matrices. Moreover, PSF's asymptotic time complexity is O(n log n). This complexity is consistent with our experimental results, which show that PSF had a lower execution time than the other compared methods. We also show examples in which real-world matrices reordered by PSF revealed patterns similar to Band and Circumplex.
Title: REVEALING BAND AND CIRCUMPLEX PATTERNS IN REORDERABLE MATRICES USING POLAR SORT AND FAST MULTIDIMENSIONAL PROJECTIONS
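The barycenter-based ordering shared by Polar Sort and PSF can be sketched independently of the projection method. The function below assumes 2-D projections are already available (the paper obtains them with FastMap; Classical MDS or any other planar projection would do for illustration) and orders items by their angle around the centroid.

```python
import numpy as np

def polar_order(points):
    """Order items by angle around the barycenter of their 2-D projections.

    Illustrative sketch of the polar ordering step: items whose
    projections lie around a circle come out in angular order, which
    is what lays Band/Circumplex structure along the permutation.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)          # shift barycenter to origin
    angles = np.arctan2(centered[:, 1], centered[:, 0])
    return np.argsort(angles)                  # permutation of row indices
```

For four points placed at angles 0, 180, 90, and 270 degrees, the permutation visits them counter-clockwise starting from -90 degrees.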
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160202
Damiano Oriti, A. Sanna, Francesco De Pace, Federico Manuri, Francesco Tamburello, Fabrizio Ronzino
Augmented reality (AR) and virtual reality (VR) applications can take advantage of efficient digitalization of real objects, as reconstructed elements can give users a better connection between real and virtual worlds than pre-set 3D CAD models. Technology advances contribute to the spread of AR and VR technologies, which are becoming ever more widespread and popular. On the other hand, the design and implementation of virtual and extended worlds is still an open problem; affordable and robust solutions to support 3D object digitalization are still missing. This work proposes a reconstruction system that allows users to receive a 3D CAD model starting from a single image of the object to be digitalized and reconstructed. A smartphone can be used to take a photo of the object under analysis, and a remote server performs the reconstruction process by exploiting a pipeline of three deep learning methods. The accuracy and robustness of the system have been assessed in several experiments, and the main outcomes show that the proposed solution has accuracy (chamfer distance) comparable with state-of-the-art methods for 3D object reconstruction.
Title: 3D SCENE RECONSTRUCTION SYSTEM BASED ON A MOBILE DEVICE
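The evaluation metric named in the abstract, chamfer distance, can be written down directly. This is a generic brute-force formulation of the symmetric chamfer distance between point clouds, not the authors' evaluation code.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between point sets a (n,3) and b (m,3).

    For each point, find its nearest neighbour in the other cloud and
    average the squared distances; sum both directions. O(n*m) memory,
    fine for small clouds used in illustration.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # pairwise sq. dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical clouds score 0; lower is better when comparing a reconstructed mesh's sampled surface against ground truth.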
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160204
Eva Bryer, Theppatorn Rhujittawiwat, J. Rose, Colin Wilder
The goal of this paper is to modify an existing clustering algorithm with the Hunspell spell checker to specialize it for cleaning early modern European book title data. Duplicate and corrupted data are a constant concern for data analysis, and clustering has been identified as a robust tool for normalizing and cleaning data such as ours. In particular, our data comprise over 5 million books published in European languages between 1500 and 1800 in the Machine-Readable Cataloging (MARC) data format, from 17,983 libraries in 123 countries. However, as each library catalogued its records individually, many duplicative and inaccurate records exist in the data set. Additionally, each language evolved over the 300-year period we are studying, and as such many words had their spellings altered. Without cleaning and normalizing this data, it would be difficult to find coherent trends, as much of the data may be missed in a query. In previous research, we identified Prediction by Partial Matching as providing the greatest increase in base accuracy when applied to dirty data of similar construction to our data set. However, there are many cases in which the correct book title may not be the most common one, either when only two values exist in a cluster or when the dirty title exists in more records. In these cases, a language-agnostic clustering algorithm would normalize to the incorrect title and lower the overall accuracy of the data set. By implementing the Hunspell spell checker in the clustering algorithm, using it to rank titles by the number of words not found in its dictionary, we can drastically lower the number of such cases. Indeed, this ranking algorithm increased the overall accuracy of the clustered data by as much as 25% over the unmodified Prediction by Partial Matching algorithm.
Title: IMPROVEMENT OF CLUSTERING ALGORITHMS BY IMPLEMENTATION OF SPELLING BASED RANKING
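The spelling-based ranking can be sketched as follows. A plain Python set stands in for the Hunspell dictionaries the paper uses, and the titles are invented examples; the point is that the correctly spelled title wins even when a misspelled variant is more frequent in the cluster.

```python
from collections import Counter

def rank_titles(cluster, dictionary):
    """Pick a canonical title for a cluster of near-duplicate titles.

    Illustrative sketch: prefer the title with the fewest
    out-of-dictionary words, breaking ties by frequency in the cluster
    (the frequency-only rule is what the spelling rank corrects).
    """
    counts = Counter(cluster)

    def unknown_words(title):
        # With Hunspell this would be a spell-check call per word;
        # here a set membership test stands in for it.
        return sum(w.lower() not in dictionary for w in title.split())

    return min(counts, key=lambda t: (unknown_words(t), -counts[t]))
```

Given two copies of "The Histtory of England" and one of "The History of England", frequency alone would normalize to the misspelled form; the spelling rank selects the correct one.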
Pub Date: 2021-11-01 DOI: 10.33965/ijcsis_2021160206
Tsuyoshi Miyashita, Ryota Imai, Masaki Kondo, Tadasuke Furuya
In Japan, the coastal shipping industry currently faces the problems of a shrinking and aging seafarer workforce. The closed sea areas in the central bays and ports of Japan are navigated by many ships. In these waters, an inexperienced ship operator may make an error of judgment due to extreme tension, which may cause a marine accident. To address these problems, the authors built a prototype that provides ship operators on board with maneuvering assistance from shore. We developed the prototype using wireless and mobile communication, a VPN, and a web browser. As verification, we conducted an experiment with an actual ship at sea and discuss the prototype's effectiveness. The experiment showed that remote maneuvering at approximately the same level as onboard operation is possible. As a result, it was confirmed that this method could be made effective through future improvements.
Title: DESIGN AND PROTOTYPING OF WEB-BASED SUPPORT FOR SHIP-HANDLING SYSTEM VIA MOBILE WIRELESS COMMUNICATION
Pub Date: 2021-01-26 DOI: 10.33965/ijcsis_2021160101
G. Lagogiannis
In this paper, we deal with the dynamic connectivity problem, targeting deterministic worst-case poly-logarithmic time complexities. First, we show that instead of solving the dynamic connectivity problem on a general graph G, it suffices to solve it on a graph we name an aligned double-forest, which has only 2n-1 edges, where n is the number of vertices. Then we present an algorithm that achieves all the operations in logarithmic worst-case time on a graph we name a star-tied forest, which consists of a star and a forest (of trees), both defined on the same set of vertices. The star-tied forest, which can be seen as a special case of an aligned double-forest, is more complicated than a forest, on which deterministic worst-case logarithmic time complexities have already been obtained by means of the Dynamic Trees algorithm introduced by Sleator and Tarjan (1983). For implementing the operations, we build upon Dynamic Trees.
Title: DYNAMIC CONNECTIVITY: SOME GRAPHS OF INTEREST
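For background, the insertions-only version of the connectivity problem is classically solved with union-find, sketched below; it is precisely edge deletions, which union-find cannot support, that make the fully dynamic problem the paper studies hard. This sketch illustrates the problem setting, not the paper's algorithm.

```python
class DSU:
    """Incremental (insertions-only) connectivity via union-find.

    connected(u, v) is answered in near-constant amortized time under
    edge insertions; supporting deletions as well, deterministically and
    in worst-case poly-logarithmic time, is the open problem targeted
    by the paper's aligned double-forest / star-tied forest approach.
    """

    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru != rv:
            self.parent[ru] = rv  # merge the two components

    def connected(self, u, v):
        return self.find(u) == self.find(v)
```

Note that once `union(1, 2)` merges two components, there is no efficient way to undo it, which is exactly why deletions require different machinery.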
Pub Date: 2021-01-26 DOI: 10.33965/ijcsis_2021160106
G. Jabbour, Jason J. Jabbour
The insider threat is a problem that organizations must deal with. Regardless of size, mission, or location, any company that uses information systems is potentially vulnerable to insider attacks. Federal agencies, non-governmental organizations, and data centers alike face the risk of being attacked by an insider. Countering the insider threat is a difficult and daunting task. Organizations concerned with the problem usually train their employees on security-related matters, rules-of-behavior policies, and the consequences of committing criminal activities. More technically oriented solutions include enhanced credentialing and access control, and the use of monitoring tools that provide insight into the health and status of systems. This paper addresses the deficiencies of widely used monitoring tools and strategies. It discusses the difference between traditional security approaches and autonomic-based self-protection. The paper then proposes a solution that equips a system with innate self-defense mechanisms, relieving the system of reliance on human intervention. The paper introduces the Insider Threat Minimization and Mitigation Framework, which equips systems with self-defense mechanisms such that a system can instantaneously respond to potential threats and defend itself against users who have unfettered access to it. The framework employs the autonomous demotion of power users' access privileges based on analysis and evaluation of the user's risk level. The paper presents the details of the proposed framework and simulates its effectiveness within a data center environment of mission-critical systems.
Title: MITIGATING THE INSIDER THREAT TO INFORMATION SYSTEMS USING FULLY EMBEDDED AND INSEPARABLE AUTONOMIC SELF-PROTECTION CAPABILITY
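The autonomous privilege demotion at the heart of the framework can be sketched as a simple policy function. The thresholds, level encoding, and scores below are illustrative assumptions for this example, not values from the paper.

```python
def adjust_privileges(risk_score, current_level,
                      demote_at=0.7, revoke_at=0.9):
    """Autonomously adjust a user's access level from a risk score.

    Hypothetical sketch of risk-driven demotion: thresholds and the
    level scheme (2 = admin, 1 = standard, 0 = revoked) are invented
    for illustration. risk_score is assumed to be in [0, 1].
    """
    if risk_score >= revoke_at:
        return 0                      # revoke access entirely
    if risk_score >= demote_at:
        return min(current_level, 1)  # demote power users to standard
    return current_level              # below threshold: no action
```

Embedded in the system itself, a policy like this reacts instantaneously rather than waiting for an administrator to review monitoring alerts, which is the framework's central argument.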