HETEROGENEOUS DESIGN AND EFFICIENT CPU-GPU IMPLEMENTATION OF COLLISION DETECTION
Pub Date: 2019-01-01 | DOI: 10.33965/ijcsis_2019140202
Mohid Tayyub, G. Khan
Collision detection is a wide-ranging real-world application and one of the key components of gaming, simulation, and animation. Efficient collision detection algorithms are required because the task is executed repeatedly throughout the course of an application. Moreover, due to its computationally intensive nature, researchers are investigating ways to reduce its execution time. This paper furthers that research by devising a parallel CPU-GPU implementation of both broad- and narrow-phase collision detection with heterogeneous workload sharing. An important aspect of co-scheduling is determining an optimal CPU-GPU partition ratio. We also showcase a successive-approximation approach for the CPU-GPU implementation of collision detection. The paper demonstrates that the framework is applicable not only to CPU/GPU systems but also to other system configurations, obtaining a peak performance improvement of around 18%.
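The abstract does not spell out how the successive approximation is performed; the sketch below is one plausible reading (the kernel callables are hypothetical placeholders, not the paper's implementation): bisect the fraction of candidate pairs assigned to the CPU until both devices take roughly equal time on their share.

```python
import time

def find_partition_ratio(run_on_cpu, run_on_gpu, pairs, iterations=8):
    """Bisect the CPU's share of candidate pairs until the CPU and GPU
    finish their portions in roughly equal time.

    run_on_cpu / run_on_gpu are hypothetical synchronous callables that
    run broad-phase collision checks on a slice of pairs; they stand in
    for the paper's actual kernels, which the abstract does not show.
    """
    lo, hi = 0.0, 1.0
    for _ in range(iterations):
        ratio = (lo + hi) / 2
        split = int(len(pairs) * ratio)

        t0 = time.perf_counter()
        run_on_cpu(pairs[:split])
        t_cpu = time.perf_counter() - t0

        t0 = time.perf_counter()
        run_on_gpu(pairs[split:])
        t_gpu = time.perf_counter() - t0

        if t_cpu > t_gpu:
            hi = ratio   # CPU is the bottleneck: shrink its share
        else:
            lo = ratio   # GPU is the bottleneck: grow the CPU share
    return (lo + hi) / 2
```

Each iteration halves the search interval, so a near-balanced split is reached in a handful of timed runs rather than an exhaustive sweep of candidate ratios.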
{"title":"HETEROGENEOUS DESIGN AND EFFICIENT CPU-GPU IMPLEMENTATION OF COLLISION DETECTION","authors":"Mohid Tayyub, G. Khan","doi":"10.33965/ijcsis_2019140202","DOIUrl":"https://doi.org/10.33965/ijcsis_2019140202","url":null,"abstract":"Collison detection is a wide-ranging real-world application. It is one of the key components used in gaming, simulation and animation. Efficient algorithms are required for collision detection as it is repeatedly executed throughout the course of an application. Moreover, due to its computationally intensive nature researchers are investigating ways to reduce its execution time. This paper furthers those research works by devising a parallel CPU-GPU implementation of both broad and narrow phase collision detection with heterogenous workload sharing. An important aspect of co-scheduling is to determine an optimal CPU-GPU partition ratio. We also showcase a successive approximation approach for CPU-GPU implementation of collision detection. The paper demonstrates that the framework is not only applicable to CPU/GPU systems but to other system configuration obtaining a peak performance improvement in the range of 18%.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"15 5 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2019-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85395348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient snapshot method for all-flash array
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130208
Miho Imazaki, Norio Shimozono, N. Komoda
{"title":"Efficient snapshot method for all-flash array","authors":"Miho Imazaki, Norio Shimozono, N. Komoda","doi":"10.33965/IJCSIS_2018130208","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130208","url":null,"abstract":"","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"121 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75491833","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automatic generation of ontologies: a hierarchical word clustering approach
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130206
Smail Sellah, V. Hilaire
In the context of globalization, companies need to capitalize on their knowledge. The knowledge of a company is present in two forms: tacit and explicit. Explicit knowledge represents all formalized information, i.e., all documents (PDF, Word, etc.). Tacit knowledge is present in documents and in the minds of employees; this kind of knowledge is not formalized and requires a reasoning process to discover. The proposed approach focuses on extracting tacit knowledge from textual documents. In this paper, we propose hierarchical word clustering as an improvement on the word clusters generated in previous work, and we also propose an approach to extract relevant bigrams and trigrams. We use the Reuters-21578 corpus to validate our approach. Our overall goal is to ease the automatic building of ontologies.
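The abstract does not state how bigram and trigram relevance is scored; as an illustrative stand-in on the same Reuters-21578 corpus, NLTK's collocation finders can rank candidates by pointwise mutual information (PMI) after filtering out rare, noisy n-grams:

```python
import nltk
from nltk.corpus import reuters
from nltk.collocations import BigramCollocationFinder, TrigramCollocationFinder
from nltk.metrics import BigramAssocMeasures, TrigramAssocMeasures

nltk.download("reuters")  # NLTK ships a Reuters-21578 distribution

# Keep lowercase alphabetic tokens only, dropping numbers and punctuation.
words = [w.lower() for w in reuters.words() if w.isalpha()]

bigram_finder = BigramCollocationFinder.from_words(words)
bigram_finder.apply_freq_filter(5)            # drop rare, noisy pairs
top_bigrams = bigram_finder.nbest(BigramAssocMeasures.pmi, 10)

trigram_finder = TrigramCollocationFinder.from_words(words)
trigram_finder.apply_freq_filter(5)
top_trigrams = trigram_finder.nbest(TrigramAssocMeasures.pmi, 10)

print(top_bigrams)
print(top_trigrams)
```

PMI favors word tuples that co-occur far more often than chance, which tends to surface fixed phrases and domain terms of the kind an ontology-building pipeline would want to keep.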
{"title":"Automatic generation of ontologies: a hierarchical word clustering approach","authors":"Smail Sellah, V. Hilaire","doi":"10.33965/IJCSIS_2018130206","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130206","url":null,"abstract":"In the context of globalization, companies need to capitalize on their knowledge. The knowledge of a company is present in two forms tacit and explicit. Explicit knowledge represents all formalized information i.e all documents (pdf, words ...). Tacit knowledge is present in documents and mind of employees, this kind of knowledge is not formalized, it needs a reasoning process to discover it. The approach proposed focus on extracting tacit knowledge from textual documents. In this paper, we propose hierarchical word clustering as an improvement of word clusters generated in previous work, we also proposed an approach to extract relevant bigrams and trigrams. We use Reuters-21578 corpus to validate our approach. Our global work aims to ease the automatic building of ontologies.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"73 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84286169","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
First in-depth analysis of enterprise architectures and models for higher education institutions
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130203
Felix Sanchez-Puchol, J. Pastor-Collado, Baptista Borrell
Enterprise Reference Architectures (ERAs) and Reference Models (RMs) have emerged over the last years as relevant instruments for improving the quality and effectiveness of enterprise architecture (EA) practice. Whilst a wide variety of ERAs and RMs have been proposed for different industries and types of business, only a few have been devoted to the Higher Education (HE) sector. In this paper, we propose an in-depth analysis process, which we then apply to critically review, compare and classify 20 existing ERAs and RMs targeted at the HE domain. Our process uses a common set of 12 definitional attributes. In so doing, we contribute to the existing body of knowledge by providing a unified, structured and comprehensive analysis process and catalog of these abstract EA artifacts. With this we aim to create awareness of their potential practical utility and to increase their visibility, transparency and the opportunity for their reuse by different HE stakeholders. Hence, the proposed process and catalog are expected to be useful both for practitioners and researchers by providing a panoramic view of more or less ready-to-use existing ERAs and RMs for HE, as well as a structured way to regard them. Moreover, by specifying their main scope, coverage and extent of knowledge captured, the process and catalog might become a valuable tool for guiding HE stakeholders in making better-informed decisions about selecting suitable architectural artifacts to be adapted or applied in the EA practices conducted at their respective institutions.
{"title":"First in-depth analysis of enterprise architectures and models for higher education institutions","authors":"Felix Sanchez-Puchol, J. Pastor-Collado, Baptista Borrell","doi":"10.33965/IJCSIS_2018130203","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130203","url":null,"abstract":"Enterprise Reference Architectures (ERAs) and Reference Models (RMs) have emerged over the last years as relevant instruments for improving the quality and effectiveness of enterprise architecture (EA) practice. Whilst a wide variety of different ERAs and RMs have been proposed for different industries and types of business, only few of them have been devoted to the Higher Education (HE) sector. In this paper, we propose an in-depth analysis process which we then critically apply to review, compare and classify 20 existing ERAs and RMs targeted to the HE domain. Our process uses a common set of 12 definitional attributes. In so doing, we contribute to the existing body of knowledge by providing a unified, structured and comprehensive analysis process and catalog of these abstract EA artifacts. With this we aim to create awareness on their potential practical utility and to increase their visibility, transparency and opportunity for their reusability by different HE stakeholders. Hence, the proposed process and catalog is expected to be useful both for practitioners and researchers by providing a panoramic view of more or less ready-to-use existing ERAs and RMs for HE, as well as a structure way to regard them. Moreover, and by specifying their main scope, coverage and extend of knowledge captured, the process and catalog might become a valuable tool for providing guidance to HE stakeholders on making better-informed decisions on the selection of suitable architectural artifacts for being conveniently adapted or applied in different EA practices conducted at their respective institutions.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"41 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85222434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video color grading via deep neural networks
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130201
J. Gibbs
The task of color grading (or color correction) for film and video is significant and complex, involving aesthetic and technical decisions that require a trained operator and a good deal of time. In order to determine whether deep neural networks are capable of learning this complex aesthetic task, we compare two network frameworks, a classification network and a conditional generative adversarial network (cGAN), examining the quality and consistency of their output as potential automated solutions to color correction. Results are very good for both networks, though each exhibits problem areas. The classification network has difficulty generalizing due to the need to collect, and especially to label, all the data used to train it. The cGAN, on the other hand, can use unlabeled data, which is much easier to collect. While the classification network does not directly alter images, only identifying image problems, the cGAN creates a new image, introducing potential image degradation in the process; thus, multiple adjustments to the network are needed to create high-quality output. We find that the data labeling issue for the classification network is a less tractable problem than the image correction and continuity issues discovered with the cGAN method, which have direct solutions. We therefore conclude that the cGAN is the more promising network with which to automate color correction and grading.
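The abstract does not specify the network architectures; the sketch below is a generic pix2pix-style cGAN training step in PyTorch, with toy convolutional stacks standing in for the authors' actual generator and discriminator, to illustrate how the adversarial term and an L1 reconstruction term (which curbs the image degradation mentioned above) fit together:

```python
import torch
import torch.nn as nn

# Toy stand-ins: real image-to-image models typically use U-Net
# generators and PatchGAN discriminators.
G = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
D = nn.Sequential(  # judges (input, output) pairs, hence 6 channels
    nn.Conv2d(6, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),
)
adv_loss, l1_loss = nn.BCEWithLogitsLoss(), nn.L1Loss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(raw, graded, lambda_l1=100.0):
    """One cGAN update: raw = ungraded frame, graded = reference grade."""
    fake = G(raw)

    # Discriminator: separate real pairs from generated pairs.
    opt_d.zero_grad()
    d_real = D(torch.cat([raw, graded], dim=1))
    d_fake = D(torch.cat([raw, fake.detach()], dim=1))
    loss_d = (adv_loss(d_real, torch.ones_like(d_real))
              + adv_loss(d_fake, torch.zeros_like(d_fake)))
    loss_d.backward()
    opt_d.step()

    # Generator: fool D while staying close to the reference grade;
    # the L1 term limits image degradation in the generated frame.
    opt_g.zero_grad()
    d_fake = D(torch.cat([raw, fake], dim=1))
    loss_g = (adv_loss(d_fake, torch.ones_like(d_fake))
              + lambda_l1 * l1_loss(fake, graded))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```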
{"title":"Video color grading via deep neural networks","authors":"J. Gibbs","doi":"10.33965/IJCSIS_2018130201","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130201","url":null,"abstract":"The task of color grading (or color correction) for film and video is significant and complex, involving aesthetic and technical decisions that require a trained operator and a good deal of time. In order to determine whether deep neural networks are capable of learning this complex aesthetic task, we compare two network frameworks—a classification network, and a conditional generative adversarial network, or cGAN—examining the quality and consistency of their output as potential automated solutions to color correction. Results are very good for both networks, though each exhibits problem areas. The classification network has issues with generalizing due to the need to collect and especially to label all data being used to train it. The cGAN on the other hand can use unlabeled data, which is much easier to collect. While the classification network does not directly affect images, only identifying image problems, the cGAN, creates a new image, introducing potential image degradation in the process; thus multiple adjustments to the network need to be made to create high quality output. We find that the data labeling issue for the classification network is a less tractable problem than the image correction and continuity issues discovered with the cGAN method, which have direct solutions. Thus we conclude the cGAN is the more promising network with which to automate color correction and grading.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"44 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72511810","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance evaluation of TCP spurious timeout detection methods under delay spike and packet loss emulating LTE handover
Pub Date: 2018-12-17 | DOI: 10.33965/ijcsis_2018130202
Toshihiko Kato, M. Moriyama, R. Yamamoto, S. Ohzahata
This paper describes a performance evaluation of three well-known spurious timeout detection methods implemented within TCP: Eifel, DSACK, and F-RTO. The evaluation uses experiments with a network emulator that emulates handovers in LTE (Long Term Evolution) networks; specifically, the emulator can insert time-variant delay and packet loss into TCP streams. Taking account of the lossless handover in LTE, this paper shows results for two cases: one where only a delay spike is inserted, and one where both a delay spike and packet loss are inserted. In the former case, the three methods show similar performance, but in the latter case, the performance of Eifel is worse than the others. This paper also shows the results when two methods are used together in the delay spike and packet loss case, and indicates that performance is not improved even if multiple spurious timeout detection methods are implemented.
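To see why a handover delay spike produces the spurious timeouts that Eifel, DSACK, and F-RTO must then detect, consider the standard retransmission-timeout estimator of RFC 6298: a spike that exceeds the RTO built from earlier, smaller RTT samples makes TCP retransmit a segment that was merely delayed, not lost. A minimal sketch follows (the RTT values are illustrative, not measurements from the paper):

```python
# RFC 6298 constants: smoothing gains, clock granularity, minimum RTO.
ALPHA, BETA, GRANULARITY, MIN_RTO = 1 / 8, 1 / 4, 0.001, 1.0

def update_rto(srtt, rttvar, rtt_sample):
    """Update SRTT/RTTVAR and return the new RTO (all in seconds)."""
    if srtt is None:  # first measurement (RFC 6298, Section 2.2)
        srtt, rttvar = rtt_sample, rtt_sample / 2
    else:             # RTTVAR is updated before SRTT (Section 2.3)
        rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - rtt_sample)
        srtt = (1 - ALPHA) * srtt + ALPHA * rtt_sample
    rto = max(MIN_RTO, srtt + max(GRANULARITY, 4 * rttvar))
    return srtt, rttvar, rto

srtt = rttvar = None
rto = MIN_RTO
for rtt in [0.050, 0.060, 0.050, 0.055]:  # steady RTTs before handover
    srtt, rttvar, rto = update_rto(srtt, rttvar, rtt)

spike = 1.8  # hypothetical handover delay in seconds
print(f"RTO = {rto:.3f}s, spike = {spike}s "
      f"-> spurious timeout: {spike > rto}")
```

With steady ~50 ms RTTs the RTO sits at its 1 s floor, so a 1.8 s handover spike fires the retransmission timer even though no packet was lost; the three detection methods differ only in how they recognize, after the fact, that the retransmission was unnecessary.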
{"title":"Performance evaluation of tcp spurious timeout detection methods under delay spike and packet loss emulating lte handover","authors":"Toshihiko Kato, M. Moriyama, R. Yamamoto, S. Ohzahata","doi":"10.33965/ijcsis_2018130202","DOIUrl":"https://doi.org/10.33965/ijcsis_2018130202","url":null,"abstract":"This paper describes the performance evaluation of the well-known spurious timeout detection methods implemented within TCP, Eifel, DSACK, and F-RTO, through experiments with the network emulator emulating handovers over LTE (Long Term Evolution) networks. Specifically, the emulator supports to insert the time-variant delay and packet loss in TCP streams. By taking account of the lossless handover in LTE, this paper shows the results for the cases only the delay spike is inserted, and both delay spike and packet loss are inserted. In the former case, the three methods show the similar performance, but in the latter case, the performance of Eifel is worse than the others. This paper also shows the results when two methods are used together for the delay spike and packet loss case, and indicates that the performance is not improved even if multiple spurious timeout detection methods are implemented.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"199 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77262995","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Feasibility analysis of using the Maui Scheduler for job simulation of large-scale PBS-based clusters
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130204
Georg Zitzlsberger, B. Jansik, J. Martinovič
For large-scale High Performance Computing centers with a wide range of different projects and heterogeneous infrastructures, efficiency is an important consideration. Understanding how compute jobs are scheduled is necessary for improving job scheduling strategies in order to optimize cluster utilization and job wait times. This increases the importance of a reliable simulation capability, which in turn requires accuracy and comparability with historic workloads from the cluster. Not all job schedulers have a simulation capability, including the Portable Batch System (PBS) resource manager. Hence, PBS-based centers have no direct way to simulate changes and optimizations before they are applied to the production system. We propose and discuss how to run job simulations for large-scale PBS-based clusters with the Maui Scheduler. This includes awareness of node downtimes, both scheduled and unexpected. For validation purposes, we use historic workloads collected at the IT4Innovations supercomputing center. The viability of our approach is demonstrated by measuring the accuracy of the simulation results compared to the real workloads. In addition, we discuss how changing the simulator's time step resolution affects accuracy as well as simulation times. We are confident that our approach is also transferable to enable job simulations for other computing centers using PBS.
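The abstract leaves the accuracy metric unspecified; one plausible comparison, sketched below with a hypothetical field layout (not the IT4Innovations log format), matches simulated jobs to historic jobs by id and averages the per-job wait-time error:

```python
def mean_wait_time_error(real_jobs, sim_jobs):
    """Mean absolute error of per-job wait times, matched by job id.

    Both arguments map job_id -> (submit_time, start_time) in seconds;
    this layout is an assumption for illustration, not the format used
    in the paper.
    """
    common = real_jobs.keys() & sim_jobs.keys()
    if not common:
        raise ValueError("no overlapping job ids to compare")
    errors = []
    for jid in common:
        real_wait = real_jobs[jid][1] - real_jobs[jid][0]
        sim_wait = sim_jobs[jid][1] - sim_jobs[jid][0]
        errors.append(abs(real_wait - sim_wait))
    return sum(errors) / len(errors)

# Tiny worked example with two jobs.
real = {"1001": (0, 120), "1002": (30, 300)}
sim = {"1001": (0, 100), "1002": (30, 360)}
print(f"mean |wait-time error| = {mean_wait_time_error(real, sim):.1f} s")
```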
{"title":"Feasibility analysis of using the maui scheduler for job simulation of large-scale pbs based clusters","authors":"Georg Zitzlsberger, B. Jansik, J. Martinovič","doi":"10.33965/IJCSIS_2018130204","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130204","url":null,"abstract":"For large-scale High Performance Computing centers with a wide range of different projects and heterogeneous infrastructures, efficiency is an important consideration. Understanding how compute jobs are scheduled is necessary for improving the job scheduling strategies in order to optimize cluster utilization and job wait times. This increases the importance of a reliable simulation capability, which in turn requires accuracy and comparability with historic workloads from the cluster. Not all job schedulers have a simulation capability, including the Portable Batch System (PBS) resource manager. Hence, PBS based centers have no direct way to simulate changes and optimizations before they are applied to the production system. We propose and discuss how to run job simulations for large-scale PBS based clusters with the Maui Scheduler. This also includes awareness of node downtimes, scheduled and unexpected. For validation purposes, we use historic workloads collected at the IT4Innovations supercomputing center. The viability of our approach is demonstrated by measuring the accuracy of the simulation results compared to the real workloads. In addition, we discuss how the change of the simulator’s time step resolution affects the accuracy as well as simulation times. We are confident that our approach is also transferable to enable job simulations for other computing centers using PBS.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"87 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90653987","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
URI-aware user input interfaces for the unobtrusive reference to linked data
Pub Date: 2018-12-17 | DOI: 10.33965/IJCSIS_2018130205
André Langer, Christoph Göpfert, M. Gaedke
Using appropriate entity URIs is a crucial factor for the success of semantic-enabled applications for data management and data retrieval. Data applications that collect data to build knowledge graphs especially rely on correct concept identifiers rather than ambiguous literals. Collecting such identifiers involves human interaction in the web frontend, ideally without annoying the user, but appropriate user interfaces for this task are still a challenge. In this article, we focus on the design of form elements that unobtrusively accept input data for both human and machine interaction from a semantic point of view. Motivated by web-based scholarly document-submission systems, we first present a brief current-state analysis of the support for semantic input operations, then investigate how these user input interfaces can be improved for concept-linking purposes with auto-suggestion behavior, and finally evaluate the advantages and acceptance of our approach with a proof-of-concept implementation and a user survey.
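As a hedged illustration of such an auto-suggestion backend (the paper's own knowledge base and endpoint are not named in the abstract), a form element could query Wikidata's public wbsearchentities API on each debounced keystroke and store the selected concept URI instead of the raw literal:

```python
import requests

WIKIDATA_API = "https://www.wikidata.org/w/api.php"

def suggest_uris(prefix, limit=5, lang="en"):
    """Return (label, description, concept URI) candidates for a prefix."""
    resp = requests.get(WIKIDATA_API, params={
        "action": "wbsearchentities",
        "search": prefix,
        "language": lang,
        "format": "json",
        "limit": limit,
    }, timeout=10)
    resp.raise_for_status()
    return [(hit["label"], hit.get("description", ""), hit["concepturi"])
            for hit in resp.json()["search"]]

# The frontend shows labels and descriptions to the human; the
# machine-readable concept URI is what gets submitted with the form.
for label, desc, uri in suggest_uris("Chemnitz"):
    print(f"{label} ({desc}) -> {uri}")
```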
{"title":"Uri-aware user input interfaces for the unobtrusive reference to linked data","authors":"André Langer, Christoph Göpfert, M. Gaedke","doi":"10.33965/IJCSIS_2018130205","DOIUrl":"https://doi.org/10.33965/IJCSIS_2018130205","url":null,"abstract":"Using appropriate entity URIs is a crucial factor for the success of semantic-enabled applications for data management and data retrieval. Especially data applications that collect data to build knowledge graphs rely on correct concept identifiers in favor of ambiguous literals. This collection involves human interaction in the web frontend without annoying the user. But appropriate user interfaces for this task are still a challenge. In this article, we focus on the design of form elements that unobtrusively allow input data both for human and machine interaction from a semantic point of view. Motivated by web-based scholarly document-submission systems, we first present a brief current-state analysis on the support of semantic input operations, investigate how these users input interfaces can be improved for concept linking purposes with an auto-suggestion behavior and finally evaluate with a proof-of concept implementation and user survey the advantages and acceptance of our approach.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"136 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86832066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Evaluating user vulnerabilities vs. phisher skills in spear phishing
Pub Date: 2018-12-17 | DOI: 10.33965/ijcsis_2018130207
Mathew Nicho, H. Fakhry, Uche Egbue
Spear phishing emails pose great danger to employees of organizations due to the inherent weakness of employees in identifying the threat from spear phishing cues, as well as the spear phisher's skill in crafting contextually convincing emails. This raises the main question of which construct (user vulnerabilities or phisher skills) has a greater influence on the vulnerable user. Researchers have provided ample evidence of user vulnerabilities, namely the desire for monetary gain, the curiosity of the computer user, carelessness on the part of the user, the trust placed in the purported sender, and a lack of awareness on the part of the computer user. However, there is a lack of research on the magnitude of each of these factors in influencing an unsuspecting user to fall for a phishing or spear phishing attack, which we explore in this paper. While user vulnerabilities pose a major risk, the effect of the spear phisher's ability to skillfully craft convincing emails (using fear appeals, urgency of action, and email contextualization) to trap even skilled IT security personnel is an area that needs to be explored. Therefore, we explore the relationship between the two major constructs, 'user vulnerabilities' and 'email contextualization', through the theory of planned behavior, with the objective of finding the major factors that lead computer users to bite the phisher's bait. In this theoretical version of the paper, we present the resulting two constructs to be tested.
{"title":"Evaluating user vulnerabilities vs phisher skills in spear phishing","authors":"Mathew Nicho, H. Fakhry, Uche Egbue","doi":"10.33965/ijcsis_2018130207","DOIUrl":"https://doi.org/10.33965/ijcsis_2018130207","url":null,"abstract":"Spear phishing emails pose great danger to employees of organizations due to the inherent weakness of the employees in identifying the threat from spear phishing cues, as well as the spear phisher’s skill in crafting contextually convincing emails. This raises the main question of which construct (user vulnerabilities or phisher skills) has a greater influence on the vulnerable user. Researchers have provided enough evidence of user vulnerabilities, namely the desire for monetary gain, curiosity of the computer user, carelessness on the part of the user, the trust placed in the purported sender by the user, and a lack of awareness on the part of the computer user. However, there is a lack of research on the magnitude of each of these factors in influencing an unsuspecting user to fall for a phishing or spear phishing attack which we explored in this paper. While user vulnerabilities pose major risk, the effect of the spear phisher’s ability in skillfully crafting convincing emails (using fear appeals, urgency of action, and email contextualization) to trap even skillful IT security personnel is an area that needs to be explored. Therefore, we explored the relationships between the two major constructs namely ‘user vulnerabilities’ and ‘email contextualization’, through the theory of planned behavior with the objective to find out the major factors that lead to computer users biting the phishers’ bait. In this theoretical version of the paper, we provided the resulting two constructs that needed to be tested.","PeriodicalId":41878,"journal":{"name":"IADIS-International Journal on Computer Science and Information Systems","volume":"12 2 1","pages":""},"PeriodicalIF":0.2,"publicationDate":"2018-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90172557","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}