Exploring the Influence of Organizational Context on Cross-boundary Information-Sharing Initiatives: The Case of the Saudi’s Government Secure Bus
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00036
Mohammed A. Gharawi, Hashim H. Alneami
This study addresses the organizational factors influencing cross-boundary information sharing (CBIS) initiatives within the context of Saudi Arabia (SA). The study starts by synthesizing the pertinent literature toward an integrated model of the organizational factors influencing CBIS. A qualitative research approach guided the research, and data were collected through interviews and documentation. The study shows that the adoption of the Government Secure Bus (GSB), implemented to facilitate information sharing between government agencies in SA, is influenced by nine factors identified in previous research: the goals and interests of participating organizations, trust, executive support, risks, costs, benefits, authority and hierarchical structures, organizational culture, and leadership. The study also points to three additional factors that influence GSB adoption: mimetic pressures, e-government transformation measurement, and organizations’ perception of data quality.
{"title":"Exploring the Influence of Organizational Context on Cross-boundary Information-Sharing Initiatives: The Case of the Saudi’s Government Secure Bus","authors":"Mohammed A. Gharawi, Hashim H. Alneami","doi":"10.1109/ICICT50521.2020.00036","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00036","url":null,"abstract":"This study addresses the organizational factors influencing cross-boundary information sharing (CBIS) initiatives within the context of Saudi Arabia (SA). The study starts by synthesizing the pertinent literature toward implementing an integrated model for the organizational factors influencing CBIS. A qualitative research approach was used to guide the research and the data was collected using interviews and documentation. The study shows that the adoption of the Government Secure Bus (GSB), implemented to facilitate information sharing between government agencies in SA, is influenced by nine factors identified by previous research. These factors include goals and interests of participating organizations, trust, executive support, risks, costs, benefits, authority and hierarchical structures, organizational culture, and leadership. Additionally, the study pointed to three additional factors that influence GSB adoption. The additional factors include mimetic pressures, e-government transformation measurement, and organizations’ perception of data quality.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"56 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126560531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Intrusion Detection: Issues, Problems and Solutions
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00070
O. A. Adeleke
Computers have become ubiquitous. They now control some of our most complicated industrial systems and are used in virtually every field of human endeavor to store, process, and transfer sensitive data. However, this increased reliance on computers and the internet has also increased the severity and consequences of possible intrusions by malicious actors. Furthermore, the sophistication of intrusive attacks continues to increase at a very high rate, making it more difficult for organizations to stay ahead of attackers. These factors make the study of intrusion detection very important. In this paper, we therefore carry out a detailed study of intrusion detection, discussing popular attacks, examining problems associated with their detection, and exploring possible solutions.
{"title":"Intrusion Detection: Issues, Problems and Solutions","authors":"O. A. Adeleke","doi":"10.1109/ICICT50521.2020.00070","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00070","url":null,"abstract":"Computers have become ubiquitous. They now control some of our most complicated industrial systems and they are also used in virtually every field of human endeavor today, to store, process and transfer sensitive data. However, this increased popularity of computers and internet, has also increased the severity and consequences of possible intrusion by malicious actors. Furthermore, the level of sophistication of intrusive attacks seen, continues to increase at a very high rate, making it more difficult for organizations to stay ahead of attackers. These factors make the study of intrusion detection very important. Therefore, in this paper, we carry out a detail study of intrusion detection, discussing popular attacks, examining problems associated with their detection and exploring possible solutions.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131370844","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Short-Term Prediction Model for Multi-currency Exchange Using Artificial Neural Network
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00024
Isha Zameer Memon, Shahnawaz Talpur, Sanam Narejo, Aisha Zahid Junejo, Engr. Fawwad Hassan
Forecasting exchange rates is a serious problem that is receiving increasing attention, particularly because of its difficulty and practical applications. Artificial neural networks (ANNs) have been widely used as a promising alternative approach to this forecasting task because of several distinguishing features. Research efforts on ANNs for forecasting exchange rates are extensive, and in this paper we attempt to provide a survey of research in this area. Several design factors significantly affect the accuracy of neural network forecasts, including the selection of input variables, data preparation, and network architecture. There is no consensus about these factors; in different cases, different choices have their own effectiveness. We also describe the combination of ANNs with other techniques and report comparisons between the performance of ANNs and that of other forecasting methods, finding mixed results. Finally, future research directions in this area are discussed. This paper presents the forecasting of top traded currencies using different machine learning models, covering top foreign exchange (Forex) currency pairs with a hybrid comparison of a Support Vector Regressor (SVR), an Artificial Neural Network (ANN), Short-Term Memory (STM), and a neural network with hidden layers. The models predict the exchange rates between the world's top traded currencies, for example USD/PKR, from daily data spanning roughly 30-39 years up to December 2018.
mpiPython: A Robust Python MPI Binding
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00023
Hee-Cheon Park, Joshus DeNio, Jeongyun Choi, Hanku Lee
Over the last two decades, Python has become one of the most popular programming languages and is used to develop and analyze data-intensive scientific and engineering applications in areas such as big data analytics, social media, data science, physics, psychology, healthcare, and political science. Moreover, demand for supporting Python data-parallel applications in these areas is growing rapidly. There have been international efforts to produce message passing interface (MPI) bindings for Python to support parallel computing, but specific challenges remain in improving these bindings. The main purpose of this paper is to introduce our MPI Python binding, called mpiPython, which follows the MPI standard communication API. In the first part of the paper, we discuss the design issues of the mpiPython API and its development. In the second part, we discuss node and parallel performance, comparing mpiPython to other MPI bindings on a Linux cluster and showing that mpiPython achieves quite acceptable performance.
{"title":"mpiPython: A Robust Python MPI Binding","authors":"Hee-Cheon Park, Joshus DeNio, Jeongyun Choi, Hanku Lee","doi":"10.1109/ICICT50521.2020.00023","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00023","url":null,"abstract":"For the last two decades, Python has become one of the most popular programming languages and been used to develop and analyze data-intensive scientific and engineering applications and in the areas such as Bigdata Analytics, Social Media, Data Science, Physics, Psychology, Healthcare, Political Science, etc. Moreover, demand of supporting Python data-parallel applications for those areas is rapidly growing. There have been international efforts to produce a message passing interface for Python bindings to support parallel computing, but specific challenges still remain to improve Python bindings. The main purpose of this paper is to introduce our MPI Python binding, called mpiPython, with the MPI standard communication API. In this paper, we first will discuss the design issues of the mpiPython API, associated with its development. In the second part of the paper, we will discuss node/parallel performance to compare mpiPython to other MPI bindings on a Linux cluster and can expect mpiPython achieves quite acceptable performance.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"94 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134435268","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Performance Analysis of Keystroke Dynamics Using Classification Algorithms
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00027
Alaa Darabseh, Doyel Pal
Authentication is the process of verifying the identity of a user. Biometric authentication assures user identity by identifying users' physiological or behavioral traits. Keystroke dynamics is a behavioral biometric based on a user's typing pattern; it can be used to authenticate legitimate users by their unique typing style on the keyboard. From a pattern recognition point of view, user authentication using keystroke dynamics is a challenging task. It can be accomplished using classification algorithms, both two-class and one-class. In this paper, we study and evaluate the effectiveness of one-class classification algorithms relative to two-class classification algorithms for keystroke dynamics authentication systems. We implemented and evaluated 18 classification algorithms (both two-class and one-class) from the keystroke dynamics and pattern recognition literature. Our experiments are evaluated on 28 subjects, with a total of 378 unique comparisons for each classifier. The results show that the top-performing one-class classifiers are not very different from the two-class classifiers and can be considered for use in real-world authentication systems.
{"title":"Performance Analysis of Keystroke Dynamics Using Classification Algorithms","authors":"Alaa Darabseh, Doyel Pal","doi":"10.1109/ICICT50521.2020.00027","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00027","url":null,"abstract":"Authentication is the process of verifying the identity of a user. Biometric authentication assures user identity by identifying users physiological or behavioral traits. Keystroke dynamics is a behavioral biometric based on users typing pattern. It can be used to authenticate legitimate users based on their unique typing style on the keyboard. From a pattern recognition point of view, user authentication using keystroke dynamics is a challenging task. It can be accomplished by using classification algorithms - two-class and one-class classification algorithms. In this paper, we study and evaluate the effectiveness of using the one-class classification algorithms over the two-class classification algorithms for keystroke dynamics authentication system. We implemented and evaluated 18 classification algorithms (both two-class and one-class) from the literature of keystroke dynamics and pattern recognition. The result of our experiments is evaluated using 28 subjects with the total of 378 unique comparisons for each classifier. Our results show that the top-performing classifiers of one-class are not very different from two-class classifiers and can be considered to use in the real-world authentication systems.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133888892","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coronary Artery Disease Diagnosis Using Feature Selection Based Hybrid Extreme Learning Machine
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00060
Afzal Hussain Shahid, M. Singh, Bishwajit Roy, Aashish Aadarsh
Coronary artery disease (CAD) is the most common cardiovascular disease (CVD) and causes millions of deaths worldwide due to heart failure, heart attack, and angina. The symptoms of CAD do not appear in the early stages of the disease, and it can lead to deadly conditions; therefore, accurate and early diagnosis of CAD is necessary to take appropriate and timely action to prevent or minimize such conditions. Angiography, the most accurate method for diagnosing CAD, is often used by clinicians, but it is an invasive, costly procedure that may cause side effects. Therefore, researchers are trying to develop alternative diagnostic modalities for the efficient diagnosis of CAD, and to that end machine learning and data mining techniques have been widely employed. This paper proposes and develops a hybrid particle swarm optimization based extreme learning machine (PSO-ELM) for the diagnosis of CAD using the publicly available Z-Alizadeh Sani dataset. To enhance the performance of the proposed model, the Fisher feature selection algorithm is used to find a more discriminative feature subset. During training, the PSO algorithm is used to calibrate the ELM input weights and hidden biases. Further, the performance of the proposed model is compared with the basic ELM in terms of accuracy, Pearson correlation coefficient (R2), and root mean square error (RMSE) goodness-of-fit functions. The results show that the proposed model performs better than the basic ELM, and the obtained CAD classification performance in terms of sensitivity, accuracy, specificity, and F1-measure is competitive with known approaches in the literature.
{"title":"Coronary Artery Disease Diagnosis Using Feature Selection Based Hybrid Extreme Learning Machine","authors":"Afzal Hussain Shahid, M. Singh, Bishwajit Roy, Aashish Aadarsh","doi":"10.1109/ICICT50521.2020.00060","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00060","url":null,"abstract":"Coronary artery disease (CAD) is the most common cardiovascular disease (CVD) that cause millions of deaths worldwide due to heart failure, heart attack, and angina. The symptoms of the CAD do not appear in the early stage of the disease and it causes deadly conditions; therefore, accurate and early diagnosis of CAD is necessary to take appropriate and timely action for preventing or minimizing such conditions. Angiography, being the most accurate method for diagnosis of CAD, is often used by the clinicians to diagnose the CAD but this is an invasive procedure, costly, and may cause side effects. Therefore, researchers are trying to develop alternative diagnostic modalities for the efficient diagnosis of CAD. To that end, machine learning and data mining techniques have been widely employed. This paper proposes and develops hybrid Particle swarm optimization based Extreme learning machine (PSO-ELM) for diagnosis of CAD using the publicly available Z-Alizadeh sani dataset. To enhance the performance of the proposed model, a feature selection algorithm, namely Fisher, is used to find more discriminative feature subset. In the training period, the PSO algorithm is used to calibrate the ELM input weights and hidden biases. Further, the performance of the proposed model is compared with the basic ELM in terms of accuracy, Pearson correlation coefficient (R2) and Root mean square error (RMSE) goodness-of-fit functions. The results show that the performance of the proposed model is better than the basic ELM. The obtained CAD classification performance in terms of sensitivity, accuracy, specificity, and F1-measure is competitive to the known approaches in the literature.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"143 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121421542","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An IT Service Management Methodology for an Electoral Public Institution
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00041
Mario Barcelo-Valenzuela, Carlos Maximiliano Leal-Pompa, Gerardo Sanchez-Schmitz
This paper presents an adaptation of the Information Technology Infrastructure Library (ITIL) standards and best practices, drawn from its service life cycle, in the IT department (ITD) of a Local Electoral Public Institution (LEPI) in Mexico. These types of autonomous public organizations are in charge of organizing elections in each state. The ITD is responsible for IT problem solving and for improvement projects related to elections and other needs of the organization. Most of the work is generated from user needs as they arise, but there is a lack of Standard Operating Procedures (SOPs), which prevents continuous monitoring, improvement, and follow-through. The main challenge in providing quality IT services in electoral institutions of this nature is adapting to unexpected changes in regulations. Service requests are caused by untimely changes arising from agreements and resolutions of the institutes and courts; this creates a work environment in which sudden decision-making is encouraged in IT solutions management, which is why an IT strategy aligned with the strategic objectives of the LEPI is crucial to guarantee continuous improvement in local democratic functioning. There is limited research on the application of standards and best practices in IT services for the public sector in Mexico; this methodology can be implemented in each of the 32 IT departments within the country.
{"title":"An IT Service Management Methodology for an Electoral Public Institution","authors":"Mario Barcelo-Valenzuela, Carlos Maximiliano Leal-Pompa, Gerardo Sanchez-Schmitz","doi":"10.1109/ICICT50521.2020.00041","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00041","url":null,"abstract":"The document presents an adaptation of the Information Technology Infrastructure Library (ITIL) standards and best practices provided from its service life cycle in the IT department (ITD) of a Local Electoral Public Institution (LEPI) in Mexico. These type of autonomous and public organizations are in charge of organizing elections in each state. The ITD is in charge of IT problem solving and improvement projects related to elections and other needs of the organization. Most of the work is generated from user needs as they arise, but there's a lack of Standard Operating Procedures (SOPs) which prevents continuous monitoring, improvement and follow through. The main challenge to providing quality IT services in Electoral Institutions of this nature, is their adaptability to unexpected changes that occur in regulations. Service requests are caused by untimely changes arising from agreements and resolutions of Institutes and Court; this creates a work environment in which sudden decision-making is encouraged in terms of IT solutions management, which is why having an IT strategy that aligns with the strategic objectives of LEPI is crucial to guarantee a continuous improvement in the local democratic functioning. There is limited research on the application of standards and best practices in IT services for the public sector in Mexico, this methodology can be implemented in each of the 32 IT departments within the country.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"40 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123991953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Big Data Pipeline with ML-Based and Crowd Sourced Dynamically Created and Maintained Columnar Data Warehouse for Structured and Unstructured Big Data
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00018
K. Ghane
Existing big data platforms ingest data through distributed processing frameworks and store it in a data lake. Architectures such as Lambda and Kappa address the real-time and batch processing of data, and such systems provide real-time analytics on the raw data and delayed analytics on the curated data. Denormalizing the data and creating and maintaining a columnar dimensional data warehouse is usually time-consuming, with no or limited support for unstructured data. The system introduced in this paper automatically creates and dynamically maintains its data warehouse as part of its big data pipeline, in addition to its data lake. It builds its data warehouse over structured, semi-structured, and unstructured data, and it uses machine learning to identify and create dimensions. It also establishes relations among data from different data sources and creates the corresponding dimensions, and it dynamically optimizes the dimensions based on crowd-sourced data provided by end users and on query analysis.
{"title":"Big Data Pipeline with ML-Based and Crowd Sourced Dynamically Created and Maintained Columnar Data Warehouse for Structured and Unstructured Big Data","authors":"K. Ghane","doi":"10.1109/ICICT50521.2020.00018","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00018","url":null,"abstract":"The existing big data platforms take data through distributed processing platforms and store them in a data lake. The architectures such as Lambda and Kappa address the real-time and batch processing of data. Such systems provide real time analytics on the raw data and delayed analytics on the curated data. The data denormalization, creation and maintenance of a columnar dimensional data warehouse is usually time consuming with no or limited support for unstructured data. The system introduced in this paper automatically creates and dynamically maintains its data warehouse as a part of its big data pipeline in addition to its data lake. It creates its data warehouse on structured, semi-structured and unstructured data. It uses Machine Learning to identify and create dimensions. It also establishes relations among data from different data sources and creates the corresponding dimensions. It dynamically optimizes the dimensions based on the crowd sourced data provided by end users and also based on query analysis.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"250 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124751769","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Segmentation and Contextual Information Based Image Scene Interpretation: A Review
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00031
Ajay Koul, Apeksha Koul
Images of a scene contain several objects that exhibit different relationships among themselves, and interpretation is the process of understanding those relationships. Thus, scene interpretation is a scene description in which the scene models are consistent with the evidence, context information, and world knowledge. On one hand, scene interpretation is useful for extracting information about the physical world that is meant for human operators. On the other hand, it has always constituted a great challenge because of the variety of complex objects, which computer vision systems are still not fully capable of comprehending; extracting the deeper meaning of a scene also requires considerable effort. In this review paper, we summarize the methodologies proposed by various researchers for interpreting images in terms of semantic segmentation and contextual information, and we highlight their contributions and the challenges that still persist. Our analysis of the different proposed methods is also provided to draw some conclusions.
{"title":"Semantic Segmentation and Contextual Information Based Image Scene Interpretation: A Review","authors":"Ajay Koul, Apeksha Koul","doi":"10.1109/ICICT50521.2020.00031","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00031","url":null,"abstract":"The images in the scene consist of several objects that depict different relationships among themselves. Interpretation understands those relationships. Thus, scene interpretation is a scene description in which the scene models are consistent with the evidence, context information, and world knowledge. On one side, images in scene interpretation are useful in extracting the information that is related to the physical world and is meant for human operators. On the other side, it has always constituted a great challenge because of the varieties of complex objects due to which computer vision is not much capable to comprehend the information regarding the images in the scene. It also requires many efforts to extract the deeper meaning of the scene. So in review paper, we are going to summarize the methodology proposed by various researchers in terms of semantic segmentation and contextual information to interpret the images and highlight their contributions and challenges which still persist. Our analysis of the different methods proposed is also provided to draw some conclusions.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125454215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mobile Learning for Just-in-Time Knowledge Acquisition in a Workplace Environment
Pub Date: 2020-03-01 | DOI: 10.1109/ICICT50521.2020.00038
T. Alade, Ruel Welch, Andrew Robinson, Lynn Nichol
The use of mobile devices in an educational context to support learning has drawn considerable attention; however, there is relatively little systematic knowledge about how they can be used effectively as a knowledge acquisition tool in workplace environments. This paper proposes mobile learning (m-learning) as a just-in-time learning tool to support and manage ICT problem-related calls in a Science Museum (SM). Employees' intention to use m-learning is investigated using the Unified Theory of Acceptance and Use of Technology (UTAUT) model. Selected UTAUT factors, including performance expectancy, effort expectancy, social influence, and facilitating conditions, are analysed to explain the determinants of m-learning adoption at the SM. Results demonstrate that the selected UTAUT factors had a significant impact on employees' behavioral intention to use m-learning at the SM. Further examination found that age and gender moderate the relationships between the UTAUT factors. These findings present several useful implications for m-learning research and practice for ICT service desks.
{"title":"Mobile Learning for Just-in-Time Knowledge Acquisition in a Workplace Environment","authors":"T. Alade, Ruel Welch, Andrew Robinson, Lynn Nichol","doi":"10.1109/ICICT50521.2020.00038","DOIUrl":"https://doi.org/10.1109/ICICT50521.2020.00038","url":null,"abstract":"The use of mobile devices in an educational context to support learning has drawn considerable attention, however, there is relatively little systematic knowledge about how it can be used effectively as a knowledge acquisition tool in workplace environments. This paper proposes mobile learning (m-learning) as a just-in-time learning tool to support and manage ICT problem related calls in a Science Museum (SM). Employees' intention to use m-learning is investigated using the Unified Theory of Acceptance and Use of Technology (UTAUT) model. Selected UTAUT factors including performance expectancy, effort expectancy, social influence and facilitating conditions are analysed to explain the determinants of m-learning adoption at the SM. Results demonstrate that the selected UTAUT factors had a significant impact on employee's behavioral intention to use m-learning at the SM. Further examination found age and gender moderate the relationship between the UTAUT factors. These findings present several useful implications for m-learning research and practice for ICT service desks.","PeriodicalId":445000,"journal":{"name":"2020 3rd International Conference on Information and Computer Technologies (ICICT)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130451469","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}