In this paper, we focus on temporal property graphs, that is, property graphs whose labeled nodes and edges, as well as the values of the properties associated with them, may change over time. A key challenge in studying temporal graphs lies in detecting interesting events in their evolution, defined as time intervals of significant stability, growth, or shrinkage. To address this challenge, we build aggregated graphs, where nodes are grouped based on the values of their properties, and seek events at the aggregated level. To locate such events, we propose a novel approach based on unified evolution skylines. A unified evolution skyline assesses the significance of an event in conjunction with the duration of the interval in which the event occurs. Significance is measured by a set of counts, where each count refers to the number of graph elements that remain stable, are created, or are deleted, for a specific property value. Lastly, we share experimental findings that highlight the efficiency and effectiveness of our approach.
Evangelia Tsoukanara, Georgia Koloniari, Evaggelia Pitoura, "Skyline-based Exploration of Temporal Property Graphs," Information Systems Frontiers, 2024-06-26. DOI: 10.1007/s10796-024-10505-x
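The skyline idea underlying the abstract above can be illustrated with a generic Pareto-dominance filter. This is a minimal sketch, not the authors' unified evolution skyline (which additionally couples significance counts with interval duration); the function name and tuple encoding are chosen here purely for illustration.

```python
def skyline(points):
    """Return the Pareto-optimal subset of score tuples (maximization).

    A point p dominates q if p >= q in every dimension and p > q in at
    least one. The skyline keeps exactly the non-dominated points.
    """
    def dominates(p, q):
        return all(a >= b for a, b in zip(p, q)) and \
               any(a > b for a, b in zip(p, q))

    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]


# Toy example: tuples of (significance, duration); (1, 1) is dominated
# by (2, 2), so it drops out of the skyline.
events = [(3, 1), (2, 2), (1, 3), (1, 1)]
print(skyline(events))  # [(3, 1), (2, 2), (1, 3)]
```

For large inputs the quadratic all-pairs check would be replaced by a sorted or divide-and-conquer skyline algorithm; the sketch only conveys the dominance semantics.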
Executing queries in isolation forgoes the opportunity to reuse intermediate results, wasting computational resources. Multi-Query Optimization (MQO) addresses this challenge by devising a shared execution strategy across queries, with two commonly used strategies: batched and cached. Both strategies are known to improve performance, but hardly any study explores their combination. In this work, we explore such a hybrid MQO, combining batching (Shared Sub-Expression) and caching (Materialized View Reuse) techniques. Our hybrid-MQO system both merges batched query plans and caches intermediate results, so that any new query is given a path within the previous plan and can reuse cached results. Since caching is a key component for improving performance, we measure the impact of common cache replacement policies such as FIFO, LRU, MRU, and LFU. Our results show LRU to be optimal for our use case, and we use it in all subsequent evaluations. To study the influence of batching, we vary derivability, a factor that represents the similarity of the results within a query batch. Similarly, we vary cache sizes to study the influence of caching. Moreover, we study the role of different database operators in the performance of our hybrid system. The results suggest that, depending on the individual operators, our hybrid method ranges from a 4x speed-up to a 2x slowdown relative to using MQO techniques in isolation. Furthermore, our results show that workloads with similar queries and a generously sized cache benefit from our hybrid method, with an observed speed-up of 2x over sequential execution in the best case.
Bala Gurumurthy, Vasudev Raghavendra Bidarkar, David Broneske, Thilo Pionteck, Gunter Saake, "Exploiting Shared Sub-Expression and Materialized View Reuse for Multi-Query Optimization," Information Systems Frontiers, 2024-06-25. DOI: 10.1007/s10796-024-10506-w
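Since LRU is singled out above as the best-performing replacement policy, a minimal LRU cache for materialized intermediate results may help fix ideas. The class name and interface below are hypothetical, not taken from the paper's system; it is the textbook LRU mechanism on which a materialized-view cache could be built.

```python
from collections import OrderedDict


class LRUViewCache:
    """Minimal LRU cache mapping a sub-expression key to its
    materialized result. Least-recently-used entries are evicted
    first once capacity is exceeded (illustrative sketch only)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._views = OrderedDict()  # key -> materialized result

    def get(self, key):
        if key not in self._views:
            return None
        self._views.move_to_end(key)  # mark as most recently used
        return self._views[key]

    def put(self, key, result):
        if key in self._views:
            self._views.move_to_end(key)
        self._views[key] = result
        if len(self._views) > self.capacity:
            self._views.popitem(last=False)  # evict least recently used


cache = LRUViewCache(capacity=2)
cache.put("scan_orders", [1, 2, 3])
cache.put("join_ol", [4, 5])
cache.get("scan_orders")        # touch: now most recently used
cache.put("agg_sales", [6])     # evicts "join_ol"
print(cache.get("join_ol"))     # None
```

Under a FIFO policy the same insertion sequence would instead evict "scan_orders", which is why the touched-entry behaviour above is the distinguishing feature of LRU.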
Pub Date: 2024-06-24. DOI: 10.1007/s10796-024-10487-w
Jin Sik Kim, Jinsoo Yeo, Hemant Jain
This paper examines the potential for collaboration between countries with differential resource endowments to advance AI innovation and achieve mutual economic benefits. Our framework juxtaposes economies with a comparative advantage in AI-capital and those with a comparative advantage in tech-labor, analyzing how these endowments can lead to enhanced comparative advantages over time. Through the application of various production functions and the use of Edgeworth boxes, our analysis reveals that strategic collaboration based on comparative advantage can yield Pareto improvements for both developed and developing countries. Nonetheless, this study also discusses the challenges of uneven benefit distribution, particularly the risk of “brain drain” from developing nations. Contributing to the discourse on the economics of AI and international collaboration, this study highlights the importance of thoughtful strategic planning to promote equitable and sustainable AI development worldwide.
Jin Sik Kim, Jinsoo Yeo, Hemant Jain, "An Economic Framework for Creating AI-Augmented Solutions Across Countries Over Time," Information Systems Frontiers, 2024-06-24. DOI: 10.1007/s10796-024-10487-w
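The comparative-advantage reasoning above can be made concrete with a toy production function. The sketch below assumes a Cobb-Douglas form and invented endowment numbers purely for illustration; the paper itself applies several production functions and Edgeworth-box analysis rather than this specific parameterization.

```python
def cobb_douglas(A, capital, labor, alpha=0.5):
    """Output under a Cobb-Douglas production function
    Y = A * K^alpha * L^(1 - alpha), with capital share alpha."""
    return A * (capital ** alpha) * (labor ** (1 - alpha))


# Two stylized economies with opposite endowments produce the same
# output in autarky, so gains must come from how the inputs are
# combined and exchanged, not from total endowment size alone.
ai_capital_rich = cobb_douglas(A=1.0, capital=16, labor=4)   # 8.0
tech_labor_rich = cobb_douglas(A=1.0, capital=4, labor=16)   # 8.0
print(ai_capital_rich, tech_labor_rich)
```

With equal outputs at unequal marginal products, reallocating a unit of capital toward the labor-rich economy (and labor the other way) raises joint output, which is the Pareto-improvement logic the framework formalizes.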
Pub Date: 2024-06-24. DOI: 10.1007/s10796-024-10507-9
Tiago Filipe Rodrigues Ribeiro, Fernando José Mateus da Silva, Rogério Luís de Carvalho Costa
Forest fires have far-reaching consequences, threatening human life, economic stability, and the environment. Understanding the dynamics of forest fires is crucial, especially in high-incidence regions. In this work, we apply deep networks to simulate the spatiotemporal progression of the area burnt in a forest fire. We tackle the region interpolation problem using a Conditional Variational Autoencoder (CVAE) model that generates in-between representations of the evolution of the burnt area. We also apply a CVAE model to forecast fire propagation, estimating the burnt area at distinct horizons and propagation stages. We evaluate our approach against other established techniques using real-world data. The results demonstrate that our method is competitive in geometric similarity metrics and exhibits superior temporal consistency for in-between representation generation. In the context of burnt area forecasting, our approach achieves scores of 90% for similarity and 99% for temporal consistency. These findings suggest that CVAE models may be a viable alternative for modeling the spatiotemporal evolution of 2D moving regions such as the burnt area of a forest fire.
Tiago Filipe Rodrigues Ribeiro, Fernando José Mateus da Silva, Rogério Luís de Carvalho Costa, "Modelling forest fire dynamics using conditional variational autoencoders," Information Systems Frontiers, 2024-06-24. DOI: 10.1007/s10796-024-10507-9
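A CVAE is trained by maximizing an evidence lower bound whose regularizer is a KL divergence between the approximate posterior and the prior. As one small, self-contained ingredient of that objective, the helper below evaluates KL(N(mu, diag(exp(logvar))) || N(0, I)) in closed form; this is a generic VAE building block, not code from the paper.

```python
import math


def gaussian_kl(mu, logvar):
    """Closed-form KL divergence between a diagonal Gaussian
    N(mu, diag(exp(logvar))) and the standard normal N(0, I):
    0.5 * sum(exp(logvar) + mu^2 - 1 - logvar), summed over dims."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, logvar))


# The KL term vanishes exactly when the encoder already outputs the
# prior, and grows as the posterior drifts away from it.
print(gaussian_kl([0.0, 0.0], [0.0, 0.0]))  # 0.0
print(gaussian_kl([1.0], [0.0]))            # 0.5
```

During training this term is added to a reconstruction loss over the decoded region masks, with the conditioning signal (e.g. a timestamp or propagation stage) fed to both encoder and decoder.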
Pub Date: 2024-06-12. DOI: 10.1007/s10796-024-10499-6
Bianca-Ştefania Munteanu, Alexandra Murariu, Mărioara Nichitean, Luminiţa-Gabriela Pitac, Laura Dioşan
Breast cancer represents one of the leading causes of death among women, with 1 in 39 (around 2.5%) of those affected losing their lives annually at the global level. According to the American Cancer Society, it is the second most lethal type of cancer in females, preceded only by lung cancer. Early diagnosis is crucial in increasing the chances of survival. In recent years, the incidence rate has increased by 0.5% per year, with 1 in 8 women at risk of developing a tumor during their lifetime. Despite technological advances, there are still difficulties in identifying, characterizing, and accurately monitoring malignant tumors. The main focus of this article is the computerized diagnosis of breast cancer. The main objective is to solve this problem using intelligent algorithms built with artificial neural networks, involving three important steps: augmentation, segmentation, and classification. The experiments were conducted on a publicly available benchmark dataset of medical ultrasound images collected from approximately 600 female patients. The results of the experiment are close to the goal set by our team: the final accuracy obtained is 86%.
Bianca-Ştefania Munteanu, Alexandra Murariu, Mărioara Nichitean, Luminiţa-Gabriela Pitac, Laura Dioşan, "Value of Original and Generated Ultrasound Data Towards Training Robust Classifiers for Breast Cancer Identification," Information Systems Frontiers, 2024-06-12. DOI: 10.1007/s10796-024-10499-6
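The first of the three steps named above, augmentation, can be sketched in a few lines. The toy function below applies random horizontal/vertical flips to a grayscale image given as a list of rows; it is illustrative only and not the paper's actual augmentation pipeline, which operates on ultrasound images and also includes generated data.

```python
import random


def augment(image, seed=None):
    """Toy augmentation for a grayscale image given as a list of rows:
    each of a horizontal and a vertical flip is applied with
    probability 0.5. Flips preserve pixel values and image shape,
    which is what makes them safe label-preserving augmentations."""
    rng = random.Random(seed)
    if rng.random() < 0.5:
        image = [row[::-1] for row in image]  # horizontal flip
    if rng.random() < 0.5:
        image = image[::-1]                   # vertical flip
    return image


img = [[1, 2],
       [3, 4]]
print(augment(img, seed=7))
```

In a real pipeline such transforms would be composed with intensity jitter and elastic deformations and applied on the fly during training, before the segmentation and classification stages.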
Pub Date: 2024-06-11. DOI: 10.1007/s10796-024-10501-1
Hendrik de Waal, Serge Nyawa, Samuel Fosso Wamba
This paper shows how transactional bank account data can be used to predict and to prevent financial distress in consumers. Machine learning methods were used to identify the most significant transactional behaviours that cause financial distress. We show that Random Forest outperforms the other machine learning models when predicting the financial distress of a consumer. We find that fees and interest paid stand out as primary contributors to financial distress, emphasizing the significance of financial charges and interest payments in gauging individuals’ financial vulnerability. Using Local Interpretable Model-agnostic Explanations (LIME), we study the marginal effect of transactional behaviours on the probability of being in financial distress and assess how the different variables selected across the data point selection sets influence each case. We also propose prescriptions that can be communicated to clients to help them improve their financial wellbeing. This research used data from a major South African bank.
Hendrik de Waal, Serge Nyawa, Samuel Fosso Wamba, "Consumers’ Financial Distress: Prediction and Prescription Using Interpretable Machine Learning," Information Systems Frontiers, 2024-06-11. DOI: 10.1007/s10796-024-10501-1
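LIME fits a local surrogate model around each individual prediction. As a simpler, related interpretability idea that fits in a few lines, the sketch below computes a permutation-style feature importance (here with a deterministic column reversal standing in for a random shuffle). This is plainly not the paper's LIME method, just a compact illustration of measuring how much a model's accuracy depends on one feature.

```python
def permutation_importance(predict, X, y, j):
    """Drop in accuracy when feature j is scrambled: a simple global
    importance measure. `predict` maps one feature row to a label.
    Reversing the column is a deterministic stand-in for shuffling."""
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)

    base = accuracy(X)
    scrambled = [list(r) for r in X]
    col = [r[j] for r in scrambled][::-1]  # "shuffle" column j
    for r, v in zip(scrambled, col):
        r[j] = v
    return base - accuracy(scrambled)


# Toy model that only looks at feature 0: scrambling feature 0
# destroys all accuracy, scrambling the constant feature 1 changes
# nothing.
predict = lambda r: 1 if r[0] > 0 else 0
X = [[1, 9], [-1, 9], [1, 9], [-1, 9]]
y = [1, 0, 1, 0]
print(permutation_importance(predict, X, y, 0))  # 1.0
print(permutation_importance(predict, X, y, 1))  # 0.0
```

Unlike LIME's per-case explanations, this yields one global score per feature, which is closer in spirit to the paper's ranking of fees and interest paid as primary contributors.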
Pub Date: 2024-06-07. DOI: 10.1007/s10796-024-10492-z
Hasan Sildir, Onur Can Boy, Sahin Sarrafi
Soft sensors are used to compute real-time estimates of process variables that can otherwise be measured only in the laboratory or require expensive online measurement tools. A set of mathematical expressions is developed and trained on historical data to exploit the statistical relationship between online and offline measurements and ensure reliable prediction performance for optimization and control purposes. This study focuses on the development of a mixed-integer optimization problem that performs input selection and outlier filtering simultaneously, using rigorous algorithms during the training procedure, unlike traditional heuristic and sequential methods. Nonlinearities and nonconvexities in the optimization problem are further tailored for global optimality and computational efficiency through reformulations and piecewise linearizations, addressing the complexity of the task with additional binary variables that represent the selection of a particular input or data point. The proposed approach is implemented on actual data from two different industrial plants and compared to traditional approaches.
Hasan Sildir, Onur Can Boy, Sahin Sarrafi, "A Mixed-Integer Formulation for the Simultaneous Input Selection and Outlier Filtering in Soft Sensor Training," Information Systems Frontiers, 2024-06-07. DOI: 10.1007/s10796-024-10492-z
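The role of the binary selection variables can be sketched with a generic big-M input-selection formulation for a linear soft sensor. This is an assumed, textbook-style formulation, not the paper's exact model, which additionally introduces binaries for outlier filtering and handles nonlinearity via piecewise linearization:

```latex
\begin{aligned}
\min_{\beta,\, z}\;& \sum_{i=1}^{n} \Big( y_i - \sum_{j=1}^{p} \beta_j x_{ij} \Big)^2 \\
\text{s.t.}\;& -M z_j \le \beta_j \le M z_j, \qquad j = 1, \dots, p, \\
& \sum_{j=1}^{p} z_j \le k, \qquad z_j \in \{0, 1\},
\end{aligned}
```

where $z_j = 1$ selects input $j$ (otherwise the big-M bound forces $\beta_j = 0$), and $k$ caps the number of selected inputs. Outlier filtering follows the same pattern with a second set of binaries that switch individual data points out of the loss.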
Pub Date: 2024-06-05. DOI: 10.1007/s10796-024-10497-8
Parul Gupta, Apeksha Hooda, Anand Jeyaraj, Jonathan J.M. Seddon, Yogesh K. Dwivedi
Despite considerable research on the factors influencing the use of e-government, citizens remain apprehensive about e-government services due to concerns primarily related to trust, risk, security, and privacy. This study presents a meta-analytic structural equation modeling (MASEM) analysis of the findings reported by 68 prior empirical studies on e-government adoption. Specifically, the model examines the direct effects of trust in government, trust in the internet, perceived risk, and perceived privacy and security on e-government trust, and the impact of e-government trust on users’ behavioral intention to use e-government. The findings bear significant theoretical and practical implications.
Parul Gupta, Apeksha Hooda, Anand Jeyaraj, Jonathan J.M. Seddon, Yogesh K. Dwivedi, "Trust, Risk, Privacy and Security in e-Government Use: Insights from a MASEM Analysis," Information Systems Frontiers, 2024-06-05. DOI: 10.1007/s10796-024-10497-8
Pub Date: 2024-05-31. DOI: 10.1007/s10796-024-10495-w
Christina Khnaisser, Hind Hamrouni, David B. Blumenthal, Anton Dignös, Johann Gamper
Time and temporal constraints are implicit in most databases. To facilitate data analysis and quality assessment, a database should provide explicit operations to identify violations of temporal constraints. Against this background, the purpose of this paper is threefold: (1) we identify and formally define five common anomalies in temporal databases; (2) we propose two new relational operations that, respectively, label the anomalous tuples in a dataset and retrieve them from it; and (3) we provide three different SQL implementations of these operations for current relational database management systems. The healthcare domain is used to illustrate the usage and utility of the temporal anomalies. Finally, an experimental evaluation on real-world and synthetic data analyses the performance of the different implementations of the anomaly operators.
Christina Khnaisser, Hind Hamrouni, David B. Blumenthal, Anton Dignös, Johann Gamper, "Efficiently Labeling and Retrieving Temporal Anomalies in Relational Databases," Information Systems Frontiers, 2024-05-31. DOI: 10.1007/s10796-024-10495-w
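One concrete temporal anomaly is overlapping validity intervals for the same key. The paper defines five anomalies and implements its operators in SQL; the sketch below expresses just the overlap check in Python over (key, start, end) rows with half-open [start, end) intervals, purely as an illustration of the retrieval semantics.

```python
def overlap_anomalies(table):
    """Return pairs of rows with the same key whose validity intervals
    overlap. Rows are (key, start, end) with half-open [start, end)
    intervals, so touching intervals (end == next start) are fine."""
    by_key = {}
    for row in table:
        by_key.setdefault(row[0], []).append(row)

    found = []
    for group in by_key.values():
        group.sort(key=lambda r: r[1])          # order by start time
        for a, b in zip(group, group[1:]):
            if b[1] < a[2]:                     # next starts before prior ends
                found.append((a, b))
    return found


rows = [("x", 1, 5), ("x", 4, 8), ("y", 1, 2)]
print(overlap_anomalies(rows))  # [(('x', 1, 5), ('x', 4, 8))]
```

A labeling variant would tag each offending row instead of returning pairs, mirroring the paper's distinction between labeling anomalous tuples and retrieving them.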
Pub Date: 2024-05-27. DOI: 10.1007/s10796-024-10491-0
Priveena Thanabalan, Ali Vafaei-Zadeh, Haniruzila Hanifah, T. Ramayah
The objective of this paper is to investigate the factors that influence the adoption of Big Data Analytics (BDA) in manufacturing companies and examine the impact of BDA adoption on performance, while also considering the moderating effect of data-driven culture. An online questionnaire survey was conducted with medium and large manufacturing companies in Malaysia, resulting in a total of 267 responses collected through non-probability purposive sampling. The results show that technology complexity, perceived relative advantage, top management support, IT infrastructure and capabilities, normative pressure, and mimetic pressure are significant determinants of BDA adoption. Moreover, the adoption of BDA has a positive impact on financial and market performance, with data-driven culture moderating the relationship between BDA adoption and financial performance. This study highlights the critical factors that contribute to BDA adoption and its outcomes, offering manufacturing companies practical awareness of these factors.
Priveena Thanabalan, Ali Vafaei-Zadeh, Haniruzila Hanifah, T. Ramayah, "Big Data Analytics Adoption in Manufacturing Companies: The Contingent Role of Data-Driven Culture," Information Systems Frontiers, 2024-05-27. DOI: 10.1007/s10796-024-10491-0