{"title":"Narrativization in Information Systems Development","authors":"Pasi Raatikainen, Samuli Pekkola, Maria Mäkelä","doi":"10.4018/jdm.333471","DOIUrl":"https://doi.org/10.4018/jdm.333471","url":null,"abstract":"People see the world and convey their perception of it through narratives. In an information systems context, stories are told and collected as systems are developed. Requirements elicitation depends largely on communication between systems designers and users, so stories have a significant impact on how future users' needs are conceptualized. This paper presents a literature review of how stories and narratives have been considered in central IS literature, using narrative-theoretical parameters as an analytical lens. The analysis shows that explicit discussion of narratives is non-existent and that narrative characteristics are considered only partially. The result is a biased and narrow understanding of informants' needs and wishes. This may be significant for requirements elicitation because narratives are not as simple a form of communication as is usually assumed. It is proposed that a better understanding of narratives would equip systems analysts with an in-depth understanding of the nuances inherent in their communication with users.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-11-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135480524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Artificial Intelligence and Machine Learning for Job Automation","authors":"Gang Peng, R. Bhaskar","doi":"10.4018/jdm.318455","DOIUrl":"https://doi.org/10.4018/jdm.318455","url":null,"abstract":"Job automation is a critical decision that has brought about profound changes in the workplace, yet what drives job automation remains unclear. This study conducts an interdisciplinary review of five theoretical frameworks on job automation, paying particular attention to the role played by artificial intelligence and machine learning. It highlights the concepts and mechanisms underlying each framework, compares and contrasts the frameworks, and identifies challenges and opportunities of job automation. It also proposes an integrated framework on job automation that addresses the research gaps in extant frameworks, thereby contributing to research and practice on this important topic.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47109311","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"College English Intelligent Writing Score System Based on Big Data Analysis and Deep Learning Algorithm","authors":"Fei Qin","doi":"10.4018/jdm.314561","DOIUrl":"https://doi.org/10.4018/jdm.314561","url":null,"abstract":"With the development of technologies such as big data analysis and deep learning, industries across sectors have begun to integrate them and thereby advance. This paper presents an intelligent writing scoring system for college English teaching that combines big data analysis with a deep learning discrimination algorithm. From 2015 to 2022, the number of college students taking English exams increased yearly, by more than 50% overall. The system therefore proposes a text vector calculation method: after an essay is weighted by a weight function, matching samples are found in the text set, a deep learning discrimination algorithm evaluates the matched text, and a final score is derived from the essay's content quality, semantic coherence, text readability, and other aspects. Compared with traditional manual scoring, this approach is more convenient, quick, concise, and effective. The system is significant for improving the efficiency of teaching English writing in college.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-11-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41382831","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
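The matching-based scoring pipeline described in this record can be sketched in a few lines. This is a minimal illustration only: the TF-style weight function, cosine matching, and nearest-sample scoring rule are assumptions standing in for the paper's unspecified weight function and deep learning discrimination algorithm.

```python
import math
from collections import Counter

def text_vector(text, idf):
    # Term-frequency vector weighted by a per-word weight (IDF-style).
    tf = Counter(text.lower().split())
    return {w: c * idf.get(w, 1.0) for w, c in tf.items()}

def cosine(u, v):
    # Cosine similarity between two sparse vectors.
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def score_essay(essay, scored_samples, idf):
    # Find the best-matching sample in the scored text set and use
    # its score as the estimate for the new essay.
    vec = text_vector(essay, idf)
    best = max(scored_samples,
               key=lambda s: cosine(vec, text_vector(s[0], idf)))
    return best[1]
```

In a full system the nearest-sample lookup would be replaced by the learned discrimination model; the weighted-vector-then-match structure is the part this sketch reflects.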
{"title":"Deep Convolutional Neural Networks With Transfer Learning for Automobile Damage Image Classification","authors":"Xiaoguang Tian, Henry Han","doi":"10.4018/jdm.309738","DOIUrl":"https://doi.org/10.4018/jdm.309738","url":null,"abstract":"Deep learning models are more capable than traditional machine learning models of handling the large and complex datasets that commonly arise in the insurance industry. In this study, transfer learning was employed to build and optimize a simulated automobile damage assessment system. Several classic deep learning methods were applied to extract features from original and augmented automobile damage images. Traditional machine learning and cross-validation techniques were then applied to train and validate the system. The proposed deep learning model demonstrated advantages over traditional machine learning models in feature extraction and accuracy. Deep learning approaches fused with logistic regression and support vector machines were found to perform as well as those with artificial neural networks under two simulated scenarios. With the proposed method, automobile damage images can be evaluated automatically for insurance adjustment purposes based on the acquired input. Insurers can thus automate the claim and adjustment process, achieving cost and time savings.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41312529","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
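The transfer-learning recipe in this record — extract features with a fixed pretrained network, then train a classical classifier head on those features — can be sketched as follows. This is a toy stand-in, not the authors' code: the frozen random ReLU projection plays the role of a pretrained CNN backbone, and the logistic-regression head is one of the fused classifiers the abstract mentions.

```python
import math
import random

random.seed(0)

# Stand-in for a pretrained CNN backbone: a frozen ReLU random projection.
# In a real system this would be a pretrained network with its classifier
# head removed; the random weights here are purely illustrative.
DIM_IN, DIM_FEAT = 8, 4
W_FROZEN = [[random.gauss(0, 1) for _ in range(DIM_IN)]
            for _ in range(DIM_FEAT)]

def extract_features(x):
    # Frozen backbone: these weights are never updated.
    return [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in W_FROZEN]

def train_head(data, epochs=200, lr=0.5):
    # Transfer-learning step: train only a logistic-regression head
    # on the frozen features, via plain SGD on the log-loss.
    w, b = [0.0] * DIM_FEAT, 0.0
    for _ in range(epochs):
        for x, y in data:
            f = extract_features(x)
            z = sum(wi * fi for wi, fi in zip(w, f)) + b
            p = 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, z))))
            g = p - y  # gradient of the log-loss w.r.t. z
            w = [wi - lr * g * fi for wi, fi in zip(w, f)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    f = extract_features(x)
    return 1 if sum(wi * fi for wi, fi in zip(w, f)) + b > 0 else 0
```

The design point is that only the small head is trained, which is why fusing frozen deep features with logistic regression or an SVM can match a full neural classifier at much lower training cost.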
{"title":"Circuit Implementation of Respiratory Information Extracted from Electrocardiograms","authors":"Shih-Yi Cheng, Jinbao Zhang, Zhan Gao, Jiehua Wang","doi":"10.4018/jdm.314211","DOIUrl":"https://doi.org/10.4018/jdm.314211","url":null,"abstract":"Breathing is an important physiological process in the human body. The wavelet transform method can extract respiratory information from electrocardiogram (ECG) data; thus, the authors designed an integrated circuit for ECG-derived respiration (EDR). They propose a discrete wavelet transform (DWT) EDR algorithm based on an analysis of heartbeat frequency and respiration, and verified the algorithm in both the time domain and the frequency domain using MATLAB. Next, the DWT EDR digital circuit was designed using the QUARTUS software. Finally, they used a field-programmable gate array (FPGA) for downloading and simulation, and verified the designed circuits with a logic analyzer by comparing the waveform obtained from the EDR circuit with the waveform obtained from the wavelet transform EDR processing in MATLAB. The experimental results showed that the circuit can extract respiratory information from ECG data.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47665298","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
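The core EDR idea in this record — repeated wavelet approximation discards fast QRS activity and keeps the slow respiratory drift — can be sketched with a Haar DWT. The Haar wavelet and the decomposition depth are illustrative assumptions; the paper does not state which wavelet or how many levels its circuit uses.

```python
import math

def haar_dwt(signal):
    # One level of the Haar DWT: scaled pairwise sums give the
    # approximation, scaled pairwise differences give the detail.
    approx = [(signal[i] + signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / math.sqrt(2)
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def edr(ecg, levels=7):
    # ECG-derived respiration: keep only the approximation branch.
    # Each level halves the sample rate, so a deep approximation
    # retains the slow respiratory baseline and averages away the
    # fast cardiac activity.
    a = ecg
    for _ in range(levels):
        a, _ = haar_dwt(a)
    return a
```

The hardware version maps each level to a simple add/subtract-and-downsample stage, which is what makes the DWT attractive for an FPGA implementation.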
{"title":"Research on Reverse Skyline Query Algorithm Based on Decision Set","authors":"Lan Huang, Yuanwei Zhao, P. Mestre, Laipeng Han, Kangping Wang, Wenjuan Gao, Rui Zhang","doi":"10.4018/jdm.313971","DOIUrl":"https://doi.org/10.4018/jdm.313971","url":null,"abstract":"The reverse skyline query is an extension of the classical skyline query and is widely used in decision support for e-business. The burst of big data in e-business challenges the classical algorithms for such queries. This paper provides a novel definition of the decision set and a decision-set-based reverse skyline query method, called DRS, built on double-layer R-tree indexing in a map-reduce manner. Theoretical proofs are provided for the correctness and complexity of the DRS algorithm. Experiments on several large data sets are presented and analyzed to illustrate the applicability of DRS and show that it outperforms state-of-the-art reverse skyline query methods.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"41507217","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
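For readers unfamiliar with the query this record's method accelerates, here is a brute-force reference for reverse skyline semantics. The bichromatic product/customer formulation and the dynamic-dominance definition follow the standard skyline literature; they are assumptions about the setting, not a reproduction of the paper's DRS index.

```python
def dyn_dominates(a, b, p):
    # a dynamically dominates b with respect to reference point p:
    # a is at least as close to p in every dimension and strictly
    # closer in at least one.
    strict = False
    for ai, bi, pi in zip(a, b, p):
        da, db = abs(ai - pi), abs(bi - pi)
        if da > db:
            return False
        if da < db:
            strict = True
    return strict

def reverse_skyline(products, customers, q):
    # Customer c belongs to the reverse skyline of query product q
    # iff no existing product dynamically dominates q w.r.t. c —
    # i.e., q would appear in c's dynamic skyline, so c is a
    # potential buyer of q.
    return [c for c in customers
            if not any(dyn_dominates(t, q, c) for t in products if t != q)]
```

This O(|products| x |customers|) scan is exactly the cost that index-based methods such as DRS avoid on large data sets.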
{"title":"Measuring the Determining Factors of Financial Development of Commercial Banks in Selected SAARC Countries","authors":"Arodh Lal Karn, G. Bagale, Bhavana Raj Kondamudi., D. Srivastava, R. Gupta, Sudhakar Sengan","doi":"10.4018/jdm.311092","DOIUrl":"https://doi.org/10.4018/jdm.311092","url":null,"abstract":"Traditional banks face the issue of risk diversification, which they address as they evolve into financial institutions. The present study therefore investigates banking and off-balance sheet (OBS) risks and regulatory changes in certain long-established South Asian (SA) banks, and examines the persistence of OBS activities in the long run. For these research goals, two estimators are applied: fixed effects (FE) and the generalized method of moments (GMM). Using FE, the researchers estimate entity and time effects to capture financial shocks and other time-related factors affecting the SA countries. The majority of the findings support a constant market theory describing the performance of SA banks in assessing OBS-related risks. Banks in SA also seem to follow market regulation and TT in capital requirements, which will incentivize banks to take too much risk in off-balance sheet activities (OBSA). The findings apply to bank-related risks, pressure from regulatory restructuring, and dangers from systematic factors, and are beneficial to policymakers and practitioners.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49092300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
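The fixed-effects (FE) estimator named in this record can be illustrated with the within transformation on a toy panel. This is a single-regressor sketch under assumed data; the study's actual specification, variables, and GMM companion estimator are not reproduced here.

```python
def fixed_effects_beta(panel):
    # Within (FE) estimator for y = a_i + beta * x + e: demeaning x
    # and y per entity sweeps out the entity fixed effects a_i, and
    # pooled OLS on the demeaned data recovers the common slope beta.
    xs, ys = [], []
    for obs in panel.values():  # panel: {entity: [(x, y), ...]}
        mx = sum(x for x, _ in obs) / len(obs)
        my = sum(y for _, y in obs) / len(obs)
        xs.extend(x - mx for x, _ in obs)
        ys.extend(y - my for _, y in obs)
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Because the entity means absorb any time-invariant bank characteristic, the FE slope is unaffected by the level differences between banks, which is why the estimator suits cross-country panels with heterogeneous institutions.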
{"title":"Emotional and Rational Components in Software Testing Service Evaluation","authors":"C. Onita, J. Dhaliwal, Xihui Zhang","doi":"10.4018/jdm.313969","DOIUrl":"https://doi.org/10.4018/jdm.313969","url":null,"abstract":"This research investigates how individual emotional and rational components of software testing service evaluations impact behavioral intentions associated with the software testing service, and how specific, theory-driven service characteristics (complexity, proximity, and output specificity) impact the emotional and rational components of the software testing service evaluation. A controlled experiment is used, and the results indicate that (1) both emotional and rational components of software testing service evaluation have significant impacts on behavioral intentions associated with the software testing service, (2) the specificity of testing service output impacts both the emotional and rational evaluations of the software testing service, (3) the complexity of the testing service task only influences the emotional component, and (4) the proximity between the testing service provider and recipient has no significant impact on the emotional evaluation of the service.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44188077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"A Quantitative Function for Estimating the Comparative Values of Software Test Cases","authors":"","doi":"10.4018/jdm.299559","DOIUrl":"https://doi.org/10.4018/jdm.299559","url":null,"abstract":"Software testing is becoming more critical to ensure that software functions properly. Although the time, effort, and funds invested in software testing activities have increased significantly, these resources still cannot meet the increasing demand of software testing. Managers must allocate testing resources to the test cases that are most effective in uncovering important defects. This study builds a value function that can quantify the relative value of a test case and thus plays a significant role in prioritizing test cases, addressing resource constraints in software testing, and serving as a foundation of AI for software testing. The authors conducted a Monte Carlo simulation to exhibit the application of the final value function.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85385109","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Bringing Credibility Through Portals","authors":"Yi-Chen Lee, Bo-Chun Shen, C. Sia","doi":"10.4018/jdm.313968","DOIUrl":"https://doi.org/10.4018/jdm.313968","url":null,"abstract":"This study explores how the quality and credibility of information on healthcare websites can be enhanced through the simultaneous delivery of multiple information sources. Such information integration is achieved using a new experimental Web API called Portals. Accessing multiple information sources is salient given the uncertainty surrounding the overwhelming amount of online health information that can be incorrect or misleading. When readers seek health information, the almost instantaneous verification enabled by multiple-source assessment is critical. This research provides novel insights that establish the value of reliably integrating health content from multiple sources (websites). The behavioral differences that arise when people encounter consistent or inconsistent information from multiple-website integration are investigated, and the implications of the findings are discussed. The findings offer guidance on how health-related websites can help online seekers access higher-quality healthcare information.","PeriodicalId":51086,"journal":{"name":"Journal of Database Management","volume":null,"pages":null},"PeriodicalIF":2.6,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43186488","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"Computer Science","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}