Matt Baucum, Anahita Khojandi, Carole R. Myers, Lawrence M. Kessler
Substance use disorder (SUD) exacts a substantial economic and social cost in the United States, and it is crucial for SUD treatment providers to match patients with feasible, effective, and affordable treatment plans. The availability of large SUD patient datasets allows for machine learning techniques to predict patient-level SUD outcomes, yet there has been almost no research on whether machine learning can be used to optimize or personalize which treatment plans SUD patients receive. We use contextual bandits (a reinforcement learning technique) to optimally map patients to SUD treatment plans, based on dozens of patient-level and geographic covariates. We also use near-optimal policies to incorporate treatments’ time-intensiveness and cost into our recommendations, to aid treatment providers and policymakers in allocating treatment resources. Our personalized treatment recommendation policies are estimated to yield higher remission rates than observed in our original dataset, and they suggest clinical insights to inform future research on data-driven SUD treatment matching.
Optimizing Substance Use Treatment Selection Using Reinforcement Learning. ACM Transactions on Management Information Systems 14(1), 1–30. https://doi.org/10.1145/3563778. Published 2022-09-16.
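As a rough illustration of how a contextual bandit can map patient covariates to treatment arms, the sketch below uses an epsilon-greedy policy over per-arm linear reward models. The policy choice, learning rate, and covariate names are illustrative assumptions, not the authors' actual model.

```python
import random

class EpsilonGreedyBandit:
    """Minimal contextual bandit: one linear reward model per treatment arm,
    updated online; explores a random arm with probability epsilon."""

    def __init__(self, n_arms, n_features, epsilon=0.1, lr=0.05):
        self.epsilon = epsilon
        self.lr = lr
        # one weight vector per treatment arm
        self.weights = [[0.0] * n_features for _ in range(n_arms)]

    def predict(self, arm, context):
        return sum(w * x for w, x in zip(self.weights[arm], context))

    def select(self, context):
        if random.random() < self.epsilon:
            return random.randrange(len(self.weights))  # explore
        scores = [self.predict(a, context) for a in range(len(self.weights))]
        return max(range(len(scores)), key=scores.__getitem__)  # exploit

    def update(self, arm, context, reward):
        # one SGD step toward the observed reward (e.g., 1 = remission)
        err = reward - self.predict(arm, context)
        self.weights[arm] = [w + self.lr * err * x
                             for w, x in zip(self.weights[arm], context)]

random.seed(0)
bandit = EpsilonGreedyBandit(n_arms=3, n_features=4)
# hypothetical patient covariates (age band, severity, prior episodes, urban)
patient = [0.4, 0.9, 0.2, 1.0]
arm = bandit.select(patient)
bandit.update(arm, patient, reward=1.0)
```

In an offline setting like the paper's, the same policy would be evaluated against logged treatment outcomes rather than updated from live interactions.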
Yidong Chai, Hongyan Liu, Jie Xu, S. Samtani, Yuanchun Jiang, Haoxin Liu
Medical image annotation aims to automatically describe the content of medical images. It helps doctors understand the content of medical images and make better-informed decisions such as diagnoses. Existing methods mainly follow the approach used for natural images and fail to emphasize object abnormalities, which are the essence of medical image annotation. In light of this, we propose to transform medical image annotation into a multi-label classification problem that focuses directly on object abnormalities. However, extant multi-label classification studies rely on arduous feature engineering or do not handle label correlation well in medical images. To solve these problems, we propose a novel deep learning model that introduces a frequent pattern mining component and an adversarial-based denoising autoencoder component. Extensive experiments are conducted on a real retinal image dataset to evaluate the performance of the proposed model. Results indicate that the proposed model significantly outperforms image captioning baselines and multi-label classification baselines.
A Multi-Label Classification with an Adversarial-Based Denoising Autoencoder for Medical Image Annotation. ACM Transactions on Management Information Systems 14(1), 1–21. https://doi.org/10.1145/3561653. Published 2022-09-15.
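To make the multi-label framing concrete, the sketch below scores each abnormality label independently with a sigmoid and then applies a co-occurrence boost standing in for the frequent-pattern component. The label names, logits, and lift values are hypothetical, not from the paper's retinal dataset.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict_labels(logits, label_names, threshold=0.5, cooccur=None):
    """Independent per-label sigmoid scores, plus an optional co-occurrence
    boost that stands in for mined frequent label patterns."""
    scores = {name: sigmoid(z) for name, z in zip(label_names, logits)}
    if cooccur:
        for (a, b), lift in cooccur.items():
            # if abnormality a is confidently present, nudge its frequent partner b
            if scores[a] >= threshold:
                scores[b] = min(1.0, scores[b] * lift)
    return [name for name, s in scores.items() if s >= threshold]

labels = ["hemorrhage", "exudate", "microaneurysm"]
logits = [2.0, -0.3, 0.4]
# hypothetical mined pattern: hemorrhages and microaneurysms often co-occur
pattern_lift = {("hemorrhage", "microaneurysm"): 1.4}
print(predict_labels(logits, labels, cooccur=pattern_lift))
```

This is the key contrast with captioning: the model emits a label set directly instead of generating descriptive sentences.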
Artificial intelligence (AI) capabilities are increasingly common components of all socio-technical information systems that integrate human and machine actions. The impacts of AI components on the design and use of application systems are evolving rapidly as improved deep learning techniques and fresh big data sources afford effective and efficient solutions for broad ranges of applications. New goals and requirements for Human-AI System (HAIS) functions and qualities are emerging, whereas the boundaries between human and machine behaviors continue to blur. This research commentary identifies and addresses the design science research (DSR) challenges facing the field of Information Systems as the demand for human-machine synergies in Human-Artificial Intelligence Systems surges in all application areas. The design challenges of HAIS are characterized by a taxonomy of eight C's: composition, complexity, creativity, confidence, controls, conscience, certification, and contribution. By applying a design science research frame to structure and investigate HAIS design, implementation, use, and evolution, we propose a forward-thinking agenda for relevant and rigorous information systems research contributions.
Research Challenges for the Design of Human-Artificial Intelligence Systems (HAIS). A. Hevner, V. Storey. ACM Transactions on Management Information Systems 14(1), 1–18. https://doi.org/10.1145/3549547. Published 2022-08-31.
With the development of virtualization technology, cloud computing has emerged as a powerful and flexible platform for various services such as online trading. However, there are concerns about the survivability of cloud services in smart manufacturing. Most existing solutions provide a standby Virtual Machine (VM) for each running VM. However, this often leads to huge resource waste because VMs do not always run at full capacity. To reduce resource waste, we propose a smart survivability framework to efficiently allocate resources to standby VMs. Our framework contains two novel aspects: (1) a prediction mechanism that predicts the resource utilization of each VM in order to reduce the number of standby VMs; and (2) a nested virtualization technology that refines the granularity of standby VMs. We use the open-source cloud simulation platform CloudSim, with real-world data, to verify the feasibility of the proposed framework and evaluate its performance. The proposed Smart Survivable Usable Virtual Machine (SSUVM) periodically predicts the resource utilization of the VMs on Rack1. When a VM fails, the framework allocates standby resources according to the predicted result. The SSUVM receives the latest running status of the failed VM and its mirror image to recover the VM's work.
Allocation of Resources for Cloud Survivability in Smart Manufacturing. M. Nong, Lingfeng Huang, Mingtao Liu. ACM Transactions on Management Information Systems 13(1), 1–11. https://doi.org/10.1145/3533701. Published 2022-08-10.
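The core idea of sizing standby capacity to predicted load rather than provisioning one full standby VM per running VM can be sketched as follows. The moving-average predictor is a deliberately simple stand-in for the framework's prediction mechanism, and the VM names are invented.

```python
from collections import deque

class StandbyPlanner:
    """Predict each VM's near-term utilization with a moving average and
    size the shared standby pool to predicted load, not peak capacity."""

    def __init__(self, window=3):
        self.history = {}  # vm_id -> recent utilization samples (0.0-1.0)
        self.window = window

    def record(self, vm_id, utilization):
        self.history.setdefault(vm_id, deque(maxlen=self.window)).append(utilization)

    def predict(self, vm_id):
        samples = self.history.get(vm_id)
        # with no history, conservatively assume full utilization
        return sum(samples) / len(samples) if samples else 1.0

    def standby_capacity(self, vm_ids):
        # provision standby resources for predicted load instead of
        # reserving one full standby VM per running VM
        return sum(self.predict(v) for v in vm_ids)

planner = StandbyPlanner()
for u in (0.2, 0.3, 0.25):
    planner.record("vm1", u)
planner.record("vm2", 0.5)
print(planner.standby_capacity(["vm1", "vm2"]))  # 0.25 + 0.5 = 0.75
```

Under naive one-for-one standby provisioning, the same two VMs would reserve 2.0 VMs' worth of capacity instead of 0.75.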
Rongli Chen, Xiaozhong Chen, Lei Wang, Jian-Xin Li
This research takes a case study approach to show the development of a diverse adoption and product strategy distinct from the core manufacturing industry process. It explains the state of development in all aspects of smart manufacturing, via the example of ceramic circuit board manufacturing and electronic assembly, and outlines future smart manufacturing plans and processes. The research proposes two experiments using artificial intelligence and deep learning to demonstrate problems and solutions regarding methods in manufacturing and factory facilities, respectively. In the first experiment, Bayesian network inference is used to find the cause of metal residues between electronic circuits through key process and quality correlations. In the second experiment, a convolutional neural network is used to identify false defects that were over-inspected during automatic optical inspection. This improves the manufacturing process by enhancing the yield rate and reducing cost. The study's contributions are demonstrated in circuit board production. Smart manufacturing, with the application of a Bayesian network to an Internet of Things setup, has addressed the problem of residue and redundant conductors on the edge of the ceramic circuit board pattern, and has prevented leakage and high-frequency interference. The convolutional neural network and deep learning were used to improve the accuracy of the automatic optical inspection system, reduce the current manual review ratio, save labor costs, and provide defect classification as a reference for preprocess improvement.
The Core Industry Manufacturing Process of Electronics Assembly Based on Smart Manufacturing. ACM Transactions on Management Information Systems 13(1), 1–19. https://doi.org/10.1145/3529098. Published 2022-08-09.
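The first experiment's root-cause reasoning amounts to Bayesian posterior inference over candidate process causes given an observed defect. The sketch below shows the arithmetic on a two-cause toy network; the cause names, priors, and conditional probabilities are made up for illustration, not measured plant data.

```python
def posterior(prior, likelihood, evidence):
    """P(cause | evidence) over discrete causes via Bayes' rule."""
    joint = {c: prior[c] * likelihood[c][evidence] for c in prior}
    total = sum(joint.values())
    return {c: p / total for c, p in joint.items()}

# hypothetical priors and conditional probabilities for a residue defect
prior = {"etch_time_low": 0.3, "paste_viscosity_high": 0.7}
likelihood = {
    "etch_time_low":        {"residue": 0.8, "no_residue": 0.2},
    "paste_viscosity_high": {"residue": 0.3, "no_residue": 0.7},
}
# observing residue shifts belief toward the cause that best explains it
print(posterior(prior, likelihood, "residue"))
```

A full Bayesian network generalizes this to many process variables with conditional dependencies, but the update rule per node is the same.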
A. Joshi, Deepak Ranjan Nayak, Dibyasundar Das, Yudong Zhang
Recent years have witnessed a rise in employing deep learning methods, especially convolutional neural networks (CNNs), for detecting COVID-19 cases from chest CT scans. Most state-of-the-art models demand a huge number of parameters and often overfit when training samples are limited, as with chest CT data, thereby reducing detection performance. To handle these issues, this paper proposes a lightweight multi-scale CNN called LiMS-Net. LiMS-Net contains two feature learning blocks; in each block, filters of different sizes are applied in parallel to derive multi-scale features from the suspicious regions, and an additional filter is subsequently employed to capture discriminant features. The model has only 2.53M parameters and therefore requires low computational cost and memory space compared to pretrained CNN architectures. Comprehensive experiments are carried out on a publicly available COVID-19 CT dataset, and the results demonstrate that the proposed model achieves higher performance than many pretrained CNN models and state-of-the-art methods even with limited CT data. Our model achieves an accuracy of 92.11% and an F1-score of 92.59% for detection of COVID-19 from CT scans. Further, results on a relatively larger CT dataset indicate the effectiveness of the proposed model.
LiMS-Net: A Lightweight Multi-Scale CNN for COVID-19 Detection from Chest CT Scans. ACM Transactions on Management Information Systems, 1–17. https://doi.org/10.1145/3551647. Published 2022-07-27.
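The essence of a multi-scale block — filters of different sizes applied in parallel, with their responses combined — can be shown on a 1D signal without any deep learning framework. The kernel sizes, weights, and max-pooling readout below are illustrative; LiMS-Net itself operates on 2D CT slices with learned filters.

```python
def conv1d(signal, kernel):
    """Valid-mode 1D cross-correlation."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def multiscale_block(signal, kernels):
    """Apply filters of different sizes in parallel and concatenate
    their global-max responses, mimicking a multi-scale feature block."""
    return [max(conv1d(signal, k)) for k in kernels]

signal = [0.1, 0.9, 0.8, 0.1, 0.0, 0.7]
# parallel filters of sizes 2 and 3 (illustrative averaging weights)
features = multiscale_block(signal, [[0.5, 0.5], [1/3, 1/3, 1/3]])
print(features)
```

Small kernels respond to sharp local structure while larger ones summarize broader context, which is why running them in parallel captures features at multiple scales cheaply.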
Kimia Ameri, M. Hempel, H. Sharif, Juan Lopez, K. Perumalla
There is an urgent need in many critical infrastructure sectors, including the energy sector, for attaining detailed insights into cybersecurity features and compliance with cybersecurity requirements related to their Operational Technology (OT) deployments. Frequent feature changes of OT devices interfere with this need, posing a great risk to customers. One effective way to address this challenge is via a semi-automated cyber-physical security assurance approach, which enables verification and validation of the OT device cybersecurity claims against actual capabilities, both pre- and post-deployment. To realize this approach, this article presents new methodology and algorithms to automatically identify cybersecurity-related claims expressed in natural language form in ICS device documents. We developed an identification process that employs natural language processing (NLP) techniques with the goal of semi-automated vetting of detected claims against their device implementation. We also present our novel NLP components for verifying feature claims against relevant cybersecurity requirements. The verification pipeline includes components such as automated vendor identification, device document curation, feature claim identification utilizing sentiment analysis for conflict resolution, and reporting of features that are claimed to be supported or indicated as unsupported. Our novel matching engine represents the first automated information system available in the cybersecurity domain that directly aids the generation of ICS compliance reports.
Design of a Novel Information System for Semi-automated Management of Cybersecurity in Industrial Control Systems. ACM Transactions on Management Information Systems 14(1), 1–35. https://doi.org/10.1145/3546580. Published 2022-07-14.
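As a toy version of the claim-identification step, the sketch below flags sentences in a device document that look like cybersecurity feature claims and assigns a support/unsupported polarity. The regex cues and the sample document are invented stand-ins for the authors' trained NLP pipeline.

```python
import re

# illustrative claim cues, not the authors' trained model
CLAIM_PATTERNS = [
    r"\bsupports?\b.*\b(TLS|encryption|authentication)\b",
    r"\bdoes not support\b",
    r"\b(complies|compliant) with\b",
]

def find_claims(document):
    """Return sentences that look like cybersecurity feature claims,
    tagged as 'supported' or 'unsupported'."""
    claims = []
    for sentence in re.split(r"(?<=[.!?])\s+", document):
        for pat in CLAIM_PATTERNS:
            if re.search(pat, sentence, re.IGNORECASE):
                polarity = ("unsupported"
                            if re.search(r"\bnot\b", sentence, re.IGNORECASE)
                            else "supported")
                claims.append((sentence.strip(), polarity))
                break
    return claims

doc = ("The controller supports TLS 1.2 for all management traffic. "
       "The legacy port does not support authentication. "
       "Firmware updates are signed.")
print(find_claims(doc))
```

The paper's sentiment-analysis component plays roughly the role of the polarity check here, resolving whether a detected feature claim asserts or denies support.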
Nowadays, smart system environments for Industry 4.0 and the Internet of Things are experiencing rapid industrial upgrading. Big data technologies such as decision making, event detection, and classification are developed to help manufacturing organizations achieve smart systems. By applying data analysis, the potential value of rich data can be maximized, which will help manufacturing organizations complete another round of upgrading. In this article, we propose two new algorithms for big data analysis, namely UFCgen and UFCfast. Both algorithms are designed to collect three types of patterns to help people determine the market positions of different product combinations. We compare these algorithms on various types of datasets, both real and synthetic. The experimental results show that both algorithms can successfully achieve pattern classification by utilizing three different types of interesting patterns, drawn from all candidate patterns based on user-specified thresholds of utility and frequency. Furthermore, the list-based UFCfast algorithm outperforms the levelwise-based UFCgen algorithm in terms of both execution time and memory consumption.
Smart System: Joint Utility and Frequency for Pattern Classification. Qi-Yuan Lin, Wensheng Gan, Yongdong Wu, Jiahui Chen, Chien-Ming Chen. ACM Transactions on Management Information Systems 13(1), 1–24. https://doi.org/10.1145/3531480. Published 2022-06-09.
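The sketch below illustrates joint utility-frequency pattern classification on a toy transaction set: itemsets are bucketed by whether they clear user-specified frequency and utility thresholds. The brute-force enumeration and the three bucket names are illustrative; UFCgen and UFCfast use levelwise and list-based mining, respectively, to avoid enumerating every candidate.

```python
from itertools import combinations

def classify_patterns(transactions, utilities, min_freq, min_util):
    """Bucket itemsets into three types of interesting patterns by
    user-specified frequency and utility thresholds."""
    items = sorted({i for t in transactions for i in t})
    buckets = {"frequent_only": [], "high_utility_only": [], "both": []}
    for r in range(1, len(items) + 1):
        for itemset in combinations(items, r):
            freq = sum(1 for t in transactions if set(itemset) <= set(t))
            util = freq * sum(utilities[i] for i in itemset)
            if freq >= min_freq and util >= min_util:
                buckets["both"].append(itemset)
            elif freq >= min_freq:
                buckets["frequent_only"].append(itemset)
            elif util >= min_util:
                buckets["high_utility_only"].append(itemset)
    return buckets

# toy product combinations with per-item utilities (e.g., unit profit)
transactions = [("a", "b"), ("a", "b", "c"), ("a",), ("c",)]
utilities = {"a": 1, "b": 2, "c": 10}
print(classify_patterns(transactions, utilities, min_freq=2, min_util=12))
```

The three buckets map to the market-positioning idea in the abstract: combinations that sell often, combinations that are profitable, and combinations that are both.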
In this paper, we show that textual data from firm-related events in news articles can effectively predict various firm financial ratios, with or without historical financial ratios. We exploit state-of-the-art neural architectures, including pseudo-event embeddings, Long Short-Term Memory networks, and attention mechanisms. Our news-powered deep learning models outperform standard econometric models operating on precise historical accounting data. We also observe improved forecasting quality when integrating textual and numerical data streams. In addition, we provide in-depth case studies for model explainability and transparency. Our forecasting models, model attention maps, and firm embeddings benefit various stakeholders with quality predictions and explainable insights. Our proposed models can be applied whether or not numerical historical data is available.
Read the News, Not the Books: Forecasting Firms' Long-term Financial Performance via Deep Text Mining. Shuang (Sophie) Zhai, Zhu Zhang. ACM Transactions on Management Information Systems 14(1), 1–37. https://doi.org/10.1145/3533018. Published 2022-05-17.
Christian Janiesch, Marcus Fischer, Florian Imgrund, Adrian Hofmann, A. Winkelmann
By enabling Internet access while taking load off mobile networks, the concept of Wi-Fi sharing holds much potential. While trust-based concepts require a trusted intermediary and cannot prevent malicious behavior, for example through fake profiles, security-based approaches lack adequate accounting mechanisms and coverage. Against this backdrop, we develop a Wi-Fi sharing architecture based on blockchain technology and payment channel networks. Our contribution is twofold: first, we present a comprehensive collection of design principles for workable Wi-Fi sharing networks; second, we propose and evaluate a reference architecture that augments current approaches with adequate accounting mechanisms and facilitates performance, scalability, security, and participant satisfaction.
An Architecture Using Payment Channel Networks for Blockchain-based Wi-Fi Sharing. ACM Transactions on Management Information Systems, 1–24. https://doi.org/10.1145/3529097. Published 2022-04-19.
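The accounting mechanism at the heart of a payment channel network can be sketched as a sequence of off-chain state updates between two parties, with only channel open and close touching the blockchain. The class below omits the cryptographic signing of each state and uses invented names; it shows only the balance-update bookkeeping, not the authors' full architecture.

```python
class PaymentChannel:
    """Off-chain payment channel sketch: two parties exchange state
    updates; only open and close would touch the blockchain. Signing of
    each state by both parties is stubbed out for illustration."""

    def __init__(self, host, guest, host_deposit, guest_deposit):
        self.balances = {host: host_deposit, guest: guest_deposit}
        self.version = 0  # highest-version state wins at settlement

    def pay(self, sender, receiver, amount):
        if self.balances[sender] < amount:
            raise ValueError("insufficient channel balance")
        # off-chain update: adjust balances and bump the state version;
        # in a real network both parties would sign this new state
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.version += 1

    def close(self):
        # final balances would be submitted on-chain for settlement
        return dict(self.balances), self.version

# guest pays the Wi-Fi host per unit of traffic consumed
ch = PaymentChannel("host", "guest", host_deposit=0, guest_deposit=100)
for _ in range(3):
    ch.pay("guest", "host", 10)
print(ch.close())  # ({'host': 30, 'guest': 70}, 3)
```

Because each micro-payment is just a signed state update, per-megabyte accounting stays cheap, which is what makes this construction attractive for metered Wi-Fi sharing.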