Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00107
Kyeong-Min Lee, InA Kim, Kyu-Chul Lee
In a smart grid, various types of queries, such as ad-hoc queries and analytic queries, are issued over the data. Query evaluation on a single-node database engine is limited, because queries in the smart grid run over large-scale data. In this paper, to improve the performance of retrieving large-scale data in the smart grid environment, we propose a DQN-based join order optimization model on Spark SQL. The model learns from the actual processing times of queries evaluated on Spark SQL rather than from estimated costs. By learning optimal join orders from previous experience, we optimize join orders with performance similar to Spark SQL's optimizer, without collecting and computing statistics of the input data set.
{"title":"DQN-based Join Order Optimization by Learning Experiences of Running Queries on Spark SQL","authors":"Kyeong-Min Lee, InA Kim, Kyu-Chul Lee","doi":"10.1109/ICDMW51313.2020.00107","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00107","url":null,"abstract":"In a smart grid, various types of queries such as ad-hoc queries and analytic queries are requested for data. There is a limit to query evaluation based on a single node database engines because queries are requested for a large scale of data in the smart grid. In this paper, to improve the performance of retrieving a large scale of data in the smart grid environment, we propose a DQN-based join order optimization model on Spark SQL. The model learns the actual processing time of queries that are evaluated on Spark SQL, not the estimated costs. By learning the optimal join orders from previous experiences, we optimize the join orders with similar performance to Spark SQL without collecting and computing the statistics of an input data set.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125880840","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anomaly detection in multivariate time series is an intensive research topic in data mining, especially with the rise of Industry 4.0. However, few existing approaches address high-acquisition-frequency scenes, and only a minority take the periodicity of time series into consideration. In this paper, we propose a novel network, Dual-window RNN-CNN, to detect anomalies in periodic time series under high-acquisition-frequency scenes in IoT. We first apply the Dual-window to segment time series according to the periodicity of the data and solve the time alignment problem. Then we use a Multi-head GRU to compress the data volume and extract temporal features sensor by sensor, which not only addresses the problems caused by high acquisition frequency but also gives our network more flexible transfer ability. To improve the robustness of our network in different periodic IoT scenes, three different kinds of GRU modes are put forward. Finally, we use a CNN-based autoencoder to locate anomalies according to both temporal and spatial dependencies. It should also be noted that the Multi-head GRU broadens the receptive field of the CNN-based autoencoder. Two sets of experiments were carried out to verify the validity of Dual-window RNN-CNN. The first is conducted on the UCR/UEA benchmark to study the performance of Dual-window RNN-CNN under different structures and hyperparameters, since the datasets in the UCR/UEA benchmark contain enough timestamps to reflect the high acquisition frequency and periodicity in IoT. The second is conducted on the Yahoo Webscope benchmark and NAB to compare our network with other classic time-series anomaly detection approaches. Experimental results confirm that our Dual-window RNN-CNN outperforms the other approaches in anomaly detection of periodic multivariate time series, demonstrating the advantages of our network in high-acquisition-frequency scenes.
{"title":"Anomaly Detection of Periodic Multivariate Time Series under High Acquisition Frequency Scene in IoT","authors":"Shuo Zhang, Xiaofei Chen, Jiayuan Chen, Qiao Jiang, Hejiao Huang","doi":"10.1109/ICDMW51313.2020.00078","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00078","url":null,"abstract":"Anomaly Detection of Multivariate Time Series is an intensive research topic in data mining, especially with the rise of Industry 4.0. However, few existing approaches are taken under high acquisition scene, and only a minority of them took periodicity of time series into consideration. In this paper, we propose a novel network Dual-window RNN-CNN to detect periodic time series anomalies of high acquisition frequency scene in IoT. We first apply Dual-window to segment time series according to the periodicity of data and solve the time alignment problem. Then we utilize Multi-head GRU to compress the data volume and extract temporal features sensor by sensor, which not only solves the problems caused by high acquisition but also adds more flexible transfer ability to our network. In order to improve the robustness of our network in different periodic scenes of IoT, three different kinds of GRU mode are put forward. Finally we use CNN-based Autoencoder to locate anomalies according to both temporal and spatial dependencies. It should also be note that Multi-head GRU broadens the receptive field of CNN-based Autoencoder. Two parts of experiment were carried to verify the validity of Dual-Window RNN-CNN. The first part is conducted on UCR/UEA benchmark to discuss the performance of Dual-Window RNN-CNN under different structures and hyper parameters, for datasets in UCR/UAE benchmark contain enough timestamps to monitor the high acquisition and periodicity in IoT. The second part is conducted on Yahoo Webscope benchmark and NAB to compare our network with other classic time series anomaly detection approaches. 
Experiment results confirm that our Dual-Window RNN-CNN outperforms other approaches in anomaly detection of periodic multivariate time series, demonstrating the advantages of our network in high acquisition scene.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124852819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
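The Dual-window idea of cutting a stream into period-aligned segments can be sketched as simple windowing: an outer window spans one period, and an inner stride controls the overlap between consecutive windows. The function name and parameters are illustrative assumptions, not the paper's API:

```python
def dual_window(series, period, stride):
    """Cut a 1-D series into period-aligned windows: each window spans
    exactly one period; the stride controls overlap between consecutive
    windows (stride == period gives disjoint, one-per-period windows)."""
    if period <= 0 or stride <= 0:
        raise ValueError("period and stride must be positive")
    return [series[i:i + period] for i in range(0, len(series) - period + 1, stride)]

# A toy periodic signal with period 4, repeated 3 times.
signal = [0, 1, 2, 1] * 3

disjoint = dual_window(signal, period=4, stride=4)    # one window per period
overlapped = dual_window(signal, period=4, stride=2)  # 50% overlap
```

With the stride equal to the period, every window contains one full cycle, so identically shaped windows line up for direct comparison, which is the time-alignment property the paper exploits.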
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00045
JunYong Tong, Nick Torenvliet
This paper proposes a streaming anomaly detection algorithm using variational Bayesian non-parametric methods. We extend Dirichlet process mixture models to anomaly detection on online streaming data through a streaming variational Bayes method and a cohesion function. Using our algorithm, we are able to update model parameters sequentially in near real time with a fixed amount of computational resources. The algorithm captures the temporal dynamics of the data and enables good online anomaly detection. We demonstrate the performance, and discuss results, of the algorithm on an industrial dataset with anomalies provided by a local utility.
{"title":"Temporally-Reweighted Dirichlet Process Mixture Anomaly Detector","authors":"JunYong Tong, Nick Torenvliet","doi":"10.1109/ICDMW51313.2020.00045","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00045","url":null,"abstract":"This paper proposes a streaming anomaly detection algorithm using variational Bayesian non-parametric methods. We extend the use of Dirichlet process mixture models to anomaly detection for online streaming data through the use of streaming variational bayes method and a cohesion function. Using our algorithm, we were able to update model parameters sequentially near real-time, using a fixed amount of computational resources. The algorithm was able to capture the temporal dynamics of the data and enabled good online anomaly detection. We demonstrate the performance, and discuss results, of the algorithm on an industrial datasets with anomalies provided by a local utility.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130792155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00115
Ritika Pandey, P. Brantingham, Craig D. Uchida, G. Mohler
Homicide investigations generate large and diverse data in the form of witness interview transcripts, physical evidence, photographs, DNA, etc. Homicide case chronologies are summaries of these data, created by investigators, that consist of short text-based entries documenting specific steps taken in the investigation. A chronology tracks the evolution of an investigation, including when and how persons involved and items of evidence became part of a case. In this article we discuss a framework for creating knowledge graphs of case chronologies that may aid investigators in analyzing homicide case data and also allow for post hoc analysis of the key features that determine whether a homicide is ultimately solved. Our method consists of 1) performing named entity recognition to identify witnesses, suspects, and detectives in chronology entries, 2) using keyword expansion to identify documentary, physical, and forensic evidence in each entry, and 3) linking entities and evidence to construct a homicide investigation knowledge graph. We compare the performance of several candidate methodologies for these sub-tasks using homicide investigation chronologies from Los Angeles, California. We then analyze the association between network statistics of the knowledge graphs and homicide solvability.
{"title":"Building knowledge graphs of homicide investigation chronologies","authors":"Ritika Pandey, P. Brantingham, Craig D. Uchida, G. Mohler","doi":"10.1109/ICDMW51313.2020.00115","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00115","url":null,"abstract":"Homicide investigations generate large and diverse data in the form of witness interview transcripts, physical evidence, photographs, DNA, etc. Homicide case chronologies are summaries of these data created by investigators that consist of short text-based entries documenting specific steps taken in the investigation. A chronology tracks the evolution of an investigation, including when and how persons involved and items of evidence became part of a case. In this article we discuss a framework for creating knowledge graphs of case chronologies that may aid investigators in analyzing homicide case data and also allow for post hoc analysis of the key features that determine whether a homicide is ultimately solved. Our method consists of 1) performing named entity recognition to determine witnesses, suspects, and detectives from chronology entries 2) using keyword expansion to identify documentary, physical, and forensic evidence in each entry and 3) linking entities and evidence to construct a homicide investigation knowledge graph. We compare the performance of several choices of methodologies for these sub-tasks using homicide investigation chronologies from Los Angeles, California. 
We then analyze the association between network statistics of the knowledge graphs and homicide solvability.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123822929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
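The three-step pipeline (entity recognition, keyword expansion, linking) can be sketched with keyword matching standing in for a trained NER model. The toy vocabularies and chronology entries are purely illustrative assumptions:

```python
from collections import defaultdict

# Stand-ins: a real system would use a trained NER model and a learned,
# expanded keyword list; these toy vocabularies are illustrative only.
ROLE_WORDS = {"witness", "suspect", "detective"}
EVIDENCE_WORDS = {"dna", "photograph", "casing", "transcript", "fingerprint"}

def extract(entry):
    """Steps 1-2: pull role-tagged people and evidence mentions from one entry."""
    tokens = entry.lower().replace(".", "").split()
    people = [tokens[i + 1] for i, t in enumerate(tokens)
              if t in ROLE_WORDS and i + 1 < len(tokens)]
    evidence = [t for t in tokens if t in EVIDENCE_WORDS]
    return people, evidence

def build_graph(chronology):
    """Step 3: link every pair of entities/evidence co-mentioned in an entry."""
    graph = defaultdict(set)
    for entry in chronology:
        people, evidence = extract(entry)
        nodes = people + evidence
        for a in nodes:
            for b in nodes:
                if a != b:
                    graph[a].add(b)
    return graph

chronology = [
    "Detective smith interviewed witness jones about the casing.",
    "Suspect brown linked to dna at the scene.",
]
g = build_graph(chronology)
```

Co-mention linking yields an undirected graph whose degree, components, and other network statistics could then be correlated with solvability, as the article describes.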
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00057
Geet Shingi
The number of defaults on bank loans has been increasing in recent years. However, loan sanctioning is still done manually in many banking organizations, and dependency on human intervention and delays in results have been the biggest obstacles in this system. When implementing machine learning models for banking applications, the security of sensitive customer banking data has always been a crucial concern, and with strong legislative rules in place, sharing data with other organizations is not possible. In addition, loan datasets are highly imbalanced: there are very few samples of defaults compared to repaid loans. These problems make it difficult for a default prediction system to learn the patterns of defaults and thus to predict them. Previous machine-learning-based approaches to automating the process have trained models on a single organization's data, but today, classifying loan applications using only the data within one organization is no longer a sufficient or feasible solution. In this paper, we propose a federated learning-based approach for predicting loan applications that are less likely to be repaid, which helps resolve the above issues by sharing model weights that are aggregated at a central server. The federated system is coupled with the Synthetic Minority Over-sampling Technique (SMOTE) to solve the problem of imbalanced training data, and with a weighted aggregation based on the number of samples and each worker's performance on its own dataset to further improve results. The improved performance of our model on publicly available real-world data validates the approach. Flexible, aggregated models can prove crucial in keeping defaulters out of loan applications.
{"title":"A federated learning based approach for loan defaults prediction","authors":"Geet Shingi","doi":"10.1109/ICDMW51313.2020.00057","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00057","url":null,"abstract":"The number of defaults in bank loans have recently been increasing in the past years. However, the process of sanctioning the loan has still been done manually in many of the banking organizations. Dependency on human intervention and delay in results have been the biggest obstacles in this system. While implementing machine learning models for banking applications, the security of sensitive customer banking data has always been a crucial concern and with strong legislative rules in place, sharing of data with other organizations is not possible. Along with this, the loan dataset is highly imbalanced, there are very few samples of defaults as compared to repaid loans. Hence, these problems make the default prediction system difficult to learn the patterns of defaults and thus difficult to predict them. Previous machine learning-based approaches to automate the process have been training models on the same organization's data but in today's world, classifying the loan application on the data within the organizations is no longer sufficient and a feasible solution. In this paper, we propose a federated learning-based approach for the prediction of loan applications that are less likely to be repaid which helps in resolving the above mentioned issues by sharing the weight of the model which are aggregated at the central server. The federated system is coupled with Synthetic Minority Over-sampling Technique(SMOTE) to solve the problem of imbalanced training data. Further, The federated system is coupled with a weighted aggregation based on the number of samples and performance of a worker on his dataset to further augment the performance. The improved performance by our model on publicly available real-world data further validates the same. 
Flexible, aggregated models can prove to be crucial in keeping out the defaulters in loan applications.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123874255","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
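The weighted aggregation step, scaling each worker's contribution by its sample count and its validation performance, can be sketched as a weighted FedAvg over flat weight vectors. The model shapes, counts, and scores below are hypothetical:

```python
def weighted_fedavg(worker_weights, sample_counts, scores):
    """Aggregate worker models: each worker's contribution is proportional
    to (its number of training samples) x (its validation score), a
    sample- and performance-weighted variant of FedAvg."""
    coeffs = [n * s for n, s in zip(sample_counts, scores)]
    total = sum(coeffs)
    dim = len(worker_weights[0])
    return [sum(c * w[j] for c, w in zip(coeffs, worker_weights)) / total
            for j in range(dim)]

# Two workers: the second has twice the data and the same score,
# so it pulls the global average twice as hard.
global_w = weighted_fedavg(
    worker_weights=[[0.0, 0.0], [3.0, 3.0]],
    sample_counts=[100, 200],
    scores=[0.9, 0.9],
)
```

Only the weight vectors leave each bank; the raw loan records stay local, which is what lets the scheme respect the data-sharing restrictions the abstract describes.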
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00110
Hyung-Jun Moon, Seok-Jun Bu, Sung-Bae Cho
In time-series models for predicting residential energy consumption, the energy properties collected through multiple sensors usually include irregular and seasonal factors. The irregular pattern resulting from them, called peak demand, is a major cause of performance degradation. To enhance performance, we propose a convolutional-recurrent triplet network to learn and detect the demand peaks. The proposed model generates a latent space for demand peaks from the data, which is transferred into a convolutional neural network-long short-term memory (CNN-LSTM) model to predict future power demand. Experiments on the UCI household power consumption dataset, comprising a total of 2,075,259 time-series records, show that the proposed model reduces the error by 23.63% and outperforms state-of-the-art deep learning models, including the CNN-LSTM. In particular, the proposed model improves prediction performance by modeling the distribution of demand peaks in Euclidean space.
{"title":"Learning Disentangled Representation of Residential Power Demand Peak via Convolutional-Recurrent Triplet Network","authors":"Hyung-Jun Moon, Seok-Jun Bu, Sung-Bae Cho","doi":"10.1109/ICDMW51313.2020.00110","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00110","url":null,"abstract":"In the time-series models for predicting residential energy consumption, the energy properties collected through multiple sensors usually include irregular and seasonal factors. The irregular pattern resulting from them is called peak demand, which is a major cause of performance degradation. In order to enhance the performance, we propose a convolutional-recurrent triplet network to learn and detect the demand peaks. The proposed model generates the latent space for demand peaks from data, which is transferred into convolutional neural network-long short-term memory (CNN-LSTM) to finally predict the future power demand. Experiments with the dataset of UCI household power consumption composed of a total of 2,075,259 time-series data show that the proposed model reduces the error by 23.63% and outperforms the state-of-the-art deep learning models including the CNN-LSTM. Especially, the proposed model improves the prediction performance by modeling the distribution of demand peaks in Euclidean space.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"150 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122762712","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00029
Venkataramana B. Kini, A. Manjunatha
This paper proposes and evaluates a multitask transfer learning approach to collectively optimize customer loyalty, retail revenue, and promotional revenue. A multitask neural network is employed to predict a customer's propensity to purchase within fine-grained categories. The network is then fine-tuned using transfer learning for a specific promotional campaign. Lastly, retail revenue and promotional revenue are jointly optimized conditioned on customer loyalty. Experiments conducted on a large retail dataset show the efficacy of the proposed method compared with baselines used in the industry. A large retailer is currently adopting the proposed methodology in promotional campaigns owing to significant overall revenue and loyalty gains.
{"title":"Revenue Maximization using Multitask Learning for Promotion Recommendation","authors":"Venkataramana B. Kini, A. Manjunatha","doi":"10.1109/ICDMW51313.2020.00029","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00029","url":null,"abstract":"This paper proposes and evaluates a multitask transfer learning approach to collectively optimize customer loyalty, retail revenue, and promotional revenue. Multitask neural network is employed to predict a customer's propensity to purchase within fine-grained categories. The network is then fine-tuned using transfer learning for a specific promotional campaign. Lastly, retail revenue and promotional revenue are jointly optimized conditioned on customer loyalty. Experiments are conducted using a large retail dataset that shows the efficacy of the proposed method compared to baselines used in the industry. A large retailer is currently adopting the proposed methodology in promotional campaigning owing to significant overall revenue and loyalty gains.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128935401","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00022
A. Ramkissoon, Shareeda Mohammed
The existence of fake news is a problem challenging today's social-media-enabled world. Fake news can be classified using various methods, yet predicting and detecting it has proven challenging even for machine learning algorithms. This research investigates nine such machine learning algorithms to understand their performance on Credibility Based Fake News Detection. The study uses a standard dataset with features relating to the credibility of news publishers, which are analysed with each of the algorithms. The results of these experiments are assessed using four evaluation methodologies. The analysis reveals varying performance across the nine methods, and, on our selected dataset, one of them proves most appropriate for Credibility Based Fake News Detection.
{"title":"An Experimental Evaluation of Data Classification Models for Credibility Based Fake News Detection","authors":"A. Ramkissoon, Shareeda Mohammed","doi":"10.1109/ICDMW51313.2020.00022","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00022","url":null,"abstract":"The existence of fake news is a problem challenging today's social media enabled world. Fake news can be classified using varying methods. Predicting and detecting fake news has proven to be challenging even for machine learning algorithms. This research attempts to investigate nine such machine learning algorithms to understand their performance with Credibility Based Fake News Detection. This study uses a standard dataset with features relating to the credibility of news publishers. These features are analysed using each of these algorithms. The results of these experiments are analysed using four evaluation methodologies. The analysis reveals varying performance with the use of each of the nine methods. Based upon our selected dataset, one of these methods has proven to be most appropriate for the purpose of Credibility Based Fake News Detection.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"100 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121053428","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00101
Jin-Young Kim, Sung-Bae Cho
Recently, deep learning models have been used to predict energy consumption. However, for constructing smart grid systems, the conventional methods either have limited explanatory power or require manual analysis. To overcome this, in this paper we present a novel deep learning model that can explain its predictions by calculating the correlation between the latent variables and the output, while also forecasting future consumption with high accuracy. The proposed model is composed of 1) a main encoder that models past energy demand, 2) a sub encoder that models electric information other than global active power as a two-dimensional latent variable, 3) a predictor that maps future demand from the concatenation of the latent variables extracted from each encoder, and 4) an explainer that surfaces the most significant electric information. Several experiments on a household electric energy demand dataset show that the proposed model not only performs better than conventional models but can also explain its results by analyzing the correlation between the inputs, the latent variables, and the predicted energy demand in the form of time series.
Title: Electric Energy Demand Forecasting with Explainable Time-series Modeling
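The explainer's idea, ranking latent dimensions by their correlation with the predicted demand, can be sketched with a hand-rolled Pearson correlation. The toy latent traces below are hypothetical, not outputs of the paper's encoders:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def most_significant(latents, demand):
    """Rank latent dimensions by |correlation| with predicted demand and
    return the index of the strongest one, mirroring the explainer's role."""
    scores = [abs(pearson(dim, demand)) for dim in latents]
    return max(range(len(scores)), key=lambda i: scores[i])

# Toy traces: dimension 0 tracks demand exactly, dimension 1 oscillates.
demand = [1.0, 2.0, 3.0, 4.0, 5.0]
latents = [
    [2.0, 4.0, 6.0, 8.0, 10.0],   # perfectly correlated with demand
    [1.0, -1.0, 1.0, -1.0, 1.0],  # unrelated oscillation
]
top = most_significant(latents, demand)
```

Reporting which latent dimension (and hence which underlying electric signal) tracks the forecast is what gives the model its explanatory power without manual analysis.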
Pub Date: 2020-11-01, DOI: 10.1109/ICDMW51313.2020.00127
Y. Kostyuchenko, Qingshan Jiang
The globalization of the pharmaceutical supply chain has led to new challenges, foremost among them the fight against falsified and substandard pharmaceutical products. Such products cause ineffective or harmful therapies all over the world, and traditional centralized technical tools can hardly satisfy the requirements of the changing industry. In this paper, we investigate the application of Blockchain solutions to modernize the drug supply chain and minimize the amount of poor-quality medication.
{"title":"Blockchain Applications to combat the global trade of falsified drugs","authors":"Y. Kostyuchenko, Qingshan Jiang","doi":"10.1109/ICDMW51313.2020.00127","DOIUrl":"https://doi.org/10.1109/ICDMW51313.2020.00127","url":null,"abstract":"The globalization of the pharmaceutical supply chain has lead to new challenges, the leading position among them is the fight against falsified and substandard pharmaceutical products. Such kind of products causes ineffective or harmful therapies all over the world. Traditional centralized technical tools can hardly satisfy the requirements of the changing industry. In this paper, we research the application of Blockchain solutions to modernize the drug supply chain and minimize the amount of the poor-quality medications.","PeriodicalId":426846,"journal":{"name":"2020 International Conference on Data Mining Workshops (ICDMW)","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121258094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}