Multi-task learning aims to improve the generalization performance of related tasks through simultaneous learning in which prediction models share information. Recently, identifying significant feature interactions has attracted increasing interest because of its practical importance. We propose a second-order interaction method for multi-task learning that identifies significant linear and interaction terms. We develop a sparse tensor decomposition, based on a feature augmentation and a symmetrization trick, to express the prediction models of related tasks as linear combinations of shared parameters. We show that the proposed method can generate diverse relationships between linear and interaction terms. To minimize the resulting multiconvex objective function, we select an initial value by deriving unbiased estimators and proposing a tensor decomposition. Experiments on synthetic and benchmark datasets demonstrate the effectiveness of the proposed method.
{"title":"Sparse Tensor Decomposition for Multi-task Interaction Selection","authors":"Jun-Yong Jeong, C. Jun","doi":"10.1109/ICBK.2019.00022","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00022","url":null,"abstract":"Multi-task learning aims to improve the generalization performance of related tasks based on simultaneous learning where prediction models share information. Recently, identifying significant feature interaction attracts more interests because of its practical importance. We propose a second-order interaction method for multi-task learning, which identifies significant linear and interaction terms. We develop a sparse tensor decomposition based on a feature augmentation and a symmetrization trick to express the prediction models of related tasks as the linear combinations of the shared parameters. We show that the proposed method could generate diverse relationships between linear and interaction terms. In minimizing the resulting multiconvex objective function, we select an initial value by deriving unbiased estimators and proposing a tensor decomposition. Experiments on synthetic and benchmark datasets demonstrate the effectiveness of the proposed method.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116590213","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yao Pan, Ian Chen, F. Brasileiro, G. Jayaputera, R. Sinnott
Compared to the traditional approach of using virtual machines as the basis for the development and deployment of applications running in Cloud-based infrastructures, container technology provides developers with a higher degree of portability and availability, allowing them to build and deploy their applications in a much more efficient and flexible manner. A number of tools have been proposed to orchestrate complex applications comprising multiple containers, which require continuous monitoring and management actions to meet application-oriented and non-functional requirements. Different container orchestration tools provide different features and incur different overheads. As such, it is not always easy for developers to choose the orchestration tool that best suits their needs. In this paper we compare the benefits and overheads of the most popular open-source container orchestration tools currently available, namely Kubernetes and Docker in Swarm mode. We undertake a number of benchmarking exercises using well-known benchmarking tools to evaluate the performance overheads of container orchestration tools and to identify their pros and cons more generally. The results show that the overall performance of Kubernetes is slightly worse than that of Docker in Swarm mode. However, Docker in Swarm mode is not as flexible or powerful as Kubernetes in more complex situations.
{"title":"A Performance Comparison of Cloud-Based Container Orchestration Tools","authors":"Yao Pan, Ian Chen, F. Brasileiro, G. Jayaputera, R. Sinnott","doi":"10.1109/ICBK.2019.00033","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00033","url":null,"abstract":"Compared to the traditional approach of using virtual machines as the basis for the development and deployment of applications running in Cloud-based infrastructures, container technology provides developers with a higher degree of portability and availability, allowing developers to build and deploy their applications in a much more efficient and flexible manner. A number of tools have been proposed to orchestrate complex applications comprising multiple containers requiring continuous monitoring and management actions to meet application-oriented and non-functional requirements. Different container orchestration tools provide different features that incur different overheads. As such, it is not always easy for developers to choose the orchestration tool that will best suit their needs. In this paper we compare the benefits and overheads incurred by the most popular open source container orchestration tools currently available, namely: Kubernetes and Docker in Swarm mode. We undertake a number of benchmarking exercises from well-known benchmarking tools to evaluate the performance overheads of container orchestration tools and identify their pros and cons more generally. The results show that the overall performance of Kubernetes is slightly worse than that of Docker in Swarm mode. 
However, Docker in Swarm mode is not as flexible or powerful as Kubernetes in more complex situations.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"253 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121135955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In a signed graph, the edges have binary labels that indicate positive or negative relationships. In scenarios where some of the edge signs are unavailable, conventional learning methods are ineffective. In contrast, transfer learning methods can improve learning performance by using another network with adequate signs. In a social network, a problem often faced is that the network dimension is too high. Nonnegative Matrix Factorization (NMF) is a widely used matrix decomposition method for reducing this high dimensionality. However, the matrix that is generated may not be sparse enough, which can impair its representation ability. To address this problem, we propose Orthogonal Graph Regularized Nonnegative Matrix Factorization (OGNMF) to extract latent features from social networks, and we prove its convergence theoretically. Combined with TrAdaBoost, a classical transfer learning algorithm, experimental results on benchmark datasets demonstrate that our method outperforms the baseline methods.
{"title":"Edge Sign Prediction Based on Orthogonal Graph Regularized Nonnegative Matrix Factorization for Transfer Learning","authors":"Junwu Yu, Shuyin Xia, Guoyin Wang","doi":"10.1109/ICBK.2019.00050","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00050","url":null,"abstract":"In a signed graph, the edges have binary labels that indicate positive or negative relationships. In scenarios where some of the edge signs are unavailable, conventional learning methods will be ineffective. In contrast, transfer learning methods can improve the learning performance by using another network with adequate signs. In a social network, the problem often facedis that the network dimension is too high. Nonnegative Matrix Factorization (NMF) is a widely used matrix decomposition method to decrease the high dimensionality. However, the matrix that is generated may not be sparse enough, which can impact its representation ability. To address this problem, we propose Orthogonal Graph Regularized Nonnegative Matrix Factorization (OGNMF) to extract latent features from social networks and prove its convergence theoretically. Based on TrAdaBoost, a classical transfer learning algorithm, the experimental results using benchmark datasets demonstrate that our method has superior performance to the other baseline methods.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"81 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132979130","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Large amounts of streaming data in the form of texts and images are emerging in many real-world applications. These data streams often exhibit characteristics such as multiple labels, missing labels, and emerging new classes, which pose challenges to existing data stream classification algorithms in terms of precision, space, and time performance. On the one hand, data stream classification algorithms are mostly trained on fully labeled single-class data, whereas in the real world there is a large amount of unlabeled data and little labeled data, because labels are difficult to obtain. On the other hand, many existing multi-label data stream classification algorithms focus on classification with fully labeled data and without emerging new classes, and there are few semi-supervised methods. Therefore, this paper proposes a semi-supervised ensemble classification algorithm for multi-label data streams based on co-training. Firstly, the algorithm uses a sliding window mechanism to partition the data stream into data chunks. On the first w data chunks, the multi-label semi-supervised classification algorithm COINS, based on co-training, is used to train a base classifier on each chunk, and an ensemble model with w COINS classifiers is generated to adapt to a data stream environment with a large amount of unlabeled data. Meanwhile, a new-class detection mechanism is introduced: the (w+1)-th data chunk is predicted by the ensemble model to detect whether a new class is emerging. When a new label is detected, the classifier is retrained on the current data chunk and the ensemble model is updated. Finally, experimental results on five real datasets show that, compared with classical algorithms, the proposed approach improves the classification accuracy on multi-label data streams with a large number of missing labels and emerging new labels.
{"title":"Co-training Based on Semi-Supervised Ensemble Classification Approach for Multi-label Data Stream","authors":"Zhe Chu, Peipei Li, Xuegang Hu","doi":"10.1109/ICBK.2019.00016","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00016","url":null,"abstract":"A large amount of data streams in the form of texts and images has been emerging in many real-world applications. These data streams often present the characteristics such as multi-labels, label missing and new class emerging, which makes the existing data stream classification algorithm face the challenges in precision space and time performance. This is because, on the one hand, it is known that data stream classification algorithms are mostly trained on all labeled single-class data, while there are a large amount of unlabeled data and few labeled data due to it is difficult to obtain labels in the real world. On the other hand, many of existing multi-label data stream classification algorithms mostly focused on the classification with all labeled data and without emerging new classes, and there are few semi-supervised methods. Therefore, this paper proposes a semi-supervised ensemble classification algorithm for multi-label data streams based on co-training. Firstly, the algorithm uses the sliding window mechanism to partition the data stream into data chunks. On the former w data chucks, the multi-label semi-supervised classification algorithm COINS based on co-training is used to training a base classifier on each chunk, and then an ensemble model with w COINS classifiers is generated ensemble model to adapt to the environment of data stream with a large number of unlabeled data. Meanwhile, a new class emerging detection mechanism is introduced, and the w+1 data chunk is predicted by the ensemble model to detect whether there is a new class emerging. When a new label is detected, the classifier is retrained on the current data chunk, and the ensemble model is updated. 
Finally, experimental results on five real data sets show that: as compared with the classical algorithms, the proposed approach can improve the classification accuracy of multi-label data streams with a large number of missing labels and new labels emerging.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123579648","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
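The chunked-ensemble loop the abstract describes — keep base classifiers for the last w chunks, vote on incoming chunks, retrain when an unseen class appears — can be sketched as follows. COINS itself is a co-training multi-label learner; this sketch substitutes a nearest-centroid single-label base learner, so it only illustrates the window/vote/new-class mechanics, and all names are assumptions.

```python
from collections import Counter, deque
import numpy as np

class ChunkEnsemble:
    """Sliding-window ensemble with new-class detection (illustrative)."""

    def __init__(self, w=3):
        self.models = deque(maxlen=w)   # each model: {label: class centroid}
        self.known_labels = set()

    def fit_chunk(self, X, y):
        # Train one base "classifier" (nearest centroid) on this chunk.
        model = {lbl: X[y == lbl].mean(axis=0) for lbl in np.unique(y)}
        self.models.append(model)       # oldest model drops out at maxlen
        self.known_labels.update(model)

    def predict(self, x):
        # Majority vote over the base classifiers in the window.
        votes = Counter()
        for model in self.models:
            lbl = min(model, key=lambda l: np.linalg.norm(x - model[l]))
            votes[lbl] += 1
        return votes.most_common(1)[0][0]

    def process_chunk(self, X, y):
        # Detect an emerging class; retrain on the chunk if one appears.
        new_class = bool(set(np.unique(y)) - self.known_labels)
        if new_class:
            self.fit_chunk(X, y)
        return new_class
```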
The genetic algorithm is a classical evolutionary algorithm that mainly consists of mutation and crossover operations. Existing genetic algorithms implement these two operations on the current population and rarely use the spatial information that has already been traversed. To address this problem, this paper proposes an improved genetic algorithm that divides the feasible region into multiple granularities, called the multi-granularity genetic algorithm (MGGA). The algorithm adopts a multi-granularity space strategy based on a random tree, which accelerates the search speed in the multi-granular space. Firstly, a hierarchical strategy is applied to the current population to accelerate the generation of good individuals. Then, the multi-granularity space strategy is used to increase the search intensity in the sparse space and in the subspace where the current optimal solution is located. The experimental results on six classical functions demonstrate that the proposed MGGA can improve the convergence speed and solution accuracy and reduce the number of fitness evaluations required.
{"title":"A Multi-granularity Genetic Algorithm","authors":"Caoxiao Li, Shuyin Xia, Zizhong Chen, Guoyin Wang","doi":"10.1109/ICBK.2019.00027","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00027","url":null,"abstract":"The genetic algorithm is a classical evolutionary algorithm that mainly consists of mutation and crossover operations. Existing genetic algorithms implement these two operations on the current population and rarely use the spatial information that has been traversed. To address this problem, this paper proposes an improved genetic algorithm that divides the feasible region into multiple granularities. It is called the multi-granularity genetic algorithm (MGGA). This algorithm adopts a multi-granularity space strategy based on a random tree, which accelerates the searching speed of the algorithm in the multi-granular space. Firstly, a hierarchical strategy is applied to the current population to accelerate the generation of good individuals. Then, the multi-granularity space strategy is used to increase the search intensity of the sparse space and the subspace, where the current optimal solution is located. The experimental results on six classical functions demonstrate that the proposed MGGA can improve the convergence speed and solution accuracy and reduce the number of calculations required for the fitness value.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122646813","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaussian mixture models are widely used in a diverse range of research fields. As the number of components and dimensions grows, the computational cost of answering queries becomes unreasonably high for practical use. Approximation approaches are therefore necessary to make complex Gaussian mixture models more usable. The need for approximation is also driven by relatively recent representations that theoretically allow an unlimited number of mixture components (e.g., nonparametric Bayesian networks or infinite mixture models). In this paper we introduce an approximate inference algorithm that splits the existing query-answering algorithm into two steps and uses knowledge from the first step to avoid unnecessary calculations in the second step while maintaining a defined error bound. In highly complex mixture models we observed significant time savings even with low error bounds.
{"title":"Approximate Query Answering in Complex Gaussian Mixture Models","authors":"Mattis Hartwig, M. Gehrke, R. Möller","doi":"10.1109/ICBK.2019.00019","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00019","url":null,"abstract":"Gaussian mixture models are widely used in a diverse range of research fields. If the number of components and dimensions grow high, the computational costs for answering queries become unreasonably high for practical use. Therefore approximation approaches are necessary to make complex Gaussian mixture models more usable. The need for approximation approaches is also driven by the relatively recent representations that theoretically allow unlimited number of mixture components (e.g. nonparametric Bayesian networks or infinite mixture models). In this paper we introduce an approximate inference algorithm that splits the existing algorithm for query answering into two steps and uses the knowledge from the first step to reduce unnecessary calculations in the second step while maintaining a defined error bound. In highly complex mixture models we observed significant time savings even with low error bounds.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"19 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133545568","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient defect segmentation from photovoltaic (PV) electroluminescence (EL) images is a crucial process due to the random inhomogeneous background and the unbalanced distribution of crack and non-crack pixels. Automatic defect inspection greatly influences the quality of photovoltaic cells, so it is necessary to examine defects efficiently and accurately. In this paper we propose a novel end-to-end deep-learning-based architecture for defect segmentation. In the proposed architecture we introduce a novel global attention module to extract rich context information. Further, we modify the U-net by adding dilated convolutions on both the encoder and decoder sides, with skip connections from early layers to later layers on the encoder side. The proposed global attention module is then incorporated into the modified U-net. The model is trained and tested on a 512x512 photovoltaic electroluminescence image dataset, and the results are reported using mean Intersection over Union (IoU). In experiments, we compare the proposed model with other state-of-the-art methods. The mean IoU of the proposed method is 0.6477, with a pixel accuracy of 0.9738, which is better than the state-of-the-art methods. We demonstrate that the proposed method gives effective results with a smaller dataset and is computationally efficient.
{"title":"U-Net Based Defects Inspection in Photovoltaic Electroluminecscence Images","authors":"Muhammad Rameez Ur Rahman, Haiyong Chen, Wen Xi","doi":"10.1109/ICBK.2019.00036","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00036","url":null,"abstract":"Efficient defects segmentation from photovoltaic (PV) electroluminescence (EL) images is a crucial process due to the random inhomogeneous background and unbalanced crack non-crack pixel distribution. The automatic defect inspection of solar cells greatly influences the quality of photovoltaic cells, so it is necessary to examine defects efficiently and accurately. In this paper we propose a novel end to end deep learning-based architecture for defects segmentation. In the proposed architecture we introduce a novel global attention to extract rich context information. Further, we modified the U-net by adding dilated convolution at both encoder and decoder side with skip connections from early layers to later layers at encoder side. Then the proposed global attention is incorporated into the modified U-net. The model is trained and tested on Photovoltaic electroluminescence 512x512 images dataset and the results are recorded using mean Intersection over union (IOU). In experiments, we reported the results and made comparison between the proposed model and other state of the art methods. The mean IOU of proposed method is 0.6477 with pixel accuracy 0.9738 which is better than the state-of-the-art methods. 
We demonstrate that the proposed method can give effective results with smaller dataset and is computationally efficient.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121712107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
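The mean IoU metric used to report the segmentation results can be computed per class as follows; this is the standard definition (classes absent from both prediction and ground truth are skipped), not code from the paper.

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union between two integer class maps
    of identical shape."""
    ious = []
    for c in range(num_classes):
        p, t = (pred == c), (target == c)
        union = np.logical_or(p, t).sum()
        if union == 0:
            continue                     # class absent from both masks
        ious.append(np.logical_and(p, t).sum() / union)
    return float(np.mean(ious))
```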
{"title":"[Copyright notice]","authors":"","doi":"10.1109/icbk.2019.00003","DOIUrl":"https://doi.org/10.1109/icbk.2019.00003","url":null,"abstract":"","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"82 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124407366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yuxin Jin, Ze Yang, Ying He, Xianyu Bao, Gongqing Wu
Classification is a hot topic in fields such as machine learning and data mining. The traditional approach in machine learning is to find a single classifier closest to the real classification function, while ensemble classification integrates the results of base classifiers and then makes an overall prediction. Compared to using a single classifier, ensemble classification can significantly improve the generalization of the learning system in most cases. However, existing ensemble classification methods rarely consider the weight of each classifier, and few update the weights dynamically. In this paper, inspired by the idea of truth discovery, we propose a new ensemble classification method based on truth discovery (named ECTD). To the best of our knowledge, we are the first to apply the idea of truth discovery in the field of ensemble learning. Experimental results demonstrate that the proposed method performs well in ensemble classification.
{"title":"Ensemble Classification Method Based on Truth Discovery","authors":"Yuxin Jin, Ze Yang, Ying He, Xianyu Bao, Gongqing Wu","doi":"10.1109/ICBK.2019.00024","DOIUrl":"https://doi.org/10.1109/ICBK.2019.00024","url":null,"abstract":"Classification is a hot topic in such fields as machine learning and data mining. The traditional approach of machine learning is to find a classifier closest to the real classification function, while ensemble classification is to integrate the results of base classifiers, then make an overall prediction. Compared to using a single classifier, ensemble classification can significantly improve the generalization of the learning system in most cases. However, the existing ensemble classification methods rarely consider the weight of the classifier, and there are few methods to consider updating the weights dynamically. In this paper, we are inspired by the idea of truth discovery and propose a new ensemble classification method based on the truth discovery (named ECTD). As far as we know, we are the first to apply the idea of truth discovery in the field of ensemble learning. Experimental results demonstrate that the proposed method performs well in ensemble classification.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132681392","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Marketing connects product/service providers and customers. It runs through the whole life cycle of an organization (such as a manufacturing enterprise or a public safety department), including market opportunities, market penetration, market developments, product/service innovation, and possibly market renovation. Marketing intelligence (MI) seeks to facilitate a positive cycle among market opportunities, market penetration, and market developments, not just intelligent marketing. It applies AI, Big Data and CRM technologies to analyze huge amounts of heterogeneous multi-source data, and supports intelligent decision-making by mining operational patterns from production and consumption data, and providing data insights, customer profiling, brand analysis, personalized advertising, product/service recommendations, supply chain integration and inventory management.
{"title":"ICDM/ICBK 2019 Panel: Marketing Intelligence – Let Marketing Drive Efficiency and Innovation","authors":"Xindong Wu","doi":"10.1109/icbk.2019.00008","DOIUrl":"https://doi.org/10.1109/icbk.2019.00008","url":null,"abstract":"Marketing connects product/service providers and customers. It runs through the whole life cycle of an organization (such as a manufacturing enterprise or a public safety department), including market opportunities, market penetration, market developments, product/service innovation, and possibly market renovation. Marketing intelligence (MI) seeks to facilitate a positive cycle among market opportunities, market penetration, and market developments, not just intelligent marketing. It applies AI, Big Data and CRM technologies to analyze huge amounts of heterogeneous multi-source data, and supports intelligent decision-making by mining operational patterns from production and consumption data, and providing data insights, customer profiling, brand analysis, personalized advertising, product/service recommendations, supply chain integration and inventory management.","PeriodicalId":383917,"journal":{"name":"2019 IEEE International Conference on Big Knowledge (ICBK)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117230477","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}