
Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence: Latest Publications

The Study of Phonological Neighborhoods in Chinese L1 and L2 Speech Production
Tongtong Xie, Haiying Ye, Hongyan Wang, J. V. D. Weijer
The tongue twister paradigm was used to compare the number and types of errors made by native and non-native speakers of Chinese when producing tongue twisters. The stimuli consisted of 106 quadruples: 32 transliterated from English tongue twisters, 26 vocalic twisters, and 48 consonant twisters. Both consonant and vowel errors were investigated (but not tone errors), and each error was classified as caused by preceding or following linguistic forms (or by both, or neither). To elicit more errors, participants were asked to speak 20% faster than their normal rate. Four native Mandarin Chinese speakers and six foreign learners of Chinese read the tongue twisters aloud, repeating each one four times per slide. The native speakers made a total of 606 errors, and the non-native speakers produced 3970. The results show a clear difference between L1 and L2 speakers, as well as a relation between years of learning Chinese and total number of errors.
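The reported relation between years of learning Chinese and total error count can be checked with a plain correlation coefficient. A minimal pure-Python sketch, using a hypothetical per-learner split of the 3970 non-native errors (the individual numbers are invented for illustration):

```python
def pearson_r(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: years of Chinese study vs. total twister errors
# for the six learners (split invented; only the 3970 total is real).
years = [1, 2, 3, 4, 5, 6]
errors = [900, 810, 700, 620, 520, 420]
r = pearson_r(years, errors)   # strongly negative for this data
```

A strongly negative `r` is what the paper's observation (more years of study, fewer errors) would predict.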
DOI: 10.1145/3446132.3446135 (published 2020-12-24)
Citations: 0
F3N: Full Feature Fusion Network for Object Detection
Gang Wang, Tang Kai, Kazushige Ouchi
This paper proposes a powerful feature fusion method for object detection. A significant accuracy improvement is achieved by fusing all multi-scale features at the cost of only a small amount of extra computation. We build our detector on the fast SSD [1] detector and call it Full Feature Fusion Network (F3N). Using several feature fusion modules, we fuse low-level and high-level features through a parallel low/high-level sub-network with repeated information exchange across multi-scale features. All multi-scale features are fused with concatenation and interpolation operations inside these modules. F3N achieves a new state-of-the-art result for one-stage object detection: with 512x512 input it reaches 82.5% mAP (mean Average Precision) and with 320x320 input 80.3% on the VOC2007 test, while on the VOC2012 test 512x512 input achieves 81.1% and 320x320 input 77.3%. On the MS COCO data set, 512x512 input obtains 33.9% and 320x320 input 30.4%. These accuracies are significant improvements over current mainstream approaches such as SSD [1], DSSD [8], FPN [11], and YOLO [6].
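The fusion step (interpolate the coarse map up to the fine resolution, then concatenate along channels) can be sketched with NumPy; the shapes and the nearest-neighbour interpolation choice are illustrative assumptions, not details from the paper:

```python
import numpy as np

def upsample2x(feat):
    # Nearest-neighbour 2x upsampling of a (C, H, W) feature map,
    # standing in for the interpolation step of a fusion module.
    return feat.repeat(2, axis=1).repeat(2, axis=2)

def fuse(low, high):
    # Concatenate the high-level (coarse) map, upsampled to the
    # low-level (fine) resolution, with the low-level map along
    # the channel axis.
    return np.concatenate([low, upsample2x(high)], axis=0)

low = np.ones((16, 8, 8))    # fine-resolution, low-level features
high = np.ones((32, 4, 4))   # coarse, semantically rich features
fused = fuse(low, high)      # shape (48, 8, 8)
```

In a full detector this fused map would feed the detection heads at each scale.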
DOI: 10.1145/3446132.3446152 (published 2020-12-24)
Citations: 0
Grouping news events using semantic representations of hierarchical elements of articles and named entities
Abhishek Desai, Prateek Nagwanshi
An enormous number of news articles is generated by different news agencies. The variation in journalistic content and the online availability of news make it difficult to monitor and interpret news in real time, so organizing news articles plays a crucial role in their consumption and interpretation. Our work assists the end user by grouping news articles by story. We present a novel approach that groups news articles based on a multi-level embedding representation of articles, coupled with a standard TF-IDF score computed over named entities. Our results show that combining the syntactic (TF-IDF) and semantic (BERT) representations boosts the performance of the news grouping task. We also experiment with transfer learning and fine-tuning of state-of-the-art BERT models for the document similarity task, using the output embeddings as document representations.
DOI: 10.1145/3446132.3446399 (published 2020-12-24)
Citations: 2
Exploration of a Balanced Reference Corpus with a Wide Variety of Text Mining Tools
Nicolas Turenne, Bokai Xu, Xinyue Li, Xindi Xu, Hongyu Liu, Xiaolin Zhu
To compare various techniques, the same platform is generally used, into which the user imports a text dataset. Another approach evaluates against a gold standard for a specific task, but a balanced common-language corpus is rarely used. We choose the Corpus of Contemporary American English (COCA) as a balanced reference corpus and split it into categories, such as topics and genres, to apply families of feature extraction and machine learning algorithms. We found that Stanford CoreNLP was faster and more accurate than NLTK, as well as more reliable and easier to understand. The clustering results show that higher modularity aids interpretation. For genre and topic classification, all techniques achieved relatively high scores, though below the state-of-the-art scores reported on challenge text datasets; Naïve Bayes outperformed the other alternatives. We hope that balanced corpora from a variety of vernacular (or low-resource) languages can serve as references for determining the efficiency of the wide diversity of state-of-the-art text mining tools.
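As a concrete reference point, a minimal multinomial Naïve Bayes classifier (the best performer in the paper's comparison) with add-one smoothing fits in a few lines of pure Python; the tiny genre corpus below is invented for illustration:

```python
import math
from collections import Counter

class NaiveBayes:
    # Minimal multinomial Naive Bayes with add-one (Laplace) smoothing.
    def fit(self, docs, labels):
        self.classes = set(labels)
        self.class_counts = Counter(labels)
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, lab in zip(docs, labels):
            toks = doc.lower().split()
            self.word_counts[lab].update(toks)
            self.vocab.update(toks)
        return self

    def predict(self, doc):
        def logp(c):
            prior = math.log(self.class_counts[c] / sum(self.class_counts.values()))
            denom = sum(self.word_counts[c].values()) + len(self.vocab)
            return prior + sum(math.log((self.word_counts[c][t] + 1) / denom)
                               for t in doc.lower().split())
        return max(self.classes, key=logp)

# Invented two-genre toy corpus.
docs = ["the senator proposed a new bill",
        "parliament passed the budget bill",
        "the team won the final match",
        "the coach praised the team defence"]
labels = ["news", "news", "sport", "sport"]
clf = NaiveBayes().fit(docs, labels)
```

On a real split of COCA the documents would be genre- or topic-labelled corpus sections rather than single sentences.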
DOI: 10.1145/3446132.3446192 (published 2020-12-24)
Citations: 0
Leveraging Different Context for Response Generation through Topic-guided Multi-head Attention
Weikang Zhang, Zhanzhe Li, Yupu Guo
Multi-turn dialogue systems play an important role in intelligent interaction. In particular, the response generation subtask in a multi-turn conversation system is challenging: it aims to generate more diverse and contextually relevant responses. Most methods focus on the sequential connections between sentence levels using hierarchical frameworks and attention mechanisms, but lack an overall semantic view such as topical information, leading to an incomplete understanding of the dialogue history. In this paper, we propose a context-augmented model, named TGMA-RG, which leverages the conversational context to promote interactivity and persistence of multi-turn dialogues through a topic-guided multi-head attention mechanism. Specifically, we extract topics from the conversational context and design a hierarchical encoder-decoder model with a multi-head attention mechanism, in which topic vectors serve as the attention queries to obtain the corresponding weights between each utterance and each topic. Our experimental results on two publicly available datasets show that TGMA-RG outperforms other baselines in terms of BLEU-1, BLEU-2, Distinct-1, Distinct-2, and PPL.
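The core attention step (a topic vector as the query, utterance vectors as keys and values) can be sketched with NumPy; the dimensions, the one-hot toy utterances, and the single-head simplification are assumptions, not the paper's exact architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def topic_guided_attention(topic_q, utterances):
    # Single attention head: the topic vector is the query; each
    # utterance vector serves as both key and value. Returns the
    # attention weights and the topic-aware context summary.
    d = utterances.shape[-1]
    scores = utterances @ topic_q / np.sqrt(d)   # (num_utterances,)
    weights = softmax(scores)
    context = weights @ utterances               # (d,)
    return weights, context

utts = np.eye(5, 8)       # five toy utterance vectors (one-hot)
topic = utts[2].copy()    # a topic vector closest to utterance 2
w, ctx = topic_guided_attention(topic, utts)
```

The weights `w` show which utterances the topic attends to; in TGMA-RG such a summary would condition the decoder.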
DOI: 10.1145/3446132.3446168 (published 2020-12-24)
Citations: 1
A Semantic Demand-Service Matching Method based on OWL-S for Cloud Testing Service Platform
Qing Xia, Chun-Xu Jiang, Chuan Yang, Hao Huang
Traditional testing services incur high costs and low efficiency because of the expense of testing tools and geographical constraints. A cloud testing service platform (CTSP) uses cloud infrastructure for testing services, leading to a more cost-effective testing solution. However, realizing intelligent matching between the various testing services and testing demands remains a common issue and goal for CTSPs. This paper investigates a semantic demand-service matching method for CTSPs. Considering the diverse, heterogeneous, and dynamic characteristics of cloud testing services, an Input, Output, Precondition, Effect (IOPE) matching model based on the Web Ontology Language for Services (OWL-S) is proposed, and a three-phase matching process is developed, consisting of parameter matching, attribute matching, and global matching. To compute the matching degree between a testing service and a testing demand during the matching process, a quantitative matching method is put forward. Finally, the effectiveness and feasibility of the proposed method are demonstrated through a case study.
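A minimal sketch of the quantitative matching idea: score how well a service's IOPE description agrees with a demand. The field names, example values, and equal weighting are hypothetical, not taken from the paper:

```python
def matching_degree(demand, service, weights=None):
    # Weighted fraction of IOPE fields on which the service's
    # advertised value exactly matches the demand's requirement.
    # A real OWL-S matcher would use ontology subsumption instead
    # of string equality; this is a deliberate simplification.
    weights = weights or {f: 1.0 for f in demand}
    total = sum(weights[f] for f in demand)
    matched = sum(weights[f] for f in demand
                  if service.get(f) == demand[f])
    return matched / total

# Hypothetical testing demand and candidate service.
demand = {"input": "apk", "output": "report",
          "precondition": "android", "effect": "coverage>=0.8"}
service = {"input": "apk", "output": "report",
           "precondition": "ios", "effect": "coverage>=0.8"}
score = matching_degree(demand, service)   # 3 of 4 fields agree
```

Services would then be ranked by this degree in the global matching phase.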
DOI: 10.1145/3446132.3446136 (published 2020-12-24)
Citations: 0
MTDNNF: Building the Security Framework for Deep Neural Network by Moving Target Defense*
Weiwei Wang, Xinli Xiong, Songhe Wang, Jingye Zhang
With the development of deep neural networks for pattern classification (recognizing handwritten digits on cheques), object classification for automated surveillance, and autonomous vehicles, the problem of DNNs confronting malicious inputs has become a hot topic. In this paper, we introduce a security-enhanced framework in which DNNs conduct classification based on moving target defense (MTDNNF). We present three pivotal characteristics that realize the framework: heterogeneity, selectivity, and adaptability, which enable MTDNNF while guaranteeing security and veracity. We also analyze the security and performance of MTDNNF. These analyses show that MTDNNF can provide significant security improvements against malicious inputs, with negligible extra performance cost under both large-scale and minimal scenarios.
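The moving-target idea (serving each query from a randomly chosen member of a heterogeneous model pool, so an attacker cannot tailor adversarial inputs to one fixed network) can be sketched as follows; the stand-in "models" are plain callables, and the uniform selection policy is a simplification of the framework:

```python
import random

def mtd_classify(models, x, rng=random.Random(42)):
    # Moving-target defence: each query is answered by a model
    # drawn at random from a heterogeneous pool, hiding which
    # network the attacker is actually probing.
    model = rng.choice(models)
    return model(x)

# Three "heterogeneous" stand-in classifiers that agree on clean
# inputs (as the framework's veracity requirement demands).
models = [lambda x: int(x > 0),
          lambda x: 1 if x > 0 else 0,
          lambda x: 0 if x <= 0 else 1]
preds = [mtd_classify(models, 1.5) for _ in range(10)]
```

Because the pool members agree on benign inputs, the defence changes the attack surface without changing clean-input behaviour.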
DOI: 10.1145/3446132.3446178 (published 2020-12-24)
Citations: 2
Customer classification based on spatial transition probability and Deep Forest
Yanbing Liu, Xiang Shi, Feijie Huang, Senyou Yang, Qiqi Fan, B. Zhu
Accurate customer classification can help companies save costs and create profits more effectively. Previous studies have rarely used spatio-temporal data for customer classification. In this paper, we put forward a hybrid classification method named MDF, based on a transition probability matrix and Deep Forest, to improve customer classification performance. The novelty of the proposed method is that it converts spatio-temporal data into a transition probability matrix and then adopts Deep Forest to classify customers into different types. Experiments were conducted on a real-world customer classification task from a retail company, comparing MDF with several benchmark methods. The experimental results show that MDF performs better than the other techniques. The new customer classification method provides a useful tool for customer relationship management.
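Building the spatial transition probability matrix from a customer's ordered visit sequence can be sketched in pure Python; the zone names and visit data are invented for illustration:

```python
from collections import Counter

def transition_matrix(visits, zones):
    # Row-stochastic matrix of transition probabilities between
    # spatial zones, estimated from one customer's ordered visits.
    counts = Counter(zip(visits, visits[1:]))
    matrix = []
    for src in zones:
        row_total = sum(counts[(src, dst)] for dst in zones)
        matrix.append([counts[(src, dst)] / row_total if row_total else 0.0
                       for dst in zones])
    return matrix

# Hypothetical visit trace for a single customer.
zones = ["home", "mall", "office"]
visits = ["home", "mall", "home", "office", "home", "mall"]
P = transition_matrix(visits, zones)   # e.g. P[home][mall] = 2/3
```

The flattened matrix (one per customer) would then be the feature vector fed to the Deep Forest classifier.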
DOI: 10.1145/3446132.3446171 (published 2020-12-24)
Citations: 0
Longitudinal collision warning system based on driver braking characteristics
Zhifeng Han, Xu Li, Jianchun Wang
A hierarchical braking control strategy based on analysis of driver braking acceleration is proposed in this paper. First, the vehicle state data of the driver during braking are collected with a driving simulator. Then, by analyzing the vehicle's acceleration during braking, the driver's desired acceleration during collision avoidance is determined, and TTC thresholds are set according to this desired acceleration. Two-level warning and two-level braking collision avoidance strategies are designed based on a second-order collision time model. Finally, an overall simulation model of the collision warning system is constructed in Simulink/CarSim. The co-simulation test results demonstrate that the hierarchical braking and warning strategy of this system can effectively avoid crashes.
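The TTC-based staging (two warning levels followed by two braking levels) can be sketched as below; the threshold values are illustrative placeholders, since the paper derives them from the driver's desired acceleration:

```python
def ttc(gap_m, closing_speed_mps):
    # Time-to-collision: inter-vehicle gap divided by closing speed.
    if closing_speed_mps <= 0:
        return float("inf")   # not closing: no collision risk
    return gap_m / closing_speed_mps

def warning_level(t, thresholds=(2.6, 1.8, 1.2, 0.8)):
    # Two warning and two braking stages, entered as TTC falls below
    # successive thresholds (values are placeholders, not the paper's).
    t1, t2, t3, t4 = thresholds
    if t > t1:
        return "none"
    if t > t2:
        return "warning-1"
    if t > t3:
        return "warning-2"
    if t > t4:
        return "partial-braking"
    return "full-braking"

level = warning_level(ttc(30.0, 20.0))   # TTC = 1.5 s
```

In the paper's scheme the four thresholds would be chosen so each stage matches the driver's desired deceleration profile.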
DOI: 10.1145/3446132.3446141 (published 2020-12-24)
Citations: 1
Deep Learning on Superpoint Generation with Iterative Clustering Network
Jianlong Yuan, Jin Xie
In 3D point clouds, a superpoint is a set of points that share common characteristics. Semantically pure superpoints can greatly reduce the number of points while ensuring that points in the same superpoint carry common semantic information. In this paper, we propose an end-to-end method for generating semantically pure superpoints. Specifically, we first use a lightweight PointNet-like network to embed low-dimensional point clouds into feature space to obtain semantic information. Next, we use farthest point sampling (FPS) to sample K points as initial cluster centers. For each center, we cluster points by jointly considering spatial and feature space. After clustering, we update the feature of each cluster center by averaging the point features within the same cluster. By iteratively clustering and updating the cluster features, we obtain coarse superpoints, which still contain a few incorrectly clustered points. Finally, to eliminate these points, we apply breadth-first search (BFS) to find and fuse them, yielding fine superpoints and improving semantic purity. Extensive experiments on S3DIS and ScanNet demonstrate the effectiveness of the proposed method. Furthermore, we achieve state-of-the-art results on both datasets.
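The FPS initialization can be sketched in pure Python; the toy point set is invented, and the feature-space distance term is omitted for brevity (the paper clusters jointly over spatial and feature space):

```python
def farthest_point_sampling(points, k):
    # Greedy FPS: start from the first point, then repeatedly take
    # the point farthest from the current set of centres.
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    centres = [0]
    dist = [d2(p, points[0]) for p in points]   # dist to nearest centre
    while len(centres) < k:
        nxt = max(range(len(points)), key=lambda i: dist[i])
        centres.append(nxt)
        dist = [min(dist[i], d2(points[i], points[nxt]))
                for i in range(len(points))]
    return centres

# Toy 3D point set: two near-duplicates plus two distant points.
pts = [(0, 0, 0), (0.1, 0, 0), (10, 0, 0), (0, 10, 0)]
idx = farthest_point_sampling(pts, 3)   # picks well-spread centres
```

The sampled indices would seed the iterative cluster-assign/centre-update loop described above.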
{"title":"Deep Learning on Superpoint Generation with Iterative Clustering Network","authors":"Jianlong Yuan, Jin Xie","doi":"10.1145/3446132.3446139","DOIUrl":"https://doi.org/10.1145/3446132.3446139","url":null,"abstract":"In 3D point clouds, superpoint is a set of points that share common characteristics. Semantically pure superpoints can greatly reduce the number of points while ensuring that the points located in the same superpoint have common semantic information. In this paper, we propose an end-to-end method for generating semantically pure superpoints. Specifically, we first use a light PointNet-liked network to embed low-dimensional point clouds into feature space to obtain semantic information. Next, we use farthest point sampling (FPS) to sample K points as the initial cluster centers. For each center, we cluster the points by jointly considering spatial and feature space. After clustering, we update the feature of each cluster center by simply averaging the point feature in the same cluster. By iteratively clustering and updating the feature of clusters, we obtain coarse superpoints, which contain a few points incorrectly clustered. Finally, to eliminate incorrectly clustered points, we leverage the breadth-first-search (BFS) to find and fuse them to obtain fine superpoints, leading to improvement on semantically pure superpoints. Extensive experiments conducted on S3DIS and ScanNet demonstrate the effectiveness of the proposed method. 
Furthermore, we achieve the state-of-the-art on both two datasets.","PeriodicalId":125388,"journal":{"name":"Proceedings of the 2020 3rd International Conference on Algorithms, Computing and Artificial Intelligence","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129417447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
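The clustering loop in the abstract above — FPS seeding, assignment under a joint spatial-plus-feature distance, and centroid averaging — can be sketched as follows. This is a rough NumPy illustration under assumed choices (the feature weight `w`, the fixed iteration count, and the function names are all hypothetical), and it omits the paper's final BFS fusion step.

```python
import numpy as np

def farthest_point_sampling(xyz, k, seed=0):
    """Greedy FPS: repeatedly pick the point farthest from all chosen centers."""
    rng = np.random.default_rng(seed)
    n = xyz.shape[0]
    centers = [int(rng.integers(n))]
    d = np.full(n, np.inf)
    for _ in range(k - 1):
        d = np.minimum(d, np.linalg.norm(xyz - xyz[centers[-1]], axis=1))
        centers.append(int(np.argmax(d)))
    return np.array(centers)

def iterative_superpoint_clustering(xyz, feat, k, iters=5, w=0.5):
    """Assign points to k centers using a joint spatial + feature distance,
    then update each center as the mean of its cluster, and repeat."""
    idx = farthest_point_sampling(xyz, k)
    c_xyz, c_feat = xyz[idx].copy(), feat[idx].copy()
    labels = np.zeros(xyz.shape[0], dtype=int)
    for _ in range(iters):
        # (n, k) pairwise distances in spatial and feature space
        d_sp = np.linalg.norm(xyz[:, None, :] - c_xyz[None, :, :], axis=2)
        d_ft = np.linalg.norm(feat[:, None, :] - c_feat[None, :, :], axis=2)
        labels = np.argmin(d_sp + w * d_ft, axis=1)
        for j in range(k):
            mask = labels == j
            if mask.any():
                c_xyz[j] = xyz[mask].mean(axis=0)
                c_feat[j] = feat[mask].mean(axis=0)
    return labels
```

In the paper the point features come from the learned PointNet-like embedding and the whole pipeline is trained end to end; here random features stand in only to show the clustering mechanics.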