
Latest Publications in Frontiers Comput. Sci.

An efficient deep learning-assisted person re-identification solution for intelligent video surveillance in smart cities
Pub Date : 2022-12-12 DOI: 10.1007/s11704-022-2050-4
M. Maqsood, Sadaf Yasmin, S. Gillani, Maryam Bukhari, Seung-Ryong Rho, Sang-Soo Yeo
Citations: 4
An improved master-apprentice evolutionary algorithm for minimum independent dominating set problem
Pub Date : 2022-12-12 DOI: 10.1007/s11704-022-2023-7
Shiwei Pan, Yiming Ma, Yiyuan Wang, Zhihui Zhou, Jinchao Ji, Minghao Yin, Shuli Hu
Citations: 5
MAML2: meta reinforcement learning via meta-learning for task categories
Pub Date : 2022-12-12 DOI: 10.1007/s11704-022-2037-1
Qiming Fu, Zhechao Wang, Nengwei Fang, Bin Xing, Xiao Zhang, Jianping Chen
Citations: 2
Preserving conceptual model semantics in the forward engineering of relational schemas
Pub Date : 2022-12-08 DOI: 10.3389/fcomp.2022.1020168
G. Guidoni, João Paulo A. Almeida, G. Guizzardi
Forward engineering relational schemas based on conceptual models (in languages such as UML and ER) is an established practice, with several automated transformation approaches discussed in the literature and implemented in production tools. These transformations must bridge the gap between the primitives offered by conceptual modeling languages on the one hand and the relational model on the other. As a result, it is often the case that some of the semantics of the source conceptual model is lost in the transformation process. In this paper, we address this problem by forward engineering additional constraints along with the transformed schema (ultimately implemented as triggers). We formulate our approach in terms of the operations of “flattening” and “lifting” of classes to make our approach largely independent of the particular transformation strategy (one table per hierarchy, one table per class, one table per concrete class, one table per leaf class, etc.). An automated transformation tool is provided that traces the cumulative consequences of the operations as they are applied throughout the transformation process. We report on tests of this tool using models published in an open model repository.
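The constraint-recovery idea can be illustrated with a minimal sketch (a hypothetical `Person`/`Student` hierarchy, not an example from the paper): a one-table-per-hierarchy flattening whose lost subclass semantics is restored by a trigger-like check run on every insert.

```python
# One-table-per-hierarchy flattening: Person and its subclass Student
# collapse into a single row shape with a "type" discriminator column.
# The subclass constraint ("only Students carry enrollment_id") is lost
# by the flattening and survives only as a trigger-like check.

TABLE = []  # the flattened "person" table

def check_row(row):
    # Recovered constraint, played here by a Python function in place
    # of a database BEFORE INSERT trigger.
    if row["type"] == "Student" and row.get("enrollment_id") is None:
        raise ValueError("Student rows require enrollment_id")
    if row["type"] == "Person" and row.get("enrollment_id") is not None:
        raise ValueError("Person rows must not carry enrollment_id")

def insert(row):
    check_row(row)
    TABLE.append(row)

insert({"type": "Student", "name": "Ana", "enrollment_id": 7})
insert({"type": "Person", "name": "Bob", "enrollment_id": None})
try:
    insert({"type": "Person", "name": "Eve", "enrollment_id": 9})
except ValueError:
    pass  # constraint violation caught, row rejected
```

Without the check, the third row would be silently accepted even though it violates the source conceptual model, which is precisely the semantics gap the paper's generated triggers close.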
Citations: 0
Real-Time Music Following in Score Sheet Images via Multi-Resolution Prediction
Pub Date : 2021-11-24 DOI: 10.3389/fcomp.2021.718340
Florian Henkel, G. Widmer
The task of real-time alignment between a music performance and the corresponding score (sheet music), also known as score following, poses a challenging multi-modal machine learning problem. Training a system that can solve this task robustly with live audio and real sheet music (i.e., scans or score images) requires precise ground truth alignments between audio and note-coordinate positions in the score sheet images. However, these kinds of annotations are difficult and costly to obtain, which is why research in this area mainly utilizes synthetic audio and sheet images to train and evaluate score following systems. In this work, we propose a method that does not solely rely on note alignments but is additionally capable of leveraging data with annotations of lower granularity, such as bar or score system alignments. This allows us to use a large collection of real-world piano performance recordings coarsely aligned to scanned score sheet images and, as a consequence, improve over current state-of-the-art approaches.
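As a rough illustration of the underlying alignment problem (not the authors' multi-resolution model), a classical offline dynamic time warping sketch on toy 1-D sequences; a real-time score follower must produce such an alignment incrementally rather than in batch:

```python
import numpy as np

# Offline DTW: aligns a "performance" feature sequence to a "score"
# sequence. Toy 1-D pitch values stand in for real audio/score features.
def dtw(a, b):
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
    # backtrack the optimal alignment path from (n, m) to (1, 1)
    i, j, path = n, m, []
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min(((D[i - 1, j - 1], i - 1, j - 1),
                       (D[i - 1, j], i - 1, j),
                       (D[i, j - 1], i, j - 1)))
    return path[::-1], D[n, m]

score = [1, 2, 3, 4, 5]
performance = [1, 1, 2, 3, 3, 4, 5]  # same melody, slower in places
path, cost = dtw(performance, score)
print(path, cost)  # cost 0: every performance frame matches its note
```

The granularity point in the abstract maps onto this picture directly: note-level ground truth pins down individual path points, while bar- or system-level annotations only constrain coarse regions of the same path.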
Citations: 3
Generative Adversarial Networks for Augmenting Training Data of Microscopic Cell Images
Pub Date : 2019-11-26 DOI: 10.3389/fcomp.2019.00010
P. Baniukiewicz, E. Lutton, Sharon Collier, T. Bretschneider
Generative adversarial networks (GANs) have recently been successfully used to create realistic synthetic microscopy cell images in 2D and predict intermediate cell stages. In the current paper we highlight that GANs can not only be used for creating synthetic cell images optimized for different fluorescent molecular labels, but that by using GANs for augmentation of training data involving scaling or other transformations the inherent length scale of biological structures is retained. In addition, GANs make it possible to create synthetic cells with specific shape features, which can be used, for example, to validate different methods for feature extraction. Here, we apply GANs to create 2D distributions of fluorescent markers for F-actin in the cell cortex of Dictyostelium cells (ABD), a membrane receptor (cAR1), and a cortex-membrane linker protein (TalA). The recent more widespread use of 3D lightsheet microscopy, where obtaining sufficient training data is considerably more difficult than in 2D, creates significant demand for novel approaches to data augmentation. We show that it is possible to directly generate synthetic 3D cell images using GANs, but limitations are excessive training times, dependence on high-quality segmentations of 3D images, and that the number of z-slices cannot be freely adjusted without retraining the network. We demonstrate that in the case of molecular labels that are highly correlated with cell shape, like F-actin in our example, 2D GANs can be used efficiently to create pseudo-3D synthetic cell data from individually generated 2D slices. Because high quality segmented 2D cell data are more readily available, this is an attractive alternative to using less efficient 3D networks.
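The length-scale point can be illustrated with a toy sketch (a synthetic ring image, not the paper's data): naive zoom augmentation doubles the apparent cortex thickness, which is exactly the distortion GAN-based augmentation is said to avoid.

```python
import numpy as np

# A synthetic "cell" whose cortex ring has a fixed pixel thickness.
# Rescaling the image also rescales the cortex, distorting the inherent
# length scale of the biological structure.
size = 64
yy, xx = np.mgrid[:size, :size]
r = np.hypot(yy - size / 2, xx - size / 2)
cell = ((r > 10) & (r < 14)).astype(float)  # thin cortex ring

def zoom2x(img):
    # crude 2x zoom by pixel repetition, then a center crop back to size
    big = img.repeat(2, axis=0).repeat(2, axis=1)
    off = img.shape[0] // 2
    return big[off:off + img.shape[0], off:off + img.shape[1]]

zoomed = zoom2x(cell)
# cortex width measured along the midline row doubles under the zoom
row_width = int(cell[size // 2].sum())
zoomed_width = int(zoomed[size // 2].sum())
print(row_width, zoomed_width)  # prints 6 12
```

A GAN trained on unscaled cells generates new morphologies at the native length scale instead, so the cortex thickness distribution of the training data is preserved.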
Citations: 24
A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016
Pub Date : 2019-11-12 DOI: 10.3389/fcomp.2019.00007
J. Paul, F. Jefferson
A growing number of students are now opting for online classes. They find the traditional classroom modality restrictive, inflexible, and impractical. In this age of technological advancement, schools can now provide effective classroom teaching via the Web. This shift in pedagogical medium is forcing academic institutions to rethink how they want to deliver their course content. The overarching purpose of this research was to determine which teaching method proved more effective over the eight-year period. The scores of 548 students (401 traditional and 147 online) in an environmental science class were used to determine which instructional modality generated better student performance. In addition to the overarching objective, we also examined score variabilities between genders and classifications to determine if teaching modality had a greater impact on specific groups. No significant difference in student performance between online and face-to-face (F2F) learners was found overall, with respect to gender, or with respect to class rank. These data demonstrate that environmental science concepts can be translated for non-STEM majors equally well in traditional and online platforms, irrespective of gender or class rank. A potential exists for increasing the number of non-STEM majors engaged in citizen science by using the flexibility of online learning to teach environmental science core concepts.
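A hedged sketch of the kind of two-group comparison behind the headline result, using Welch's t statistic on simulated score samples (only the group sizes are reused from the paper; the scores themselves are invented, not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)
f2f = rng.normal(75, 10, 401)     # simulated F2F final scores
online = rng.normal(75, 10, 147)  # simulated online final scores

def welch_t(x, y):
    # Welch's t: handles unequal group sizes and variances, as needed
    # when comparing 401 traditional students against 147 online ones.
    vx = x.var(ddof=1) / len(x)
    vy = y.var(ddof=1) / len(y)
    return (x.mean() - y.mean()) / np.sqrt(vx + vy)

t = welch_t(f2f, online)
# with several hundred degrees of freedom, |t| < 1.96 implies p > 0.05,
# i.e. no significant difference between the modalities
print(f"t = {t:.3f}")
```

Because both simulated groups are drawn from the same distribution, the statistic stays small, mirroring the paper's "no significant difference" finding in form only.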
Citations: 221
On Automatically Assessing Children's Facial Expressions Quality: A Study, Database, and Protocol
Pub Date : 2019-10-11 DOI: 10.3389/fcomp.2019.00005
Arnaud Dapogny, Charline Grossard, S. Hun, S. Serret, O. Grynszpan, Séverine Dubuisson, David Cohen, Kévin Bailly
While there exist a number of serious games geared towards helping children with ASD to produce facial expressions, most of them fail to provide precise feedback to help children learn adequately. In the scope of the JEMImE project, which aims at developing such a serious game platform, we introduce in this paper a machine learning approach for discriminating between facial expressions and assessing the quality of the emotional display. In particular, we point out the limits in the generalization capacities of models trained on adult subjects. To circumvent this issue in the design of our system, we gather a large database depicting children's facial expressions to train and validate the models. We describe our protocol to elicit facial expressions and obtain quality annotations, and empirically show that our models obtain high accuracies in both classification and quality assessment of children's facial expressions. Furthermore, we provide some insight on what the models learn and which features are the most useful to discriminate between the various facial expression classes and qualities. This new model, trained on the dedicated dataset, has been integrated into a proof of concept of the serious game. Keywords: Facial Expression Recognition, Expression quality, Random Forests, Emotion, Children, Dataset
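The keywords name Random Forests; a minimal sketch of that classifier on synthetic expression features (the toy data, feature dimensions, and class labels are all assumptions for illustration, not the children's database):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical geometric features (e.g., landmark distances) for three
# expression classes, generated as well-separated Gaussian clusters.
X = np.vstack([rng.normal(c, 1.0, size=(100, 8)) for c in (0, 3, 6)])
y = np.repeat([0, 1, 2], 100)  # toy labels: 0=neutral, 1=happy, 2=angry

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)

# A crude per-sample "quality" proxy: the forest's top class probability,
# i.e. how confidently a sample matches its predicted expression.
quality = clf.predict_proba(Xte).max(axis=1)
print(f"accuracy={acc:.2f}, mean quality={quality.mean():.2f}")
```

The forest's class probabilities also give `feature_importances_`-style introspection, which is the kind of insight into useful features the abstract mentions.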
Citations: 5
Combing K-means Clustering and Local Weighted Maximum Discriminant Projections for Weed Species Recognition
Pub Date : 2019-09-11 DOI: 10.3389/fcomp.2019.00004
Shanwen Zhang, Jing Guo, Zhen Wang
Abstract: Weed species identification is the premise of weed control in smart agriculture. Controlling weeds in the field is a challenging topic, because field weeds are quite varied and irregular, with complex backgrounds. An identification method for weed species in crop fields is proposed based on Grabcut and local weighted maximum discriminant projections (LWMDP). First, Grabcut is used to remove most of the background, and K-means clustering (KMC) is utilized to segment weeds from the whole image. Then, LWMDP is employed to extract low-dimensional discriminant features. Finally, a support vector machine (SVM) classifier is adopted to identify the weed species. The characteristics of the method are that (1) Grabcut and KMC utilize the texture (color) information and boundary (contrast) information in the image to remove most of the background and obtain a clean weed image, which reduces the burden of subsequent feature extraction; (2) LWMDP seeks a transformation from the training samples such that, in the low-dimensional feature subspace, data points from different classes are mapped as far apart as possible while within-class data points are projected as close together as possible, and the matrix inverse computation is avoided in the generalized eigenvalue problem, so the small sample size (SSS) problem is naturally avoided. The experimental results on the dataset of weed species images show that the proposed method is effective for weed species identification and can preliminarily meet the requirements of multi-row crop spraying based on machine vision.
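A rough sketch of the pipeline's shape, on synthetic data: cluster pixels to separate foreground from background, project features to a low-dimensional space, then classify with an SVM. PCA stands in for LWMDP here, which has no off-the-shelf implementation, so this shows only the segment-project-classify structure, not the paper's method.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA  # stand-in for LWMDP
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Step 1 (sketch): K-means on pixel colors separates bright "weed"
# pixels from a dark background in a toy 3-channel image.
pixels = np.vstack([rng.normal(0.2, 0.05, (200, 3)),   # dark background
                    rng.normal(0.7, 0.05, (200, 3))])  # bright foreground
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pixels)
fg = int(km.cluster_centers_.mean(axis=1).argmax())  # brighter cluster

# Steps 2-3 (sketch): project high-dimensional features to a low-dim
# discriminant space, then classify the species with an SVM.
X = np.vstack([rng.normal(m, 1.0, (80, 20)) for m in (0, 4)])
y = np.repeat([0, 1], 80)  # two hypothetical weed species
Z = PCA(n_components=3).fit_transform(X)
svm = SVC(kernel="rbf").fit(Z[::2], y[::2])
acc = svm.score(Z[1::2], y[1::2])
print(f"foreground cluster={fg}, species accuracy={acc:.2f}")
```

Unlike PCA, LWMDP is supervised: it uses the class labels to pull within-class points together and push between-class points apart, which is why it can outperform an unsupervised projection on small training sets.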
Citations: 13
Space Food Experiences: Designing Passenger's Eating Experiences for Future Space Travel Scenarios
Pub Date : 2019-07-25 DOI: 10.3389/fcomp.2019.00003
Marianna Obrist, Yunwen Tu, Lining Yao, Carlos Velasco
Given the increasing possibilities of short- and long-term space travel to the Moon and Mars, it is essential not only to design nutritious foods but also to make eating an enjoyable experience. To date, though, most research on space food design has emphasized the functional and nutritional aspects of food, and there are no systematic studies that focus on the human experience of eating in space. It is known, however, that food has a multi-dimensional and multisensorial role in societies and that sensory, hedonic, and social features of eating and food design should not be underestimated. Here, we present how research in the field of Human-Computer Interaction (HCI) can provide a user-centered design approach to co-create innovative ideas around the future of food and eating in space, balancing functional and experiential factors. Based on our research and inspired by advances in human-food interaction design, we have developed three design concepts that integrate and tackle the functional, sensorial, emotional, social, and environmental/atmospheric aspects of “eating experiences in space”. We can particularly capitalize on recent technological advances around digital fabrication, 3D food printing technology, and virtual and augmented reality to enable the design and integration of multisensory eating experiences. We also highlight that in future space travel, the target users will diversify. In relation to such future users, we need to consider not only astronauts (current users, paid to do the job) but also paying customers (non-astronauts) who will be able to book a space holiday to the Moon or Mars. To create the right conditions for space travel and satisfy those users, we need to innovate beyond the initial excitement of designing an “eating like an astronaut” experience. To do so we can draw upon prior HCI research in human-food interaction design and build on insights from food science and multisensory research, particularly research that has shown that the environments in which we eat and drink, and their multisensory components, can be crucial for an enjoyable food experience.
{"title":"Space Food Experiences: Designing Passenger's Eating Experiences for Future Space Travel Scenarios","authors":"Marianna Obrist, Yunwen Tu, Lining Yao, Carlos Velasco","doi":"10.3389/fcomp.2019.00003","DOIUrl":"https://doi.org/10.3389/fcomp.2019.00003","url":null,"abstract":"Given the increasing possibilities of short- and long-term space travel to the Moon and Mars, it is essential not only to design nutritious foods but also to make eating an enjoyable experience. To date, though, most research on space food design has emphasized the functional and nutritional aspects of food, and there are no systematic studies that focus on the human experience of eating in space. It is known, however, that food has a multi-dimensional and multisensorial role in societies and that sensory, hedonic, and social features of eating and food design should not be underestimated. Here, we present how research in the field of Human-Computer Interaction (HCI) can provide a user-centered design approach to co-create innovative ideas around the future of food and eating in space, balancing functional and experiential factors. Based on our research and inspired by advances in human-food interaction design, we have developed three design concepts that integrate and tackle the functional, sensorial, emotional, social, and environmental/ atmospheric aspects of “eating experiences in space”. We can particularly capitalize on recent technological advances around digital fabrication, 3D food printing technology, and virtual and augmented reality to enable the design and integration of multisensory eating experiences. We also highlight that in future space travel, the target users will diversify. In relation to such future users, we need to consider not only astronauts (current users, paid to do the job) but also paying customers (non-astronauts) who will be able to book a space holiday to the Moon or Mars. 
To create the right conditions for space travel and satisfy those users, we need to innovate beyond the initial excitement of designing an “eating like an astronaut” experience. To do so we can draw upon prior HCI research in human-food interaction design and build on insights from food science and multisensory research, particularly research that has shown that the environments in which we eat and drink, and their multisensory components, can be crucial for an enjoyable food experience.","PeriodicalId":305963,"journal":{"name":"Frontiers Comput. Sci.","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-07-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129740929","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Times cited: 35
Journal
Frontiers Comput. Sci.