An efficient deep learning-assisted person re-identification solution for intelligent video surveillance in smart cities
M. Maqsood, Sadaf Yasmin, S. Gillani, Maryam Bukhari, Seung-Ryong Rho, Sang-Soo Yeo
Pub Date: 2022-12-12  DOI: 10.1007/s11704-022-2050-4
Preserving conceptual model semantics in the forward engineering of relational schemas
G. Guidoni, João Paulo A. Almeida, G. Guizzardi
Pub Date: 2022-12-08  DOI: 10.3389/fcomp.2022.1020168
Forward engineering relational schemas based on conceptual models (in languages such as UML and ER) is an established practice, with several automated transformation approaches discussed in the literature and implemented in production tools. These transformations must bridge the gap between the primitives offered by conceptual modeling languages on the one hand and the relational model on the other. As a result, it is often the case that some of the semantics of the source conceptual model is lost in the transformation process. In this paper, we address this problem by forward engineering additional constraints along with the transformed schema (ultimately implemented as triggers). We formulate our approach in terms of the operations of “flattening” and “lifting” of classes to make our approach largely independent of the particular transformation strategy (one table per hierarchy, one table per class, one table per concrete class, one table per leaf class, etc.). An automated transformation tool is provided that traces the cumulative consequences of the operations as they are applied throughout the transformation process. We report on tests of this tool using models published in an open model repository.
Real-Time Music Following in Score Sheet Images via Multi-Resolution Prediction
Florian Henkel, G. Widmer
Pub Date: 2021-11-24  DOI: 10.3389/fcomp.2021.718340
The task of real-time alignment between a music performance and the corresponding score (sheet music), also known as score following, poses a challenging multi-modal machine learning problem. Training a system that can solve this task robustly with live audio and real sheet music (i.e., scans or score images) requires precise ground truth alignments between audio and note-coordinate positions in the score sheet images. However, these kinds of annotations are difficult and costly to obtain, which is why research in this area mainly utilizes synthetic audio and sheet images to train and evaluate score following systems. In this work, we propose a method that does not solely rely on note alignments but is additionally capable of leveraging data with annotations of lower granularity, such as bar or score system alignments. This allows us to use a large collection of real-world piano performance recordings coarsely aligned to scanned score sheet images and, as a consequence, improve over current state-of-the-art approaches.
Generative Adversarial Networks for Augmenting Training Data of Microscopic Cell Images
P. Baniukiewicz, E. Lutton, Sharon Collier, T. Bretschneider
Pub Date: 2019-11-26  DOI: 10.3389/fcomp.2019.00010
Generative adversarial networks (GANs) have recently been successfully used to create realistic synthetic microscopy cell images in 2D and predict intermediate cell stages. In the current paper we highlight that GANs can not only be used for creating synthetic cell images optimized for different fluorescent molecular labels, but that by using GANs for augmentation of training data involving scaling or other transformations the inherent length scale of biological structures is retained. In addition, GANs make it possible to create synthetic cells with specific shape features, which can be used, for example, to validate different methods for feature extraction. Here, we apply GANs to create 2D distributions of fluorescent markers for F-actin in the cell cortex of Dictyostelium cells (ABD), a membrane receptor (cAR1), and a cortex-membrane linker protein (TalA). The recent more widespread use of 3D lightsheet microscopy, where obtaining sufficient training data is considerably more difficult than in 2D, creates significant demand for novel approaches to data augmentation. We show that it is possible to directly generate synthetic 3D cell images using GANs, but limitations are excessive training times, dependence on high-quality segmentations of 3D images, and that the number of z-slices cannot be freely adjusted without retraining the network. We demonstrate that in the case of molecular labels that are highly correlated with cell shape, like F-actin in our example, 2D GANs can be used efficiently to create pseudo-3D synthetic cell data from individually generated 2D slices. Because high-quality segmented 2D cell data are more readily available, this is an attractive alternative to using less efficient 3D networks.
A Comparative Analysis of Student Performance in an Online vs. Face-to-Face Environmental Science Course From 2009 to 2016
J. Paul, F. Jefferson
Pub Date: 2019-11-12  DOI: 10.3389/fcomp.2019.00007
A growing number of students are now opting for online classes. They find the traditional classroom modality restrictive, inflexible, and impractical. In this age of technological advancement, schools can now provide effective classroom teaching via the Web. This shift in pedagogical medium is forcing academic institutions to rethink how they want to deliver their course content. The overarching purpose of this research was to determine which teaching method proved more effective over the eight-year period from 2009 to 2016. The scores of 548 students, 401 traditional students and 147 online students, in an environmental science class were used to determine which instructional modality generated better student performance. In addition to the overarching objective, we examined score variabilities between genders and classifications to determine if teaching modality had a greater impact on specific groups. No significant difference in student performance between online and face-to-face (F2F) learners was found overall, with respect to gender, or with respect to class rank. These data demonstrate the ability to similarly translate environmental science concepts for non-STEM majors in both traditional and online platforms irrespective of gender or class rank. A potential exists for increasing the number of non-STEM majors engaged in citizen science by using the flexibility of online learning to teach environmental science core concepts.
On Automatically Assessing Children's Facial Expressions Quality: A Study, Database, and Protocol
Arnaud Dapogny, Charline Grossard, S. Hun, S. Serret, O. Grynszpan, Séverine Dubuisson, David Cohen, Kévin Bailly
Pub Date: 2019-10-11  DOI: 10.3389/fcomp.2019.00005
While there exist a number of serious games geared towards helping children with ASD produce facial expressions, most of them fail to provide precise feedback that helps children learn adequately. Within the scope of the JEMImE project, which aims at developing such a serious game platform, we introduce in this paper a machine learning approach for discriminating between facial expressions and assessing the quality of the emotional display. In particular, we point out the limits in the generalization capacities of models trained on adult subjects. To circumvent this issue in the design of our system, we gather a large database depicting children's facial expressions to train and validate the models. We describe our protocol for eliciting facial expressions and obtaining quality annotations, and empirically show that our models obtain high accuracies in both classification and quality assessment of children's facial expressions. Furthermore, we provide some insight into what the models learn and which features are the most useful for discriminating between the various facial expression classes and qualities. This new model, trained on the dedicated dataset, has been integrated into a proof of concept of the serious game.
Keywords: Facial Expression Recognition, Expression Quality, Random Forests, Emotion, Children, Dataset
Combing K-means Clustering and Local Weighted Maximum Discriminant Projections for Weed Species Recognition
Shanwen Zhang, Jing Guo, Zhen Wang
Pub Date: 2019-09-11  DOI: 10.3389/fcomp.2019.00004
Weed species identification is a prerequisite for controlling weeds in smart agriculture. Controlling weeds in the field is challenging because field weeds are highly varied and irregular and appear against complex backgrounds. An identification method for weed species in crop fields is proposed based on GrabCut and local weighted maximum discriminant projections (LWMDP). First, GrabCut is used to remove most of the background, and K-means clustering (KMC) is utilized to segment weeds from the whole image. Then, LWMDP is employed to extract low-dimensional discriminant features. Finally, a support vector machine (SVM) classifier is adopted to identify the weed species. The characteristics of the method are that (1) GrabCut and KMC utilize the texture (color) information and boundary (contrast) information in the image to remove most of the background and obtain a clean weed image, which reduces the burden of the subsequent feature extraction; and (2) LWMDP seeks a transformation from the training samples such that, in the low-dimensional feature subspace, data points from different classes are mapped as far apart as possible while data points within the same class are projected as close together as possible, and the matrix inverse computation is avoided in the generalized eigenvalue problem, so the small sample size (SSS) problem is avoided naturally. Experimental results on the dataset of weed species images show that the proposed method is effective for weed species identification and can preliminarily meet the requirements of machine-vision-based multi-row spraying of crops.
Space Food Experiences: Designing Passenger's Eating Experiences for Future Space Travel Scenarios
Marianna Obrist, Yunwen Tu, Lining Yao, Carlos Velasco
Pub Date: 2019-07-25  DOI: 10.3389/fcomp.2019.00003
Given the increasing possibilities of short- and long-term space travel to the Moon and Mars, it is essential not only to design nutritious foods but also to make eating an enjoyable experience. To date, though, most research on space food design has emphasized the functional and nutritional aspects of food, and there are no systematic studies that focus on the human experience of eating in space. It is known, however, that food has a multi-dimensional and multisensorial role in societies and that sensory, hedonic, and social features of eating and food design should not be underestimated. Here, we present how research in the field of Human-Computer Interaction (HCI) can provide a user-centered design approach to co-create innovative ideas around the future of food and eating in space, balancing functional and experiential factors. Based on our research and inspired by advances in human-food interaction design, we have developed three design concepts that integrate and tackle the functional, sensorial, emotional, social, and environmental/atmospheric aspects of “eating experiences in space”. We can particularly capitalize on recent technological advances around digital fabrication, 3D food printing technology, and virtual and augmented reality to enable the design and integration of multisensory eating experiences. We also highlight that in future space travel, the target users will diversify. In relation to such future users, we need to consider not only astronauts (current users, paid to do the job) but also paying customers (non-astronauts) who will be able to book a space holiday to the Moon or Mars. To create the right conditions for space travel and satisfy those users, we need to innovate beyond the initial excitement of designing an “eating like an astronaut” experience. To do so we can draw upon prior HCI research in human-food interaction design and build on insights from food science and multisensory research, particularly research that has shown that the environments in which we eat and drink, and their multisensory components, can be crucial for an enjoyable food experience.