Pub Date: 2022-04-06 | DOI: 10.1142/s1793351x22400050 | Int. J. Semantic Comput.
Use of AI to Help Researchers Improve their Research Funding Capacities, Relevance, and Performance
Odysseas Spyroglou, Cagri Uluc Yildirimoglu, A. Koumpis
Researchers and scientists worldwide face, in parallel to their core research activities, increasing pressure to successfully lead or participate in fundraising activities. The field has been experiencing fierce competition, with proposal success rates falling dramatically, while the complexity of funding instruments and the need for a broad understanding of impacts and of research priorities in connection with wider national and transnational (e.g. EU-wide) policy raise discomfort levels for individual researchers and scientists. In this paper, we suggest the use of transdisciplinary AI tools to support the (semi-)automation of several steps of the application and proposal preparation process.
Pub Date: 2022-04-04 | DOI: 10.1142/s1793351x22500015 | Int. J. Semantic Comput.
A Hybrid Approach to Analyze Cybersecurity News Articles by Utilizing Information Extraction & Sentiment Analysis Methods
Piyush Ghasiya, K. Okamura
Cybersecurity is becoming indispensable for everyone and everything in the era of the Internet of Things (IoT) revolution. Every aspect of human society — be it political, financial, technological, or cultural — is affected by cyber-attacks or incidents in one way or another. Newspapers are an excellent source that captures this web of cybersecurity issues. By applying various NLP techniques such as tf-idf, word embeddings, and machine learning-based sentiment analysis (SA), this research examines cybersecurity-related articles from 18 major newspapers (English-language online versions) from six countries (three newspapers per country), collected over one year from April 2018 to March 2019. The first objective is to extract the crucial events for each country, achieved in our first step, information extraction. The next objective is to find out what kinds of sentiments those crucial issues garnered, accomplished in our second step, SA. SA of news articles also helps in understanding each nation's mood on critical cybersecurity issues, which can aid decision-makers in charting new policies.
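The tf-idf weighting mentioned in the abstract can be sketched in a few lines. This is a generic pure-Python illustration of the technique, not the authors' pipeline; the sample headlines are invented:

```python
import math
from collections import Counter

def tfidf(docs):
    """Compute tf-idf weights for a small corpus of tokenized documents."""
    n = len(docs)
    df = Counter()                      # document frequency per term
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        total = len(doc)
        # term frequency scaled by inverse document frequency
        weights.append({t: (c / total) * math.log(n / df[t]) for t, c in tf.items()})
    return weights

docs = [
    "ransomware attack hits hospital network".split(),
    "data breach exposes customer records".split(),
    "hospital recovers from ransomware attack".split(),
]
w = tfidf(docs)
# Terms unique to one article outweigh terms shared across articles.
top = max(w[0], key=w[0].get)
```

Terms that appear in only one article (e.g. "hits", "network") receive higher weight than terms shared across articles (e.g. "attack"), which is what makes tf-idf useful for surfacing the distinctive events in each country's coverage.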
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400146 | Int. J. Semantic Comput.
US2RO: Union of Superpoints to Recognize Objects
Marcel Tiator, Anna Maria Kerkmann, C. Geiger, P. Grimm
The creation of interactive virtual reality (VR) applications from 3D-scanned content usually involves a lot of manual and repetitive work. Our research aim is to develop agents that recognize objects to enhance the creation of interactive VR applications. We trained partition agents in our superpoint growing environment, which we extended with an expert function. This expert function solves the sparse reward signal problem of previous approaches and enables the use of a variant of imitation learning and deep reinforcement learning with dense feedback. Additionally, the function allows us to calculate a performance metric for the degree of imitation of different partitions. Furthermore, we introduce an environment to optimize superpoint generation. We trained our agents on 1182 scenes from the ScanNet dataset. More specifically, we trained different neural network architectures on 1170 scenes and tested their performance on 12 scenes. Our intermediate results are promising, such that our partition system might be able to assist VR application development from 3D-scanned content in the near future.
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400110 | Int. J. Semantic Comput.
Learning a Generalizable Model of Team Conflict from Multiparty Dialogues
A. Enayet, G. Sukthankar
Good communication is indubitably the foundation of effective teamwork. Over time, teams develop their own communication styles and often exhibit entrainment, a conversational phenomenon in which humans synchronize their linguistic choices. Conversely, teams may experience conflict due to either personal incompatibility or differing viewpoints. We tackle the problem of predicting team conflict from embeddings learned from multiparty dialogues, such that teams with similar post-task conflict scores lie close to one another in vector space. Embeddings were extracted from three types of features: (1) dialogue acts, (2) sentiment polarity, and (3) syntactic entrainment. Machine learning models often suffer from domain shift; one advantage of encoding semantic features is their adaptability across multiple domains. To provide intuition on the generalizability of the different embeddings to other goal-oriented teamwork dialogues, we test the effectiveness of models trained on the Teams corpus on two other datasets. Unlike syntactic entrainment, both dialogue act and sentiment embeddings are effective for identifying team conflict. Our results show that dialogue act-based embeddings have the potential to generalize better than sentiment- and entrainment-based embeddings. These findings have potential ramifications for the development of conversational agents that facilitate teaming.
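The premise that teams with similar conflict scores lie close to one another in vector space is usually measured with cosine similarity. A minimal sketch; the three-dimensional "team embeddings" below are invented toy vectors, not outputs of the paper's models:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical team embeddings: two low-conflict teams and one high-conflict team.
team_low_a = [0.9, 0.1, 0.2]
team_low_b = [0.8, 0.2, 0.1]
team_high = [0.1, 0.9, 0.7]

# Teams with similar conflict scores should be closer (higher similarity).
sim_low_pair = cosine(team_low_a, team_low_b)
sim_mixed_pair = cosine(team_low_a, team_high)
```

Here `sim_low_pair` exceeds `sim_mixed_pair`, which is the geometric property the learned embedding space is trained to exhibit.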
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400134 | Int. J. Semantic Comput.
Automatic Title Generation for Learning Resources and Pathways with Pre-trained Transformer Models
Prakhar Mishra, Chaitali Diwan, S. Srinivasa, G. Srinivasaraghavan
Creating curiosity and interest in a topic is a challenging task in online learning. A good preview that outlines the contents of a learning pathway could help learners understand the topic and become interested in it. Towards this end, we propose a hierarchical title generation approach that generates semantically relevant titles for the learning resources in a learning pathway, as well as a title for the pathway itself. Our approach to automatic title generation for a given text is based on the pre-trained Transformer language model GPT-2. A pool of candidate titles is generated, an appropriate title is selected among them, and the selection is then refined or de-noised to produce the final title. The model is trained on research paper abstracts from arXiv and evaluated on three different test sets. We show that it generates semantically and syntactically relevant titles, as reflected in ROUGE and BLEU scores and human evaluations. We also propose an optional abstractive summarizer module, based on the pre-trained Transformer model T5, to shorten medium-length documents; this module is likewise trained and evaluated on research papers from the arXiv dataset. Finally, we show that the proposed hierarchical title generation model for learning pathways yields promising results.
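The generate-then-select step described in the abstract can be illustrated with a toy selection criterion. The paper's actual scoring is not specified in the abstract, so this sketch ranks candidates by simple unigram overlap with the source text; all strings are invented:

```python
def overlap_score(candidate, reference):
    """Fraction of candidate-title words that also occur in the reference text."""
    cand = candidate.lower().split()
    ref = set(reference.lower().split())
    return sum(w in ref for w in cand) / len(cand)

abstract = "we propose a hierarchical approach to generate titles for learning resources"
candidates = [
    "Hierarchical Title Generation for Learning Resources",
    "A Survey of Deep Learning",
]
# Pick the candidate most grounded in the source text.
best = max(candidates, key=lambda c: overlap_score(c, abstract))
```

A real system would generate the candidate pool with GPT-2 sampling and could use a learned scorer (or ROUGE against the source) instead of raw overlap, but the select-the-most-grounded-candidate pattern is the same.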
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400158 | Int. J. Semantic Comput.
Inspiring Movement - Physical Activity in a Virtual Sea as a Driver for Ecological Awareness
Carolin Straßmann, Alexander Arntz, S. Eimler
As environmental pollution continues to expand, new ways of raising awareness of its consequences need to be explored. Virtual reality has emerged as an effective tool for behavioral change. This paper investigates whether virtual reality applications controlled through physical activity can produce an even stronger effect, because they enhance attention and recall performance by stimulating working memory through motor functions. This was tested in an experimental study ([Formula: see text]) using a virtual reality head-mounted display in combination with the ICAROS fitness device, enabling participants to explore either a plastic-polluted or a non-polluted sea. Results indicated that using a regular controller elicits more presence and a more intense Flow experience than the ICAROS condition, which participants controlled via their physical activity. Moreover, the plastic-polluted stimulus was more effective than the non-polluted sea in inducing people's stated tendency to change their attitude.
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400109 | Int. J. Semantic Comput.
Generating Predictable and Adaptive Dialog Policies in Single- and Multi-domain Goal-oriented Dialog Systems
Nhat X. T. Le, A.B. Siddique, Fuad Jamour, Samet Oymak, Vagelis Hristidis
Most existing commercial goal-oriented chatbots are diagram-based; i.e. they follow a rigid dialog flow to fill the slot values needed to achieve a user's goal. Diagram-based chatbots are predictable, hence their adoption in commercial settings; however, their lack of flexibility may cause many users to leave the conversation before achieving their goal. On the other hand, state-of-the-art research chatbots use Reinforcement Learning (RL) to generate flexible dialog policies. However, such chatbots can be unpredictable, may violate the intended business constraints, and require large training datasets to produce a mature policy. We propose a framework that achieves a middle ground between diagram-based and RL-based chatbots: we constrain the space of possible chatbot responses using a novel structure, the chatbot dependency graph, and use RL to dynamically select the best valid responses. Dependency graphs are directed graphs that conveniently express a chatbot's logic by defining the dependencies among slots: all valid dialog flows are encapsulated in one dependency graph. Our experiments in both single-domain and multi-domain settings show that our framework quickly adapts to user characteristics and achieves up to a 23.77% improvement in success rate compared to a state-of-the-art RL model.
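A slot dependency graph of the kind described, where a slot becomes askable only once its prerequisite slots are filled, can be sketched as follows. The slot names are hypothetical (a restaurant-booking domain) and the paper's own formalism may differ:

```python
# Each slot maps to the prerequisite slots that must be filled before it.
deps = {
    "cuisine": [],
    "area": [],
    "restaurant": ["cuisine", "area"],   # can only be chosen after cuisine and area
    "reservation": ["restaurant"],
}

def valid_next_slots(deps, filled):
    """Slots not yet filled whose prerequisites are all satisfied."""
    return [s for s, pre in deps.items()
            if s not in filled and all(p in filled for p in pre)]

# With only "cuisine" filled, "area" is askable; "restaurant" is still blocked.
askable = valid_next_slots(deps, filled={"cuisine"})
```

An RL policy would then choose among `askable` rather than among all possible responses, which is how the graph keeps the learned policy inside the intended business constraints.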
Pub Date: 2021-12-01 | DOI: 10.1142/s1793351x21400122 | Int. J. Semantic Comput.
Identifying and Translating Subjective Content Descriptions Among Texts
Magnus Bender, Tanya Braun, M. Gehrke, Felix Kuhr, Ralf Möller, Simon Schiff
An agent pursuing a task may work with a corpus of documents as a reference library. Subjective content descriptions (SCDs) provide additional data that add value in the context of the agent's task. In the pursuit of documents to add to the corpus, an agent may come across new documents in which content text and SCDs from another agent are interleaved, and no distinction can be made unless the agent knows the content from elsewhere. Therefore, this paper presents a hidden Markov model-based approach to identify SCDs in a new document where SCDs occur inline among the content text. Additionally, we present a dictionary selection approach to identify suitable translations for content text and SCDs based on [Formula: see text]-grams. We conclude with a case study evaluating both approaches on simulated and real-world data.
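The n-gram-based dictionary selection can be illustrated by comparing a document's n-gram profile against candidate dictionaries and keeping the best match. This is a generic sketch with invented sentences, not the authors' implementation:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def ngram_profile(text, n=2):
    """Bigram counts of a whitespace-tokenized, lowercased text."""
    return Counter(ngrams(text.lower().split(), n))

def overlap(p, q):
    """Number of n-gram occurrences shared by two profiles."""
    return sum((p & q).values())

doc = ngram_profile("the agent adds the document to the corpus")
dict_a = ngram_profile("the agent reads the document")      # same domain
dict_b = ngram_profile("stock prices rose sharply today")   # unrelated domain
best = max(("a", dict_a), ("b", dict_b), key=lambda kv: overlap(doc, kv[1]))[0]
```

The in-domain dictionary shares bigrams such as ("the", "agent") with the document and is selected, while the unrelated one shares none.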
Pub Date: 2021-09-01 | DOI: 10.1142/s1793351x21400067 | Int. J. Semantic Comput.
Decoupled Iterative Deep Sensor Fusion for 3D Semantic Segmentation
Fabian Duerr, H. Weigel, J. Beyerer
One of the key tasks for autonomous vehicles or robots is robust perception of their 3D environment, which is why they are equipped with a wide range of different sensors. Building upon a robust sensor setup, understanding and interpreting the 3D environment is the next important step. Semantic segmentation of 3D sensor data, e.g. point clouds, provides valuable information for this task and is often seen as a key enabler for 3D scene understanding. This work presents an iterative deep fusion architecture for semantic segmentation of 3D point clouds, which builds upon a range image representation of the point clouds and additionally exploits camera features to increase accuracy and robustness. In contrast to other approaches, which fuse lidar and camera features only once, the proposed fusion strategy iteratively combines and refines lidar and camera features at different scales inside the network architecture. Additionally, the proposed approach can cope with camera failure and can jointly predict lidar and camera segmentation. We demonstrate the benefits of the presented iterative deep fusion approach on two challenging datasets, outperforming all range image-based lidar and fusion approaches. An in-depth evaluation underlines the effectiveness of the proposed fusion strategy and the potential of camera features for 3D semantic segmentation.
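The range image representation the architecture builds on is typically a spherical projection of the lidar point cloud. A minimal NumPy sketch; the field-of-view and resolution values are illustrative, not those of the paper:

```python
import numpy as np

def to_range_image(points, h=32, w=256, fov_up=15.0, fov_down=-15.0):
    """Project 3D points of shape (N, 3) onto an (h, w) range image."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)          # range of each point
    yaw = np.arctan2(y, x)                      # azimuth angle in [-pi, pi]
    pitch = np.arcsin(z / r)                    # elevation angle
    fov_up_r, fov_down_r = np.radians(fov_up), np.radians(fov_down)
    # Map azimuth to columns and elevation to rows, clamped to the image.
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int).clip(0, w - 1)
    v = ((fov_up_r - pitch) / (fov_up_r - fov_down_r) * h).astype(int).clip(0, h - 1)
    img = np.zeros((h, w))
    img[v, u] = r                               # one range value per occupied cell
    return img

pts = np.array([[10.0, 0.0, 0.0], [0.0, 10.0, 1.0]])
img = to_range_image(pts)
```

Each lidar point lands in one pixel of the 2D image, which lets standard image-segmentation backbones run on the point cloud and makes per-pixel fusion with aligned camera features straightforward.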