{"title":"打破语言学与人工智能之间的界限","authors":"Jinhai Wang, Yi Tie, Xia Jiang, Yilin Xu","doi":"10.4018/joeuc.334013","DOIUrl":null,"url":null,"abstract":"There is a wide connection between linguistics and artificial intelligence (AI), including the multimodal language matching. Multi-modal robots possess the capability to process various sensory modalities, including vision, auditory, language, and touch, offering extensive prospects for applications across various domains. Despite significant advancements in perception and interaction, the task of visual-language matching remains a challenging one for multi-modal robots. Existing methods often struggle to achieve accurate matching when dealing with complex multi-modal data, leading to potential misinterpretation or incomplete understanding of information. Additionally, the heterogeneity among different sensory modalities adds complexity to the matching process. To address these challenges, we propose an approach called vision-language matching with semantically aligned embeddings (VLMS), aimed at improving the visual-language matching performance of multi-modal robots.","PeriodicalId":49029,"journal":{"name":"Journal of Organizational and End User Computing","volume":"12 5","pages":""},"PeriodicalIF":3.6000,"publicationDate":"2023-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Breaking Boundaries Between Linguistics and Artificial Intelligence\",\"authors\":\"Jinhai Wang, Yi Tie, Xia Jiang, Yilin Xu\",\"doi\":\"10.4018/joeuc.334013\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"There is a wide connection between linguistics and artificial intelligence (AI), including the multimodal language matching. Multi-modal robots possess the capability to process various sensory modalities, including vision, auditory, language, and touch, offering extensive prospects for applications across various domains. Despite significant advancements in perception and interaction, the task of visual-language matching remains a challenging one for multi-modal robots. Existing methods often struggle to achieve accurate matching when dealing with complex multi-modal data, leading to potential misinterpretation or incomplete understanding of information. Additionally, the heterogeneity among different sensory modalities adds complexity to the matching process. 
To address these challenges, we propose an approach called vision-language matching with semantically aligned embeddings (VLMS), aimed at improving the visual-language matching performance of multi-modal robots.\",\"PeriodicalId\":49029,\"journal\":{\"name\":\"Journal of Organizational and End User Computing\",\"volume\":\"12 5\",\"pages\":\"\"},\"PeriodicalIF\":3.6000,\"publicationDate\":\"2023-11-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Journal of Organizational and End User Computing\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://doi.org/10.4018/joeuc.334013\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"COMPUTER SCIENCE, INFORMATION SYSTEMS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Organizational and End User Computing","FirstCategoryId":"91","ListUrlMain":"https://doi.org/10.4018/joeuc.334013","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMPUTER SCIENCE, INFORMATION SYSTEMS","Score":null,"Total":0}
Breaking Boundaries Between Linguistics and Artificial Intelligence

Jinhai Wang, Yi Tie, Xia Jiang, Yilin Xu
Journal of Organizational and End User Computing, published 2023-11-21. DOI: 10.4018/joeuc.334013
Linguistics and artificial intelligence (AI) are broadly connected, and multimodal language matching is one point of contact between them. Multi-modal robots can process several sensory modalities, including vision, audition, language, and touch, opening up extensive applications across many domains. Despite significant advances in perception and interaction, vision-language matching remains a challenging task for multi-modal robots. Existing methods often struggle to match accurately on complex multi-modal data, leading to misinterpretation or incomplete understanding of the information, and the heterogeneity among sensory modalities adds further complexity to the matching process. To address these challenges, we propose vision-language matching with semantically aligned embeddings (VLMS), an approach aimed at improving the vision-language matching performance of multi-modal robots.
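The abstract does not detail the VLMS architecture, so the following is only a minimal sketch of the general idea behind semantically aligned embeddings: a common dual-encoder pattern that projects image and text features into one shared space and trains them with a contrastive matching objective. All class names, feature dimensions, and the loss choice below are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: this shows a generic dual-encoder alignment scheme
# (CLIP-style contrastive matching), NOT the authors' actual VLMS model.
# All names and dimensions are assumptions for demonstration.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoderMatcher(nn.Module):
    """Projects image and text features into one shared semantic space."""
    def __init__(self, img_dim=2048, txt_dim=768, embed_dim=256):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, embed_dim)        # vision branch
        self.txt_proj = nn.Linear(txt_dim, embed_dim)        # language branch
        self.logit_scale = nn.Parameter(torch.tensor(2.0))   # learnable temperature

    def forward(self, img_feats, txt_feats):
        # L2-normalize so the dot product equals cosine similarity
        img_emb = F.normalize(self.img_proj(img_feats), dim=-1)
        txt_emb = F.normalize(self.txt_proj(txt_feats), dim=-1)
        # Pairwise similarity between every image and every caption in the batch
        return self.logit_scale.exp() * img_emb @ txt_emb.t()

def contrastive_matching_loss(logits):
    """Symmetric cross-entropy: matched image-text pairs lie on the diagonal."""
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Toy usage with random stand-ins for pre-extracted features
model = DualEncoderMatcher()
imgs = torch.randn(8, 2048)   # e.g., CNN image features (assumed shape)
txts = torch.randn(8, 768)    # e.g., language-model text features (assumed shape)
loss = contrastive_matching_loss(model(imgs, txts))
loss.backward()
```

Under this scheme, retrieval in either direction (image-to-text or text-to-image) reduces to ranking rows or columns of the similarity matrix, which is why aligning the two embedding spaces is the central design choice for matching quality.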
Journal description:
The Journal of Organizational and End User Computing (JOEUC) provides a forum for information technology educators, researchers, and practitioners to advance the practice and understanding of organizational and end user computing. The journal places a major emphasis on how to increase organizational and end user productivity and performance, and how to achieve organizational strategic and competitive advantage. JOEUC publishes full-length research manuscripts, insightful research and practice notes, and case studies from all areas of organizational and end user computing, selected after rigorous blind review by experts in the field.