
Latest publications in Computer science & information technology

An Object Detection Navigator to Assist the Visually Impaired using Artificial Intelligence and Computer Vision
Pub Date : 2021-07-10 DOI: 10.5121/CSIT.2021.111005
Ethan Wu, Jonathan Sahagun, Yu Sun
The advent and worldwide adoption of smartphones have enriched the lives of many people. However, one particular group, the visually impaired, still needs specific apps to help with daily life. Thus, I am developing this Smart app specifically to help the visually impaired. In particular, I hope to integrate the functions of Google Maps into the Smart app. While Google Maps functions well as a GPS for the average person without any impairment, I am adding features to the Smart app so that it can guide the sight-impaired. For example, the app will use the smartphone's camera to guide the user to the desired destination. Thus, using a phone's built-in camera, the Smart app can gently and safely guide any sight-impaired person on foot to a predetermined destination. One can think of the Smart app as an improvement upon Google Maps for the visually impaired.
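The abstract does not name a concrete detection stack; as a hedged illustration only, the per-frame obstacle check such a guide app needs could be built on an off-the-shelf detector. A minimal sketch using torchvision's pretrained Faster R-CNN (an assumption, not the authors' implementation; `obstacles_in_frame` and the score threshold are hypothetical):

```python
# Hedged sketch: per-frame obstacle detection for a walking guide.
# torchvision's pretrained Faster R-CNN stands in for whatever detector
# the authors actually use; all names here are illustrative.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def obstacles_in_frame(frame, score_threshold=0.7):
    """Return boxes and labels of confidently detected objects in a camera frame."""
    with torch.no_grad():
        output = model([to_tensor(frame)])[0]   # frame: PIL image or HxWx3 array
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep]
```

In a real app the returned boxes would drive audio or haptic guidance cues; that layer is omitted here.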
Citations: 0
The Colored Disconnection Numbers of Cellular and Grid Networks
Pub Date : 2021-07-10 DOI: 10.5121/CSIT.2021.111009
Xuqing Bai, Xueliang Li, Yindi Weng
Let G be a nontrivial link-colored connected network. A link-cut R of G is called a rainbow link-cut if no two of its links are colored the same. A link-colored network G is rainbow disconnected if for every two nodes u and v of G, there exists a u-v rainbow link-cut separating them. Such a link coloring is called a rainbow disconnection coloring of G. For a connected network G, the rainbow disconnection number of G, denoted by rd(G), is defined as the smallest number of colors needed to make G rainbow disconnected. Similarly, there are several other new concepts of network coloring, such as proper disconnection coloring, monochromatic disconnection coloring, and rainbow node-disconnection coloring. In this paper, we obtain the exact values of the rainbow (node-)disconnection numbers and the proper and monochromatic disconnection numbers of cellular networks and grid networks.
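As a concrete illustration of the definition (not taken from the paper), rd(G) can be computed by brute force on very small networks: try link colorings with increasing numbers of colors and, for each node pair, search for a rainbow link-cut. A minimal Python sketch, exponential in the number of links and meant only to make the definition tangible:

```python
# Brute-force rd(G) for tiny networks, straight from the definition:
# the smallest k such that some k-coloring of the links gives every
# node pair u, v a rainbow u-v link-cut. Illustration only.
from itertools import combinations, product

def separates(n, edges, removed, u, v):
    """True if deleting `removed` leaves no u-v path (DFS on the rest)."""
    adj = {i: [] for i in range(n)}
    for a, b in edges:
        if (a, b) not in removed:
            adj[a].append(b)
            adj[b].append(a)
    seen, stack = {u}, [u]
    while stack:
        for y in adj[stack.pop()]:
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return v not in seen

def rd(n, edges):
    cuts = [set(c) for r in range(1, len(edges) + 1)
            for c in combinations(edges, r)]
    for k in range(1, len(edges) + 1):          # try fewer colors first
        for colors in product(range(k), repeat=len(edges)):
            col = dict(zip(edges, colors))
            def rainbow(cut):
                cs = [col[e] for e in cut]
                return len(cs) == len(set(cs))  # no color repeated
            if all(any(rainbow(c) and separates(n, edges, c, u, v)
                       for c in cuts)
                   for u, v in combinations(range(n), 2)):
                return k

# The 4-cycle needs two colors: one color per arc around any node pair.
print(rd(4, [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 2
```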
Citations: 0
A Development Framework for a Conversational Agent to Explore Machine Learning Concepts
Pub Date : 2021-07-10 DOI: 10.5121/CSIT.2021.111004
A. Arslan
This study introduces a discussion platform and curriculum designed to help people understand how machines learn. The research shows how to train an agent through dialogue and how visualization can reveal the way information is represented. The paper begins by providing a comprehensive definition of AI literacy based on existing research and integrates a wide range of subject documents into a set of key AI literacy skills for developing user-centered AI. These functional and structural considerations are organized into a conceptual framework grounded in the literature. The paper's contributions can be used to initiate discussion and guide future research on AI learning within the computer science community.
Citations: 0
Cross-modal Perception in Kirundi
Pub Date : 2021-07-10 DOI: 10.5121/CSIT.2021.111007
Emmanuel Ahishakiye
Languages do not always use specific perception words to refer to specific senses. A word from one sense can metaphorically express another physical perception meaning. For Kirundi, findings from a corpus-based analysis revealed cross-modal polysemy and a bidirectional hierarchy between higher and lower senses. The attested multisensory expression of the auditory verb kwûmva ‘hear’ allows us to reduce the sense modalities to two: vision and audition. Moreover, the auditory experience verb kwûmva ‘hear’ shows that lower senses can extend to higher senses through synaesthetic metaphor (e.g. kwûmva akamōto ‘lit: hear a smell’, ururírīmbo ruryōshé ‘lit: a tasty song’, ururirimbo ruhimbâye ‘lit: a pleasant song’). However, in collocations involving emotion words, it connects perception to emotion (e.g. kwûmva inzara ‘lit: hear hunger’, kwûmva umunêzēro ‘lit: hear happiness’). This association indicates that perception in Kirundi draws on both internal and external stimuli, treating feelings as part of the perception system.
Citations: 0
Rational Mobile Application to Detect Language and Compose Annotations: Notespeak App
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110918
Ying Ma, Yu Sun
Students in international classroom settings face difficulties comprehending and writing down the material shared with them, which causes unnecessary frustration and misunderstanding. Utilizing digital aids to record and store this material can alleviate these issues and support comprehension by providing other means of study and reinforcement. This paper presents an application that actively listens and writes down notes for students as teachers instruct the class. We applied the application to multiple class settings and company meetings and conducted a qualitative evaluation of the approach.
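The paper does not list its speech stack; a minimal sketch of the listen-and-transcribe loop, assuming the off-the-shelf SpeechRecognition package and Google's free web recognizer (the output file name and phrase limit are hypothetical):

```python
# Hedged sketch of a listen-and-transcribe note loop using the
# SpeechRecognition package (an assumed stack, not the paper's own).
import speech_recognition as sr

recognizer = sr.Recognizer()

def take_notes(outfile="lecture_notes.txt", language="en-US"):
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)   # calibrate once
        while True:                                   # stop with Ctrl-C
            audio = recognizer.listen(source, phrase_time_limit=15)
            try:
                text = recognizer.recognize_google(audio, language=language)
            except sr.UnknownValueError:
                continue                              # skip unintelligible audio
            with open(outfile, "a", encoding="utf-8") as f:
                f.write(text + "\n")
```

Passing a different BCP-47 code via `language` is one plausible hook for the language-detection feature the title mentions.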
Citations: 0
A Video Note Taking System to Make Online Video Learning Easier
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110917
Haocheng Han, Yu Sun
Recent coronavirus lockdowns have had a significant impact on how students study. As states shut down schools, millions of students are now required to study at home with pre-recorded videos. This, however, proves challenging, as teachers have no way of knowing whether students are paying attention to the videos, and students may easily be distracted from important parts of them. Currently, there is virtually no research or development of applications focused specifically on taking effective digital notes from videos. This paper introduces the web application we developed for streamlined, video-focused auto-schematic note-taking. We applied our application to school-related video lectures and conducted a qualitative evaluation of the approach. The results show that the tools increase productivity when taking notes from a video and are more effective and informative than conventional paper notes.
Citations: 1
Technique for Removing Unnecessary Superimposed Patterns from Image using Generative Network
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110902
K. Uehira, H. Unno
A technique for removing unnecessary patterns from captured images using a generative network is studied. The patterns, composed of lines and spaces, are superimposed onto the blue component of an RGB color image at capture time for the purpose of acquiring a depth map. The superimposed patterns become unnecessary once the depth map has been acquired. We attempted to remove these unnecessary patterns using a generative adversarial network (GAN) and an autoencoder (AE). The experimental results show that the patterns can be removed to the point of being invisible. They also show that the GAN performs much better than the AE, with a PSNR over 45 dB and an SSIM of about 0.99. From these results, we demonstrate the effectiveness of the technique with a GAN.
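The reported PSNR and SSIM are the standard full-reference image quality metrics; a short sketch of how such numbers can be reproduced with scikit-image (assuming 8-bit RGB arrays; the function name is illustrative):

```python
# Sketch: scoring a pattern-removal result against the clean original
# with the standard PSNR/SSIM metrics from scikit-image.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def score(clean: np.ndarray, restored: np.ndarray):
    """Both inputs are HxWx3 uint8 RGB images of the same size."""
    psnr = peak_signal_noise_ratio(clean, restored)              # in dB
    ssim = structural_similarity(clean, restored, channel_axis=-1)
    return psnr, ssim
```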
Citations: 0
Web Scraper Utilizes Google Street view Images to Power a University Tour
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110916
Peiyuan Sun, Yu Sun
Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, and many students have lost the opportunity to see their dream school's campus. To solve this problem, we developed a product called "Virtourgo," a university virtual-tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Challenges we met include the scraper collecting repeated pictures and making the HTML dropdown menu jump to the correct location. We solved these by implementing Python and JavaScript functions that specifically target them. Finally, after testing all the functions of the scraper and website, we confirmed that the system works as expected and can scrape and deliver tours of any university campus or public buildings we want.
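The paper's scraper internals aren't given beyond the challenges above; one hedged way to fetch frames and drop the repeated pictures it mentions is to hash the image bytes returned by Google's Street View Static API. A sketch under those assumptions (the API key, image size, and campus coordinates are placeholders):

```python
# Hedged sketch: fetch Street View frames along a campus walk and skip
# duplicates by hashing the raw bytes (one fix for repeated pictures).
import hashlib
import requests

API = "https://maps.googleapis.com/maps/api/streetview"
KEY = "YOUR_API_KEY"  # placeholder

def scrape_stops(stops, outdir="."):
    """stops: iterable of (lat, lng, heading) tuples along the tour route."""
    seen = set()
    for i, (lat, lng, heading) in enumerate(stops):
        params = {"size": "640x400", "location": f"{lat},{lng}",
                  "heading": heading, "key": KEY}
        img = requests.get(API, params=params, timeout=10).content
        digest = hashlib.sha256(img).hexdigest()
        if digest in seen:          # same panorama served twice: skip it
            continue
        seen.add(digest)
        with open(f"{outdir}/stop_{i}.jpg", "wb") as f:
            f.write(img)
```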
Citations: 0
An Intelligent System to Enhance Visually-Impaired Navigation and Disaster Assistance using Geo-Based Positioning and Machine Learning
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110907
Wen Liang, Ishmael Rico, Yu Sun
Technological advancement has brought society many conveniences it used to lack, but one population has gone largely unnoticed through the age of technology: the visually impaired. The visually impaired population has grown through the ages with as much desire as everyone else to explore, but often lacks the confidence and support to do so. Time has carried society into a new phase condensed in big data, but for the visually impaired, this quick-paced lifestyle, along with unpredictable natural disasters and the COVID-19 pandemic, has dropped them deeper into a feeling of disconnection from society. Our application uses the global positioning system to support the visually impaired in independent navigation, alerts them in the face of natural disasters, and reminds them to sanitize their devices during the COVID-19 pandemic.
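The abstract doesn't spell out how GPS fixes trigger alerts; the usual primitive for such checks is the haversine great-circle distance between two coordinates. A sketch with a hypothetical alert radius (not the authors' code):

```python
# Sketch: haversine distance between GPS fixes, the common primitive for
# "am I within R meters of a hazard or waypoint?" checks.
from math import asin, cos, radians, sin, sqrt

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two WGS84 points."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = radians(lat1), radians(lat2)
    dp, dl = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dp / 2) ** 2 + cos(p1) * cos(p2) * sin(dl / 2) ** 2
    return 2 * R * asin(sqrt(a))

def should_alert(user, hazard, radius_m=50.0):
    """user, hazard: (lat, lon) tuples; radius is a hypothetical threshold."""
    return haversine_m(*user, *hazard) <= radius_m
```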
Citations: 0
Adaptive Filtering Remote Sensing Image Segmentation Network based on Attention Mechanism
Pub Date : 2021-06-26 DOI: 10.5121/csit.2021.110903
Cong zhong Wu, Hao Dong, Xuan jie Lin, Han tong Jiang, L. Wang, Xin zhi Liu, Wei kai Shi
It is difficult to segment small objects and object edges in remote sensing imagery because of large scale variation, large intra-class variance of the background, and foreground-background imbalance. In convolutional neural networks, high-frequency signals may degenerate into completely different ones after downsampling; we define this phenomenon as aliasing. Meanwhile, although dilated convolution can expand the receptive field of the feature map, a much more complex background can cause serious false alarms. To alleviate these problems, we propose an adaptive filtering segmentation network based on an attention mechanism. Experimental results on the Deepglobe Road Extraction dataset and the Inria Aerial Image Labeling dataset show that our method effectively improves segmentation accuracy. The F1 scores on the two datasets reached 82.67% and 85.71%, respectively.
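The paper's exact attention module isn't reproduced here; as a hedged stand-in, a squeeze-and-excitation-style channel gate of the kind commonly used to re-weight feature maps looks like this in PyTorch (`ChannelGate` and the reduction ratio are illustrative, not the authors' design):

```python
# Hedged sketch: a squeeze-and-excitation-style channel attention gate,
# a generic stand-in for the paper's adaptive filtering module.
import torch
import torch.nn as nn

class ChannelGate(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.squeeze = nn.AdaptiveAvgPool2d(1)         # global spatial context
        self.excite = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                               # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.excite(self.squeeze(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                    # re-weight feature channels

# Usage: x = torch.randn(2, 64, 32, 32); ChannelGate(64)(x).shape == x.shape
```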
Citations: 0