Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111005
Ethan Wu, Jonathan Sahagun, Yu Sun
The advent and worldwide adoption of smartphones has enriched the lives of many people. However, one particular group -- the visually impaired -- still needs dedicated apps to help with daily life. I am therefore developing this Smart app specifically for the visually impaired, integrating the functions of Google Maps into it. While Google Maps works well as a GPS for people without any impairment, I am adding features to the Smart app so that it can guide users with impaired eyesight. For example, the app uses the smartphone's camera to steer the user toward the desired destination. By relying on a capability every phone already has (its camera), the Smart app can gently and safely guide a sight-impaired person to a predetermined destination on foot. One can think of the Smart app as an improvement upon Google Maps for the visually impaired.
{"title":"An Object Detection Navigator to Assist the Visually Impaired using Artificial Intelligence and Computer Vision","authors":"Ethan Wu, Jonathan Sahagun, Yu Sun","doi":"10.5121/CSIT.2021.111005","DOIUrl":"https://doi.org/10.5121/CSIT.2021.111005","url":null,"abstract":"The advent and worldwide adoption of smartphones has enriched the lives of many people. However, one particular group--the visually impaired--still need specific apps to help them with their daily lives. Thus, I’m developing this Smart app to specifically help the visually-impaired. Specifically, I hope to integrate the functions of Google Maps into the Smart App. While Google Maps functions well as a GPS for the average person without any impairment, I’m adding additional features to the Smart app so that it would guide the eye-sight impaired. For example, I will use the camera of the Smartphone to guide the user such that it would take the user to the desired destination. Thus, using the inherent functions (camera) of a phone, the Smart app can gently and safely guide any sight-impaired person to a predetermined destination by walking. One can think of Smart app as an improvement upon Google Maps -- for the visually impaired.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42443395","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111009
Xuqing Bai, Xueliang Li, Yindi Weng
Let G be a nontrivial link-colored connected network. A link-cut R of G is called a rainbow link-cut if no two of its links are colored the same. A link-colored network G is rainbow disconnected if for every two nodes u and v of G, there exists a u-v rainbow link-cut separating them. Such a link coloring is called a rainbow disconnection coloring of G. For a connected network G, the rainbow disconnection number of G, denoted by rd(G), is defined as the smallest number of colors that are needed in order to make G rainbow disconnected. Similarly, there are some other new concepts of network colorings, such as proper disconnection coloring, monochromatic disconnection coloring and rainbow node-disconnection coloring. In this paper, we obtain the exact values of the rainbow (node-)disconnection numbers, proper and monochromatic disconnection numbers of cellular networks and grid networks, respectively.
{"title":"The Colored Disconnection Numbers of Cellular and Grid Networks","authors":"Xuqing Bai, Xueliang Li, Yindi Weng","doi":"10.5121/CSIT.2021.111009","DOIUrl":"https://doi.org/10.5121/CSIT.2021.111009","url":null,"abstract":"Let G be a nontrivial link-colored connected network. A link-cut R of G is called a rainbow link-cut if no two of its links are colored the same. A link-colored network G is rainbow disconnected if for every two nodes u and v of G, there exists a u-v rainbow link-cut separating them. Such a link coloring is called a rainbow disconnection coloring of G. For a connected network G, the rainbow disconnection number of G, denoted by rd(G), is defined as the smallest number of colors that are needed in order to make G rainbow disconnected. Similarly, there are some other new concepts of network colorings, such as proper disconnection coloring, monochromatic disconnection coloring and rainbow node-disconnection coloring. In this paper, we obtain the exact values of the rainbow (node-)disconnection numbers, proper and monochromatic disconnection numbers of cellular networks and grid networks, respectively.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45850773","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111004
A. Arslan
This study introduces a discussion platform and curriculum designed to help people understand how machines learn. It shows how an agent can be trained through dialogue and how the information it represents can be understood through visualization. The paper first provides a comprehensive definition of AI literacy based on existing research, then distills documents from a wide range of subjects into a set of key AI literacy skills for developing user-centered AI. These functional and structural considerations are organized into a conceptual framework grounded in the literature. The contributions of this paper can be used to initiate discussion and guide future research on AI learning within the computer science community.
{"title":"A Development Framework for a Conversational Agent to Explore Machine Learning Concepts","authors":"A. Arslan","doi":"10.5121/CSIT.2021.111004","DOIUrl":"https://doi.org/10.5121/CSIT.2021.111004","url":null,"abstract":"This study aims to introduce a discussion platform and curriculum designed to help people understand how machines learn. Research shows how to train an agent through dialogue and understand how information is represented using visualization. This paper starts by providing a comprehensive definition of AI literacy based on existing research and integrates a wide range of different subject documents into a set of key AI literacy skills to develop a user-centered AI. This functionality and structural considerations are organized into a conceptual framework based on the literature. Contributions to this paper can be used to initiate discussion and guide future research on AI learning within the computer science community.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46114348","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-07-10 | DOI: 10.5121/CSIT.2021.111007
Emmanuel Ahishakiye
Languages do not always use specific perception words to refer to specific senses. A word from one sense can metaphorically express another physical perception meaning. For Kirundi, findings from a corpus-based analysis reveal cross-modal polysemy and a bidirectional hierarchy between higher and lower senses. The attested multisensory use of the auditory verb kwûmva ‘hear’ allows us to reduce the sense modalities to two: vision and audition. Moreover, the auditory experience verb kwûmva ‘hear’ shows that lower senses can extend to higher senses through synaesthetic metaphor (e.g. kwûmva akamōto ‘lit: hear a smell’, ururírīmbo ruryōshé ‘lit: a tasty song’, ururirimbo ruhimbâye ‘lit: a pleasant song’). However, in collocations involving emotion words, it connects perception to emotion (e.g. kwûmva inzara ‘lit: hear hunger’, kwûmva umunêzēro ‘lit: hear happiness’). This association indicates that perception in Kirundi draws on both internal and external stimuli, so feelings can be considered part of the perception system.
{"title":"Cross-modal Perception in Kirundi","authors":"Emmanuel Ahishakiye","doi":"10.5121/CSIT.2021.111007","DOIUrl":"https://doi.org/10.5121/CSIT.2021.111007","url":null,"abstract":"Languages do not always use specific perception words to refer to specific senses. A word from one sense can metaphorically express another physical perception meaning. For Kirundi, findings from a corpus-based analysis revealed a cross-modal polysemy and a bidirectional hierarchy between higher and lower senses. The attested multisensory expression of auditory verb kwûmva ‘hear’ allows us to reduce sense modalities to two –vision and audition. Moreover, the auditory experience verb kwûmva ‘hear’ shows that lower senses can extend to higher senses through the use of synaesthetic metaphor (e.g. kwûmva akamōto ‘lit:hear a smell’/ururírīmbo ruryōshé ‘lit: a tasty song’/ururirimbo ruhimbâye ‘lit: a pleasant song). However, in collocations involving emotion words, it connects perception to emotion (e.g.; kwûmva inzara ‘lit: hear hunger’, kwûmva umunêzēro ‘lit: hear happiness’). This association indicates that perception in Kirundi gets information from both internal and external stimuli. Thus, considering feelings as part of the perception system.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-07-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44613398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110918
Ying Ma, Yu Sun
Students in international classroom settings face difficulties comprehending and writing down the information shared with them, which causes unnecessary frustration and misunderstanding. However, utilizing digital aids to record and store that information can alleviate these issues and support comprehension by providing additional means of study and reinforcement. This paper presents an application that actively listens and writes down notes for students as teachers instruct the class. We applied our application to multiple class settings and company meetings and conducted a qualitative evaluation of the approach.
{"title":"Rational Mobile Application to Detect Language and Compose Annotations: Notespeak App","authors":"Ying Ma, Yu Sun","doi":"10.5121/csit.2021.110918","DOIUrl":"https://doi.org/10.5121/csit.2021.110918","url":null,"abstract":"Students in international classroom settings face difficulties comprehending and writing down data shared with them, which causes unnecessary frustration and misunderstanding. However, utilizing digital aids to record and store data can alleviate these issues and ensure comprehension by providing other means of studying/reinforcement. This paper presents an application to actively listen and write down notes for students as teachers instruct class. We applied our application to multiple class settings and company meetings, and conducted a qualitative evaluation of the approach.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44795600","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110917
Haocheng Han, Yu Sun
Recent coronavirus lockdowns have had a significant impact on how students study. As states shut down schools, millions of students are now required to study at home with pre-recorded videos. This proves challenging, as teachers have no way of knowing whether students are paying attention to the videos, and students may easily be distracted from important parts of them. Currently, there is virtually no research or development of applications focused specifically on taking digital notes from videos effectively. This paper introduces the web application we developed for streamlined, video-focused, auto-schematic note-taking. We applied our application to school-related video lectures and conducted a qualitative evaluation of the approach. The results show that the tools increase productivity when taking notes from a video and are more effective and informative than conventional paper notes.
{"title":"A Video Note Taking System to Make Online Video Learning Easier","authors":"Haocheng Han, Yu Sun","doi":"10.5121/csit.2021.110917","DOIUrl":"https://doi.org/10.5121/csit.2021.110917","url":null,"abstract":"Recent coronavirus lockdowns have had a significant impact on how students study. As states shut down schools, millions of students are now required to study at home with pre-recorded videos. This, however, proves challenging, as teachers have no way of knowing whether or not students are paying attention to the videos, and students may be easily distracted from important parts of the videos. Currently, there is virtually no research and development of applications revolving specifically around the subject of effectively taking digital notes from videos. This paper introduces the web application we developed for streamlined, video-focused auto-schematic note-taking. We applied our application to school-related video lectures and conducted a qualitative evaluation of the approach. The results show that the tools increase productivity when taking notes from a video, and are more effective and informational than conventional paper notes.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47482499","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110902
K. Uehira, H. Unno
A technique for removing unnecessary patterns from captured images using a generative network is studied. The patterns, composed of lines and spaces, are superimposed onto the blue component of an RGB color image at capture time for the purpose of acquiring a depth map. The superimposed patterns become unnecessary once the depth map has been acquired. We attempted to remove these patterns using a generative adversarial network (GAN) and an autoencoder (AE). The experimental results show that both the GAN and the AE can remove the patterns to the point of being invisible. They also show that the GAN performs much better than the AE, with a PSNR above 45 dB and an SSIM of about 0.99. From these results, we demonstrate the effectiveness of the technique with a GAN.
{"title":"Technique for Removing Unnecessary Superimposed Patterns from Image using Generative Network","authors":"K. Uehira, H. Unno","doi":"10.5121/csit.2021.110902","DOIUrl":"https://doi.org/10.5121/csit.2021.110902","url":null,"abstract":"A technique for removing unnecessary patterns from captured images by using a generative network is studied. The patterns, composed of lines and spaces, are superimposed onto a blue component image of RGB color image when the image is captured for the purpose of acquiring a depth map. The superimposed patterns become unnecessary after the depth map is acquired. We tried to remove these unnecessary patterns by using a generative adversarial network (GAN) and an auto encoder (AE). The experimental results show that the patterns can be removed by using a GAN and AE to the point of being invisible. They also show that the performance of GAN is much higher than that of AE and that its PSNR and SSIM were over 45 and about 0.99, respectively. From the results, we demonstrate the effectiveness of the technique with a GAN.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"46864543","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110916
Peiyuan Sun, Yu Sun
Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, so many students have lost the opportunity to see their dream school’s campus. To solve this problem, we developed a product called “Virtourgo,” a university virtual tour website that uses Google Street View images gathered by a web scraper, allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of four parts: the web scraper script, the GitHub server, the Google Domains DNS server, and the HTML files. Some challenges we met include scraping repeated pictures and making the HTML dropdown menu jump to the correct location. We solved these by implementing Python and JavaScript functions that specifically target such challenges. Finally, after testing all the functions of the web scraper and website, we confirmed that they work as expected and can scrape and deliver tours of any university campus or public building we want.
{"title":"Web Scraper Utilizes Google Street view Images to Power a University Tour","authors":"Peiyuan Sun, Yu Sun","doi":"10.5121/csit.2021.110916","DOIUrl":"https://doi.org/10.5121/csit.2021.110916","url":null,"abstract":"Due to the outbreak of the Covid-19 pandemic, college tours are no longer available, so many students have lost the opportunity to see their dream school’s campus. To solve this problem, we developed a product called “Virtourgo,” a university virtual tour website that uses Google Street View images gathered from a web scraper allowing students to see what college campuses are like even when tours are unavailable during the pandemic. The project consists of 3/4 parts: the web scraper script, the GitHub server, the Google Domains DNS Server, and the HTML files. Some challenges we met include scraping repeated pictures and letting the HTML dropdown menu jump to the correct location. We solved these by implementing Python and Javascript functions that specifically target such challenges. Finally, after experimenting with all the functions of the web scraper and website, we confirmed that it works as expected and can scrape and deliver tours of any university campus or public buildings we want.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42987368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110907
Wen Liang, Ishmael Rico, Yu Sun
Technological advancement has brought society many conveniences it used to lack, yet one population has gone largely unnoticed and neglected through the age of technology: the visually impaired. Visually impaired people have as much desire as everyone else to explore, but often lack the confidence and support to do so. Society has moved into a new phase built on big data, but for the visually impaired, this fast-paced lifestyle, along with unpredictable natural disasters and the COVID-19 pandemic, has deepened their feeling of disconnection from society. Our application uses the global positioning system to support the visually impaired in independent navigation, alerts them in the face of natural disasters, and reminds them to sanitize their devices during the COVID-19 pandemic.
{"title":"An Intelligent System to Enhance Visually-Impaired Navigation and Disaster Assistance using Geo-Based Positioning and Machine Learning","authors":"Wen Liang, Ishmael Rico, Yu Sun","doi":"10.5121/csit.2021.110907","DOIUrl":"https://doi.org/10.5121/csit.2021.110907","url":null,"abstract":"Technological advancement has brought many the convenience that the society used to lack, but unnoticed by many, a population neglected through the age of technology has been the visually impaired population. The visually impaired population has grown through ages with as much desire as everyone else to adventure but lack the confidence and support to do so. Time has transported society to a new phase condensed in big data, but to the visually impaired population, this quick-pace living lifestyle, along with the unpredictable natural disaster and COVID-19 pandemic, has dropped them deeper into a feeling of disconnection from the society. Our application uses the global positioning system to supportthe visually impaired in independent navigation, alerts them in face of natural disasters, and remindsthem to sanitize their devices during the COVID-19 pandemic.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"49015758","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2021-06-26 | DOI: 10.5121/csit.2021.110903
Cong zhong Wu, Hao Dong, Xuan jie Lin, Han tong Jiang, L. Wang, Xin zhi Liu, Wei kai Shi
It is difficult to segment small objects and object edges in remote sensing imagery because of large scale variation, large intra-class variance of the background, and foreground-background imbalance. In convolutional neural networks, high-frequency signals may degenerate into completely different ones after downsampling; we refer to this phenomenon as aliasing. Meanwhile, although dilated convolution can expand the receptive field of a feature map, a much more complex background can cause serious false alarms. To alleviate these problems, we propose an adaptive filtering segmentation network based on an attention mechanism. Experimental results on the DeepGlobe Road Extraction dataset and the Inria Aerial Image Labeling dataset show that our method effectively improves segmentation accuracy. The F1 scores on the two datasets reached 82.67% and 85.71%, respectively.
{"title":"Adaptive Filtering Remote Sensing Image Segmentation Network based on Attention Mechanism","authors":"Cong zhong Wu, Hao Dong, Xuan jie Lin, Han tong Jiang, L. Wang, Xin zhi Liu, Wei kai Shi","doi":"10.5121/csit.2021.110903","DOIUrl":"https://doi.org/10.5121/csit.2021.110903","url":null,"abstract":"It is difficult to segment small objects and the edge of the object because of larger-scale variation, larger intra-class variance of background and foreground-background imbalance in the remote sensing imagery. In convolutional neural networks, high frequency signals may degenerate into completely different ones after downsampling. We define this phenomenon as aliasing. Meanwhile, although dilated convolution can expand the receptive field of feature map, a much more complex background can cause serious alarms. To alleviate the above problems, we propose an attention-based mechanism adaptive filtered segmentation network. Experimental results on the Deepglobe Road Extraction dataset and Inria Aerial Image Labeling dataset showed that our method can effectively improve the segmentation accuracy. The F1 value on the two data sets reached 82.67% and 85.71% respectively.","PeriodicalId":72673,"journal":{"name":"Computer science & information technology","volume":" ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-06-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47227971","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}