Toward Affordable and Practical Home Context Recognition: Framework and Implementation with Image-based Cognitive API
Sinan Chen, S. Saiki, Masahide Nakamura
Pub Date: 2019-11-01 | DOI: 10.2991/ijndc.k.191118.001
To provide affordable context recognition for general households, this paper presents a novel technique that integrates an image-based cognitive Application Programming Interface (API) and lightweight machine learning. Our key idea is to regard every image as a document by exploiting the “tags” derived by the API. We first present a framework that specifies a common workflow for machine-learning-based home context recognition. We then propose a pragmatic method that implements the framework using the “image-as-a-document” approach.
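The abstract describes the workflow only in prose; as a rough illustration, the Python sketch below shows one way the “image-as-a-document” idea could be realized, assuming a hypothetical get_image_tags() wrapper around an image-tagging cognitive API and a scikit-learn bag-of-words pipeline with a lightweight Naive Bayes classifier. It is an illustration of the general workflow, not the authors' implementation.

```python
# Sketch: treat each image as a "document" made of API-derived tags,
# then train a lightweight classifier over the tag documents.
# get_image_tags() is a stub standing in for a real cognitive-API call
# (a vision service returning tags such as "person", "table", "food").
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

def get_image_tags(image_path: str) -> list[str]:
    # Placeholder: replace with a real API call; canned tags for illustration.
    canned = {
        "img_001.jpg": ["person", "table", "plate", "food"],
        "img_002.jpg": ["person", "sofa", "television", "screen"],
    }
    return canned.get(image_path, ["person", "table", "food"])

def tags_to_document(image_path: str) -> str:
    # Join the tags into a whitespace-separated "document".
    return " ".join(get_image_tags(image_path))

# Training data: image paths labeled with home contexts (placeholders).
train_images = ["img_001.jpg", "img_002.jpg"]
train_labels = ["eating", "watching-tv"]

documents = [tags_to_document(p) for p in train_images]

# Bag-of-tags features + lightweight Naive Bayes classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(documents, train_labels)

# Recognize the context of a new image from its tag document.
print(model.predict([tags_to_document("img_new.jpg")]))
```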
{"title":"Toward Affordable and Practical Home Context Recognition: -Framework and Implementation with Image-based Cognitive API-","authors":"Sinan Chen, S. Saiki, Masahide Nakamura","doi":"10.2991/ijndc.k.191118.001","DOIUrl":"https://doi.org/10.2991/ijndc.k.191118.001","url":null,"abstract":"To provide affordable context recognition for general households, this paper presents a novel technique that integrate image-based cognitive Application Program Interface (API) and light-weight machine learning. Our key idea is to regard every image as a document by exploiting “tags” derived by the API. We first present a framework that specifies a common workflow of the machine-learning-based home context recognition. We then propose a pragmatic method that implements the framework using the “image-as-a-document” approach.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132771539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Transitional Method for Identifying Improvements in Video Distribution Services
M. Iwashita
Pub Date: 2019-09-01 | DOI: 10.2991/ijndc.k.190911.001
All companies make great efforts to retain customers, using Customer Satisfaction (CS) as an indicator to identify improvements in goods and services. It is therefore important to determine how to improve CS adequately. Consumers’ responses to questionnaires are generally used to evaluate CS. A simple method of identifying improvement factors for CS is to select the factors with high dissatisfaction scores at a given point in time. However, these factors change rapidly, especially in the information and communication technology (ICT) field, owing to rapid technological development, competition, and shifts in the business environment. Companies should therefore adequately capture changes in customers’ perception of their ICT services.
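As a hedged illustration of the “high dissatisfaction score” selection mentioned above, the Python sketch below ranks factors by dissatisfaction in each survey period and by the growth of dissatisfaction between periods. The survey layout and scoring rule are assumptions made for illustration, not the paper's actual method or data.

```python
# Sketch: rank candidate improvement factors by dissatisfaction score and
# look at how the ranking shifts between two survey periods.
# The data layout (factor -> list of 1-5 satisfaction ratings per period)
# is an illustrative assumption.
from statistics import mean

surveys = {
    "2018": {"picture quality": [4, 3, 5, 4], "price": [2, 1, 3, 2], "catalog size": [3, 3, 4, 2]},
    "2019": {"picture quality": [4, 4, 5, 4], "price": [3, 2, 3, 3], "catalog size": [2, 2, 3, 1]},
}

def dissatisfaction(ratings, max_score=5):
    # Higher value = more dissatisfied (distance from the maximum rating).
    return max_score - mean(ratings)

for period, factors in surveys.items():
    scores = {f: dissatisfaction(r) for f, r in factors.items()}
    print(period, sorted(scores.items(), key=lambda kv: kv[1], reverse=True))

# Factors whose dissatisfaction grew between periods are transitional
# candidates for improvement, rather than a single-period snapshot.
growth = {
    f: dissatisfaction(surveys["2019"][f]) - dissatisfaction(surveys["2018"][f])
    for f in surveys["2018"]
}
print(sorted(growth.items(), key=lambda kv: kv[1], reverse=True))
```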
{"title":"Transitional Method for Identifying Improvements in Video Distribution Services","authors":"M. Iwashita","doi":"10.2991/ijndc.k.190911.001","DOIUrl":"https://doi.org/10.2991/ijndc.k.190911.001","url":null,"abstract":"All companies make great efforts to retain customers, using Customer Satisfaction (CS) as an indicator to identify improvements in goods/services. Therefore, it is important to determine how to adequately improve CS. Consumers’ responses to questionnaires are generally used to evaluate CS. A simple method of identifying improvement factors for CS is to select those factors with high dissatisfaction scores at a given point of time. However, these factors change rapidly, especially in the information and communication technology (ICT) field, due to the rapid technological developments, competition, and the business environment. Thus, companies should adequately capture customers’ changes in perception of their ICT services.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127101462","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Searching for Essential API Member Sets based on Inclusion Relation Extraction
Y. Kondoh, Masashi Nishimoto, Keiji Nishiyama, Hideyuki Kawabata, T. Hironaka
Pub Date: 2019-09-01 | DOI: 10.2991/ijndc.k.190911.002
Search tools for Application Programming Interface (API) usage patterns extracted from open-source repositories could provide useful information for application developers. Unlike ordinary document retrieval, API member sets obtained by mining are often similar to each other and are mixtures of several unimportant and/or irrelevant elements. An API member set search tool therefore needs to be able to extract the essential part of each API member set and to offer an efficient search interface. We propose a method to improve the searchability of API member sets by utilizing inclusion graphs among API member sets that are automatically extracted from source code. The proposed method incorporates frequent pattern mining to obtain the inclusion graphs and offers the user a way to search for appropriate API member sets smoothly and intuitively through a GUI. In this paper, we describe the details of our method and the design and implementation of a prototype, and we discuss the usability of the proposed tool.
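As an illustration of the mining step described above, the following Python sketch mines frequently co-used API member sets from toy “transactions” (each transaction is the set of API members used in one code fragment) and links the mined sets by inclusion relations. The data and the brute-force miner are illustrative stand-ins, not the authors' implementation.

```python
# Sketch: mine frequent API member sets from usage transactions, then
# connect the mined sets by inclusion (subset) edges for browsing.
# The toy transactions below are illustrative, not mined from a real repository.
from itertools import combinations

transactions = [
    {"File.open", "File.read", "File.close"},
    {"File.open", "File.read", "File.close", "Logger.info"},
    {"File.open", "File.write", "File.close"},
    {"Socket.connect", "Socket.send", "Socket.close"},
]

def frequent_itemsets(transactions, min_support=2, max_size=4):
    items = sorted(set().union(*transactions))
    frequent = []
    for size in range(1, max_size + 1):
        for candidate in combinations(items, size):
            support = sum(1 for t in transactions if set(candidate) <= t)
            if support >= min_support:
                frequent.append((frozenset(candidate), support))
    return frequent

def inclusion_edges(itemsets):
    # Edge (a, b) means member set a is strictly contained in member set b.
    sets = [s for s, _ in itemsets]
    return [(a, b) for a in sets for b in sets if a < b]

mined = frequent_itemsets(transactions)
for smaller, larger in inclusion_edges(mined):
    print(sorted(smaller), "->", sorted(larger))
```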
{"title":"Efficient Searching for Essential API Member Sets based on Inclusion Relation Extraction","authors":"Y. Kondoh, Masashi Nishimoto, Keiji Nishiyama, Hideyuki Kawabata, T. Hironaka","doi":"10.2991/ijndc.k.190911.002","DOIUrl":"https://doi.org/10.2991/ijndc.k.190911.002","url":null,"abstract":"Search tools for Application Programming Interface ( API ) usage patterns extracted from open source repositories could provide useful information for application developers. Unlike ordinary document retrieval, API member sets obtained by mining are often similar to each other and are mixtures of several unimportant and/or irrelevant elements. Thus, an API member set search tool needs to have the ability to extract an essential part of each API member set and to be equipped with an efficient searching interface. We propose a method to improve the searchability of API member sets by utilizing inclusion graphs among API member sets that are automatically extracted from source code. The proposed method incorporates the frequent pattern mining to obtain inclusion graphs and offers the user a way to search appropriate API member sets smoothly and intuitively by using a GUI. In this paper, we describe the details of our method and the design and implementation of the prototype and discuss the usability of the proposed tool.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128771524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
SAIFU: Supporting Program Understanding by Automatic Indexing of Functionalities in Source Code
Masashi Nishimoto, Keiji Nishiyama, Hideyuki Kawabata, T. Hironaka
Pub Date: 2019-09-01 | DOI: 10.2991/ijndc.k.190917.002
Application software is a complex mixture of functionalities. One source of this complexity is the event-driven style of software for mobile and Web applications, in which each functionality constituting the software is implemented by combining descriptions scattered throughout the source code; that is, individual functionalities are not clearly separated in the code. Such structural complexity is a serious obstacle to the smooth and safe modification and maintenance of software.
{"title":"SAIFU: Supporting Program Understanding by Automatic Indexing of Functionalities in Source Code","authors":"Masashi Nishimoto, Keiji Nishiyama, Hideyuki Kawabata, T. Hironaka","doi":"10.2991/ijndc.k.190917.002","DOIUrl":"https://doi.org/10.2991/ijndc.k.190917.002","url":null,"abstract":"Application software is a complex mixture of functionalities. The complexity is, for one thing, due to the event-driven style of software for mobile and/or Web applications where each functionality constituting the software is implemented by combining descriptions that are scattered all over the source code, i.e., each functionality is not clearly separated in the source code. Such complexity of software structure is a serious obstacle to the smooth and safe modification and maintenance of software.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121993634","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Do You Like Sclera? Sclera-region Detection and Colorization for Anime Character Line Drawings
M. Aizawa, Y. Sei, Yasuyuki Tahara, R. Orihara, Akihiko Ohsuga
Pub Date: 2019-08-01 | DOI: 10.2991/IJNDC.K.190711.001
Line drawing colorization is an important process in creating artwork such as animation, illustrations, and color manga. Many artists color their work manually, a process that requires considerable time and effort. In addition, colorizing requires special skills, experience, and knowledge, which makes such work difficult for beginners. As a result, automated line drawing colorization methods are in significant market demand. However, artistic works remain difficult to paint automatically. Many automated colorization methods have been developed, but several problems arise in art colorized with them. For example, a region may be given a different color from another region that should be painted the same color, or a mismatch may occur between the input line drawing and the colorization result owing to difficulty in interpreting the sketch, the inclusion of undesirable artifacts, and other issues.

Anime characters’ eyes are drawn in various styles, depending on the artist’s preferences, and in some styles the eyes are highly abstract. In addition, in grayscale line drawings the skin and the sclera are in many cases both expressed in white, so the boundary between them cannot always be determined by existing automated colorization techniques. As a result, the sclera is often painted the same color as the skin, producing a mismatch between these regions in the line drawing and the colorization result. Facial features are important in artworks that depict people, and excessive ambiguity at the boundary between the eyes and the skin can impair quality. Sclera-region detection is therefore expected to improve the accuracy of automated colorization of grayscale line drawings of people.

This paper focuses on inconsistencies in the sclera region between line drawings and colorization results; we aim to match the structure of the line drawing and the colorization result by detecting the sclera regions in grayscale line drawings of people, thereby improving the accuracy of automated colorization (Figure 1). In our proposed framework, we perform machine learning on pairs of a line drawing and a mask image in which the sclera regions are labeled, to create semantic segmentation models of the sclera regions. To colorize a line drawing, the semantic segmentation model detects the sclera regions, and we apply these regions to the automated colorization result. As a result, our framework maintains the correct sclera-region color, and sclera regions can be detected without requiring the user to add hints. In this paper, we propose two mask-image creation methods: a manual type and a graph-cut type. Compared with the manual type, the graph-cut type reduces the burden on the mask-image creator.
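As a rough illustration of the final step of the framework, applying detected sclera regions to a colorization result, the following Python sketch overwrites the masked pixels with a sclera color. The placeholder segment_sclera() stands in for the trained semantic segmentation model, and all shapes and colors are assumptions made for illustration.

```python
# Sketch: apply a detected sclera mask to an automatically colorized image so
# that the sclera keeps its intended (near-white) color instead of the skin color.
import numpy as np

def segment_sclera(line_drawing: np.ndarray) -> np.ndarray:
    # Placeholder for a trained segmentation model: 1 = sclera pixel, 0 = other.
    mask = np.zeros(line_drawing.shape[:2], dtype=np.uint8)
    mask[40:48, 30:50] = 1   # fake eye region for illustration
    return mask

def enforce_sclera_color(colorized: np.ndarray, mask: np.ndarray,
                         sclera_color=(245, 245, 245)) -> np.ndarray:
    # Overwrite the pixels inside the sclera mask with the sclera color.
    result = colorized.copy()
    result[mask.astype(bool)] = sclera_color
    return result

line_drawing = np.full((128, 128), 255, dtype=np.uint8)          # dummy grayscale input
colorized = np.random.randint(0, 256, (128, 128, 3), np.uint8)   # dummy colorization result
fixed = enforce_sclera_color(colorized, segment_sclera(line_drawing))
```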
{"title":"Do You Like Sclera? Sclera-region Detection and Colorization for Anime Character Line Drawings","authors":"M. Aizawa, Y. Sei, Yasuyuki Tahara, R. Orihara, Akihiko Ohsuga","doi":"10.2991/IJNDC.K.190711.001","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190711.001","url":null,"abstract":"Line drawing colorization is an important process in creating artwork such as animation, illustrations and color manga. Many artists color work manually, a process that requires considerable time and effort. In addition, colorizing requires special skills, experience, and knowledge, and this makes such work difficult for beginners. As a result, automated line drawing colorizing methods have significant market demand. However, it is difficult to paint artistic works. Many automated colorizing methods have been developed, but several problems arise in art colorized using these methods. For example, colors may be different in a region that should be painted the same color as another region, or a mismatch may occur between the input line drawing and the colorizing result due to difficulty in understanding the sketches, the inclusion of undesirable artifacts, and other issues. Anime character’s eyes are drawn in various styles, depending on the artists’ preferences. In some styles, eyes are overly abstract. In addition, in grayscale line drawings, the skin and sclera are both expressed in white in many cases. Therefore, the boundaries cannot always be determined using existing automated colorizing techniques. As a result, sclera are often painted the same color as the skin, and there is a mismatch between these regions in the line drawing and the colorizing results. Facial features are important in artworks that depict people, and excessive ambiguity at the boundary between the eyes and the skin may impair quality. Therefore, it is expected that sclera-region detection should improve the accuracy of automated colorizing of grayscale line drawings of people. This paper focuses on inconsistencies in the sclera region between line drawings and colorizing results; we aim to match the structure of line drawings and colorizing results by detecting the sclera regions in grayscale line drawings of people to improve the accuracy of automated colorizing (Figure 1). In our proposed framework, we perform machine learning using a pair of line drawings and a mask image. The sclera regions are labeled to create semantic segmentation models of the sclera regions. Then, to colorize the line drawing, the semantic segmentation models detect the sclera regions, and we apply these regions to the automated colorizing result. As a result, our framework maintains the correct sclera-region color. When using the semantic segmentation model, it is possible to detect sclera regions without requiring the user to add hints. In this paper, we propose two mask image creation methods: the manual type and the graph cut type. Compared with the manual type, the graph cut type can reduce the mask image creator’s burden.","PeriodicalId":318936,"journal":{"name":"Int. J. 
Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123753452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Defects and Vulnerabilities in Smart Contracts, a Classification using the NIST Bugs Framework
Wesley Dingman, Aviel Cohen, N. Ferrara, Adam Lynch, P. Jasinski, P. Black, Lin Deng
Pub Date: 2019-08-01 | DOI: 10.2991/IJNDC.K.190710.003
The blockchain is analogous to a distributed ledger of transactions that is programmed to record the transfer and storage of anything of value [1]. Each computer connected to the network acts as a node, receiving a copy of the blockchain and functioning as an “administrator” on the network, continually verifying data and ensuring security within the platform. The fundamental principle behind this technology is that the distributed network it operates on minimizes the single point of vulnerability characteristic of a centralized database. While seemingly infallible, the technology has nonetheless been exploited by financially motivated attackers. In the most famous instance, known as the DAO bug, an attacker exploited a “re-entrancy” vulnerability in an Ethereum smart contract to steal roughly 60 million US$ [2]. For our research, we focus on the Ethereum blockchain, presently the second most popular cryptocurrency with a market valuation of roughly 13 billion US$ [3].
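Smart contracts are normally written in Solidity, but as a language-agnostic illustration of the re-entrancy pattern mentioned above, the following Python sketch models a vault that sends funds before updating its balance, which lets a malicious receiver re-enter withdraw() and drain more than it deposited. It is a toy model under stated assumptions, not Ethereum code.

```python
# Sketch: a toy simulation of the re-entrancy pattern behind the DAO bug.
# Sending funds *before* zeroing the balance allows the receiver's callback
# to call withdraw() again while its balance still looks non-zero.
class VulnerableVault:
    def __init__(self):
        self.balances = {}
        self.ether = 0

    def deposit(self, who, amount):
        self.balances[who] = self.balances.get(who, 0) + amount
        self.ether += amount

    def withdraw(self, who, receive_callback):
        amount = self.balances.get(who, 0)
        if amount > 0 and self.ether >= amount:
            self.ether -= amount
            receive_callback(amount)          # external call happens first...
            self.balances[who] = 0            # ...state is updated only afterwards

class Attacker:
    def __init__(self, vault):
        self.vault, self.stolen = vault, 0

    def receive(self, amount):
        self.stolen += amount
        if self.vault.ether >= amount:        # re-enter while balance is still non-zero
            self.vault.withdraw("attacker", self.receive)

vault = VulnerableVault()
vault.deposit("victim", 90)
vault.deposit("attacker", 10)
attacker = Attacker(vault)
vault.withdraw("attacker", attacker.receive)
print(attacker.stolen)   # 100: far more than the attacker's 10-unit deposit
```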
{"title":"Defects and Vulnerabilities in Smart Contracts, a Classification using the NIST Bugs Framework","authors":"Wesley Dingman, Aviel Cohen, N. Ferrara, Adam Lynch, P. Jasinski, P. Black, Lin Deng","doi":"10.2991/IJNDC.K.190710.003","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190710.003","url":null,"abstract":"The blockchain is analogous to a distributed ledger of transactions that is programmed to record the transfer and storage of anything of value [1]. Each computer connected to the network in the system acts as a node, receiving a copy of the blockchain and functioning as an “administrator” on the network, continually verifying data and ensuring security within the platform. The fundamental principle behind this technology is that the distributed network it operates on minimizes the risk of a single vulnerability point characteristic of a centralized database. While seemingly infallible, this technology has still been subject to exploitation by financially motivated attackers. The most famous instance, known as the DAO bug, occurred when an attacker utilized a “re-entrancy” vulnerability within an Ethereum smart contract that succeeded in stealing 60 million US$ [2]. For our research, we have decided to focus our attention on the Ethereum blockchain, presently the second most popular cryptocurrency with a current market valuation of roughly 13 billion US$ [3].","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130637687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Schema Matching for String Attribute with Word Vectors and its Evaluation
K. Nozaki, T. Hochin, Hiroki Nomiya
Pub Date: 2019-07-01 | DOI: 10.2991/IJNDC.K.190710.001
Instance-based schema matching determines correspondences between heterogeneous databases by comparing their instances. Heterogeneous databases consist of an enormous number of tables containing various attributes, which causes data heterogeneity. In such cases, it is effective to consider semantic information. In this paper, we propose instance-based schema matching that considers the semantics of attributes. We use Word2Vec to match attributes whose values are character strings. The results show that matchings between attributes with high semantic similarity can be detected.
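As an illustration of matching string attributes with word vectors, the following Python sketch averages the vectors of each attribute's instance values and pairs attributes by cosine similarity. The tiny hand-made embedding table stands in for pretrained Word2Vec vectors, and the maximum-similarity matching rule is an assumption, not necessarily the paper's exact procedure.

```python
# Sketch: match string attributes of two schemas by comparing the averaged
# word vectors of their instance values.
import numpy as np

embeddings = {   # toy 3-dimensional "word vectors"
    "tokyo": np.array([0.9, 0.1, 0.0]), "osaka": np.array([0.8, 0.2, 0.0]),
    "red":   np.array([0.0, 0.9, 0.1]), "blue":  np.array([0.1, 0.8, 0.1]),
}

def attribute_vector(values):
    # Average the vectors of the attribute's instance values (ignore unknown words).
    vecs = [embeddings[v.lower()] for v in values if v.lower() in embeddings]
    return np.mean(vecs, axis=0) if vecs else np.zeros(3)

def cosine(a, b):
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom else 0.0

schema_a = {"city": ["Tokyo", "Osaka"], "color": ["Red", "Blue"]}
schema_b = {"location": ["Osaka", "Tokyo"], "paint": ["Blue", "Red"]}

for name_a, values_a in schema_a.items():
    best = max(schema_b, key=lambda name_b: cosine(attribute_vector(values_a),
                                                   attribute_vector(schema_b[name_b])))
    print(name_a, "matches", best)
```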
{"title":"Semantic Schema Matching for String Attribute with Word Vectors and its Evaluation","authors":"K. Nozaki, T. Hochin, Hiroki Nomiya","doi":"10.2991/IJNDC.K.190710.001","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190710.001","url":null,"abstract":"Instance-based schema matching is to determine the correspondences between heterogeneous databases by comparing instances. Heterogeneous databases consist of an enormous number of tables containing various attributes, causing the data heterogeneity. In such cases, it is effective to consider semantic information. In this paper, we propose the instance-based schema matching considering attributes’ semantics. We used Word2Vec to match attributes of character strings. The result shows a possibility to detect matching between attributes with high semantic similarity.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121870178","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
“Never fry carrots without chopping”: Generating Cooking Recipes from Cooking Videos Using Deep Learning Considering Previous Process
T. Fujii, Y. Sei, Yasuyuki Tahara, R. Orihara, Akihiko Ohsuga
Pub Date: 2019-07-01 | DOI: 10.2991/IJNDC.K.190710.002
Automatic captioning, which describes the content of images and videos in natural language, has important applications in areas such as search technology. Captioning can also assist with understanding content: reading captions can deepen understanding in a short time. Among captioning models that use deep learning, the encoder–decoder model [1] has produced considerable results and attracted attention, but many existing studies consider only the consistency of contiguous scenes over short periods. Treating the consistency of video segments as part of the captioning problem is highly important. Generating cooking recipe sentences from cooking videos can be regarded as a captioning problem by treating recipes as captions; moreover, because a cooking video consists of a sequence of fragmentary tasks, a model that considers the consistency of the whole video is expected to be effective.
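As a rough sketch of an encoder–decoder that carries context from the previous process into the current segment's caption, the following PyTorch snippet conditions the decoder's initial state on both the current segment's features and a summary of the previous segment. The dimensions, vocabulary, and fusion scheme are illustrative assumptions, not the paper's model.

```python
# Sketch (PyTorch): caption the current video segment while conditioning on a
# summary of the previous segment, so consecutive recipe steps stay consistent.
import torch
import torch.nn as nn

class SegmentCaptioner(nn.Module):
    def __init__(self, feat_dim=512, hidden=256, vocab=1000):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab, hidden)
        self.decoder = nn.GRUCell(hidden, hidden)
        self.fuse = nn.Linear(hidden * 2, hidden)   # current segment + previous context
        self.out = nn.Linear(hidden, vocab)

    def forward(self, segment_feats, prev_context, captions):
        # segment_feats: (B, T, feat_dim) frame features of the current segment
        # prev_context:  (B, hidden) summary of the previous segment's caption
        # captions:      (B, L) token ids used for teacher forcing
        _, h = self.encoder(segment_feats)                       # h: (1, B, hidden)
        state = torch.tanh(self.fuse(torch.cat([h[0], prev_context], dim=1)))
        logits = []
        for t in range(captions.size(1)):
            state = self.decoder(self.embed(captions[:, t]), state)
            logits.append(self.out(state))
        return torch.stack(logits, dim=1), state    # new state becomes the next prev_context

model = SegmentCaptioner()
feats = torch.randn(2, 30, 512)                     # 2 segments, 30 frames each
prev = torch.zeros(2, 256)                          # no previous step for the first segment
caps = torch.randint(0, 1000, (2, 8))
logits, next_context = model(feats, prev, caps)
print(logits.shape)                                 # torch.Size([2, 8, 1000])
```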
{"title":"\"Never fry carrots without chopping\" Generating Cooking Recipes from Cooking Videos Using Deep Learning Considering Previous Process","authors":"T. Fujii, Y. Sei, Yasuyuki Tahara, R. Orihara, Akihiko Ohsuga","doi":"10.2991/IJNDC.K.190710.002","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190710.002","url":null,"abstract":"Automatic captioning tasks that describe the content of images and moving images in natural language have important applications in areas such as search technology. In addition, captioning can assist with understanding content. Understanding of content can be deepened in a short time by reading captions. Among captioning models that use deep training, the encoder–decoder [1] model has generated considerable results and attracted attention, but many existing studies only consider the consistency of contiguous scenes over short periods. Considering the consistency of video segments as a matter of captioning has high importance. Generating cooking recipe sentences from cooking videos can be considered a captioning problem by treating recipes as captions. In addition, because the cooking video is constituted as a set of fragmentary tasks, a model that considers the consistency of the whole video is considered to be effective.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131297911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Utilizing Type-2 Fuzzy Sets as an Alternative Approach for Valuing Military Information
T. Hanratty, R. Hammell
Pub Date: 2019-07-01 | DOI: 10.2991/IJNDC.K.190710.004
Within current military environments, the volume, velocity, and variety of available information have exploded compared with past eras. The increase in data, while potentially advantageous, creates challenges for how to process that data in meaningful ways. As a result, it is increasingly likely that military commanders and their staffs have access to more information than they can use in a timely manner. This situation highlights the classic information-overload problem. In battlefield situations, deciding which information is most relevant is critical to mission success. For the military, the problem is further aggravated by the tight time constraints central to the military decision-making cycle.
{"title":"Utilizing Type-2 Fuzzy Sets as an Alternative Approach for Valuing Military Information","authors":"T. Hanratty, R. Hammell","doi":"10.2991/IJNDC.K.190710.004","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190710.004","url":null,"abstract":"Within current military environments, the volume, velocity and variety of available information has exploded as compared with past eras. The increase in data while potentially advantageous, creates challenges for how to process that data in meaningful ways. As a result, it is increasingly likely that military commanders and their staffs have access to more information than they can use in a timely manner. This situation highlights the classic information overload problem. In battlefield situations, the issue of deciding which information is most relevant is critical to mission success. For the military, this problem is further aggravated by the restricted time constraints central to the military decision making cycle.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129793706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Polar Code Appropriateness for Ultra-Reliable and Low-Latency Use Cases of 5G Systems
Arti Sharma, M. Salim
Pub Date: 2019-07-01 | DOI: 10.2991/IJNDC.K.190702.005
Fifth-generation (5G) and future wireless networks are expected to support numerous emerging use cases and applications with diverse performance requirements. Concerning this, ITU-R agreed upon its vision for IMT-2020 and beyond networks in September 2015 [1] and outlined three main 5G usage scenarios: (i) Enhanced Mobile Broadband (eMBB); (ii) Ultra-Reliable and Low-Latency Communication (URLLC); and (iii) Massive Machine-Type Communication (mMTC). New 5G technologies have therefore been driven by these specific uses to provide ubiquitous connectivity for all these diverse applications. 5G New Radio (5G-NR) is not merely an incremental advance over 4G Radio Access Technologies (4G-RAT); the technology is substantially different. The new RAT for 5G (5G-NRAT) combines various new technologies, including a new heterogeneous cellular architecture, new GHz frequency bands with large available bandwidths, massive MIMO, millimeter-wave communication, mMTC, and the Internet of Things (IoT).
{"title":"Polar Code Appropriateness for Ultra-Reliable and Low-Latency Use Cases of 5G Systems","authors":"Arti Sharma, M. Salim","doi":"10.2991/IJNDC.K.190702.005","DOIUrl":"https://doi.org/10.2991/IJNDC.K.190702.005","url":null,"abstract":"Fifth generation (5G) and the future wireless networks will going to endorse innumerous emerging use cases and applications with various performance aspects. Concerning this, ITU-R in September 2015 agreed upon its vision for IMT-2020 and beyond networks [1], and outlined three main 5G usage scenarios: (i) Enhanced Mobile Broadband (eMBB); (ii) Ultra-Reliable and Low-Latency Communication (URLLC); and (iii) Massive Machine Type Communication (mMTC). Therefore, new technologies for 5G has been driven by these specific uses to provide ubiquitous connectivity to all the diverse applications. 5G New Radio (5G-NR) is not just an addendum advancement over 4G Radio Access Technologies (4G-RAT), instead 5G technology is very different. The NRAT for 5G (5G-NRAT) includes combinational influence of various new technologies like, new heterogeneous cellular architecture, all new GHz frequency bands with huge available bandwidths, massive MIMO, millimeter wave communication, mMTC, Internet-ofThings (IoT) etc.","PeriodicalId":318936,"journal":{"name":"Int. J. Networked Distributed Comput.","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2019-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122443710","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}