A Semantic Framework Supporting Multilayer Networks Analysis for Rare Diseases
N. Capuano, P. Foggia, L. Greco, Pierluigi Ritrovato
Understanding the role played by genetic variations in diseases, exploring genomic variants, and discovering disease-associated loci are among the most pressing challenges of genomic medicine. A huge and ever-increasing amount of information is available to researchers to address these challenges. Unfortunately, it is stored in fragmented ontologies and databases that use heterogeneous formats and poorly integrated schemas. To overcome these limitations, we propose a linked data approach, based on the formalism of multilayer networks, that integrates and harmonizes biomedical information from multiple sources into a single dense network covering different aspects of Neuroendocrine Neoplasms (NENs). The proposed integration schema consists of three interconnected layers representing, respectively, information on the disease, on the affected genes, and on the related biological processes and molecular functions. An easy-to-use client-server application was also developed to browse and search the model and to support multilayer network analysis.
{"title":"A Semantic Framework Supporting Multilayer Networks Analysis for Rare Diseases","authors":"N. Capuano, P. Foggia, L. Greco, Pierluigi Ritrovato","doi":"10.4018/ijswis.297141","DOIUrl":"https://doi.org/10.4018/ijswis.297141","url":null,"abstract":"Understanding the role played by genetic variations in diseases, exploring genomic variants and discovering disease-associated loci are among the most pressing challenges of genomic medicine. A huge and ever-increasing amount of information is available to researchers to address these challenges. Unfortunately, it is stored in fragmented ontologies and databases, which use heterogeneous formats and poorly integrated schemas. To overcome these limitations, we propose a linked data approach, based on the formalism of multilayer networks, able to integrate and harmonize biomedical information from multiple sources into a single dense network covering different aspects on Neuroendocrine Neoplasms (NENs). The proposed integration schema consists of three interconnected layers representing, respectively, information on the disease, on the affected genes, on the related biological processes and molecular functions. An easy-to-use client-server application was also developed to browse and search for information on the model supporting multilayer network analysis.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"30 1","pages":"1-22"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78638691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Context-Independent Ontological Linked Data Alignment Approach to Instance Matching
Armando Barbosa, I. Bittencourt, S. Siqueira, Diego Dermeval, Nicholas J. T. Cruz
Linking data by finding matching instances in different datasets requires considering many characteristics, such as structural heterogeneity, implicit knowledge, and URI (Uniform Resource Identifier)-oriented identification. The authors propose a context-independent approach to aligning linked data through an alignment process based on the components of the ontological model and the multidimensionality of the data. The researchers compared the proposed approach against two existing methods for aligning linked data on two datasets, evaluating precision, recall, and F-measure. The authors also conducted a case study in a real scenario using a Brazilian publication dataset on computers and education. The results indicate that the proposed approach outperforms the other methods on precision, recall, and F-measure while requiring less work when the dataset domain changes. The main contributions of this work are enabling real datasets to be linked semi-automatically and presenting an approach capable of calculating resource similarity.
{"title":"A Context-Independent Ontological Linked Data Alignment Approach to Instance Matching","authors":"Armando Barbosa, I. Bittencourt, S. Siqueira, Diego Dermeval, Nicholas J. T. Cruz","doi":"10.4018/ijswis.295977","DOIUrl":"https://doi.org/10.4018/ijswis.295977","url":null,"abstract":"Linking data by finding matching instances in different datasets requires considering many characteristics, such as structural heterogeneity, implicit knowledge, and URI (Uniform Resource Identifier)-oriented identification. The authors propose a context-independent approach to align Linked data through an alignment process based on the ontological model’s components and considering data’s multidimensionality. The researchers experimented with the proposed approach against two methods for aligning linked data in two datasets and evaluated precision, recall, and f-measure metrics. The authors also conducted a case study in a real scenario considering a Brazilian publication dataset on computers and education. This study’s results indicate that the proposed approach overcomes the other methods (regarding the precision, recall, and f-measure metrics), requiring less work when changing the dataset domain. This work’s main contributions include enabling real datasets to be semi-automatically linked, presenting an approach capable of calculating resource similarity.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"16 1","pages":"1-29"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87770707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Modified Transformer Architecture to Explain Black Box Models in Narrative Form
Diksha Malhotra, P. Saini, Awadhesh Kumar Singh
Current XAI techniques present explanations mainly as visuals and structured data, which are difficult for non-expert users to interpret. Natural Language Generation (NLG) techniques can help present explanations in a human-understandable format. This paper addresses the automatic generation of narratives using a modified transformer approach. Because no relevant annotated dataset is available for development and testing, we also propose a verbalization template approach to generate one. The input of the transformer is linearized to convert the data-to-text task into a text-to-text task. The proposed work is evaluated on a verbalized, explained PIMA Indians diabetes dataset and shows significant improvement over existing baselines in both manual and automatic evaluation. The narratives are also more comprehensible and more trusted by human evaluators than their non-NLG counterparts. Lastly, an ablation study is performed to understand the contribution of each component.
{"title":"Modified Transformer Architecture to Explain Black Box Models in Narrative Form","authors":"Diksha Malhotra, P. Saini, Awadhesh Kumar Singh","doi":"10.4018/ijswis.297040","DOIUrl":"https://doi.org/10.4018/ijswis.297040","url":null,"abstract":"The current XAI techniques present explanations mainly as visuals and structured data. However, these explanations are difficult to be interpreted by a non-expert user. Here, the use of Natural Language Generation (NLG) based techniques can help to represent explanations in human-understandable format. The paper addresses the issue of automatic generation of narratives using a modified transformer approach. Further, due to unavailability of a relevant annotated dataset for development and testing, we also propose a verbalization template approach to generate the same. The input of the transformer is linearized to convert the data-to-text task into text-to-text task. The proposed work is evaluated on a verbalized explained PIMA Indians diabetes dataset and exhibits significant improvement as compared to existing baselines for both, manual and automatic evaluation. Also, the narratives provide better comprehensibility to be trusted by human evaluators than the non-NLG counterparts. Lastly, an ablation study is performed in order to understand the contribution of each component.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"2 1","pages":"1-19"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89938541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Agent-Based Social Simulation for Citizenship Competences and Conflict Resolution Styles
Cecilia Ávila-Garzón, M. Balaguera, Valentina Tabares-Morales
The development of citizenship competences plays an important role in a complex system like society. Analyzing how such competences affect other contexts is challenging, however, because this kind of study involves working with people and with variables that depend on human behavior. Many studies have therefore highlighted the advantages of using simulation systems and tools; in particular, the agent-based social simulation field relies on the Semantic Web to manage knowledge representation in social scenarios. This study focuses on how citizenship competences affect conflict resolution and introduces a simulation model in which citizens interact to resolve conflicts according to their citizenship competences and conflict resolution styles. The model was developed in NetLogo together with an extension that connects it to an ontology of competences. Results show that the more citizens interact over conflicts, the higher their level of citizenship competences becomes, and that more conflicts are resolved when citizenship competences are used.
{"title":"An Agent-Based Social Simulation for Citizenship Competences and Conflict Resolution Styles","authors":"Cecilia Ávila-Garzón, M. Balaguera, Valentina Tabares-Morales","doi":"10.4018/ijswis.306749","DOIUrl":"https://doi.org/10.4018/ijswis.306749","url":null,"abstract":"The development of citizenship competences plays an important role in a complex system like society. Thus, to analyze how such competences impact other contexts is a great challenge because this kind of study involves the work with people and the use of variables that depend on human behaviors. In this sense, many studies have highlighted the advantage of using simulation systems and tools. In particular, the agent-based social simulation field relies upon the Semantic Web to manage knowledge representation in social scenarios. This study focuses on how citizenship competences impact conflict resolution. Moreover, a simulation model in which citizens interact to resolve conflicts by considering citizenship competences and conflict resolution styles is also introduced. It was developed in NetLogo together with an extension that connects it with the ontology of competences. Results show that the higher interactions of citizens-conflicts, the higher level of citizenship competences, and the number of conflicts solved is higher when using citizenship competences.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"81 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85790000","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Flesch-Kincaid Measure as Proxy of Socio-Economic Status on Twitter: Comparing US Senator Writing to Internet Users
Samara M. Ahmed, Adil E. Rajput, A. Sarirete, Tauseef J. Chowdhry
Social media gives researchers an invaluable opportunity to gain insight into different facets of human life. Researchers place great emphasis on categorizing the socioeconomic status (SES) of individuals to help predict various findings of interest. Forums, hashtags, and chatrooms are common tools for grouping conversations, and crowdsourcing involves gathering intelligence to group online user communities by common interest. This paper provides a mechanism for analyzing writings on social media and grouping their authors by academic background. We analyzed online forum posts from various geographical regions in the US and characterized the readability scores of users. Specifically, we collected 10,000 tweets from members of the US Senate and computed their Flesch-Kincaid readability scores. Comparing the Senators' tweets to those of average internet users, we note that 1) the readability of US Senators' tweets is much higher, and 2) the large gap between average citizens' scores and those of US Senators can be attributed to the wide spectrum of academic attainment.
{"title":"Flesch-Kincaid Measure as Proxy of Socio-Economic Status on Twitter: Comparing US Senator Writing to Internet Users","authors":"Samara M. Ahmed, Adil E. Rajput, A. Sarirete, Tauseef J. Chowdhry","doi":"10.4018/ijswis.297037","DOIUrl":"https://doi.org/10.4018/ijswis.297037","url":null,"abstract":"Social media gives researchers an invaluable opportunity to gain insight into different facets of human life. Researchers put a great emphasis on categorizing the socioeconomic status (SES) of individuals to help predict various findings of interest. Forum uses, hashtags and chatrooms are common tools of conversations grouping. Crowdsourcing involves gathering intelligence to group online user community based on common interest. This paper provides a mechanism to look at writings on social media and group them based on their academic background. We analyzed online forum posts from various geographical regions in the US and characterized the readability scores of users. Specifically, we collected 10,000 tweets from the members of US Senate and computed the Flesch-Kincaid readability score. Comparing the Senators’ tweets to the ones from average internet users, we note 1) US Senators’ readability based on their tweets rate is much higher, and 2) immense difference among average citizen’s score compared to those of US Senators is attributed to the wide spectrum of academic attainment.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"28 1","pages":"1-19"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82621972","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Distributed Denial-of-Service (DDoS) Attacks and Defense Mechanisms in Various Web-Enabled Computing Platforms: Issues, Challenges, and Future Research Directions
Anshuman Singh, Brij B. Gupta
The demand for Internet security has escalated in the last two decades because the rapid proliferation in the number of Internet users has presented attackers with new detrimental opportunities. One of the simplest yet most powerful attacks lurking on the Internet today is the Distributed Denial-of-Service (DDoS) attack. The expeditious surge in collaborative environments such as IoT, cloud computing, and SDN has provided attackers with countless new avenues to benefit from the distributed nature of DDoS attacks. Attackers protect their anonymity by infecting distributed devices and using them as a bot army to mount large-scale attacks. The development of effective and efficient DDoS defense mechanisms is therefore an immediate goal. In this exposition, we present a DDoS threat analysis along with novel defense mechanisms proposed by various researchers for numerous domains. We then discuss popular performance metrics used to evaluate defense schemes and conclude with a list of prevalent DDoS attack tools and open challenges.
{"title":"Distributed Denial-of-Service (DDoS) Attacks and Defense Mechanisms in Various Web-Enabled Computing Platforms: Issues, Challenges, and Future Research Directions","authors":"Anshuman Singh, Brij B. Gupta","doi":"10.4018/ijswis.297143","DOIUrl":"https://doi.org/10.4018/ijswis.297143","url":null,"abstract":"The demand for Internet security has escalated in the last two decades because the rapid proliferation in the number of Internet users has presented attackers with new detrimental opportunities. One of the simple yet powerful attack, lurking around the Internet today, is the Distributed Denial-of-Service (DDoS) attack. The expeditious surge in the collaborative environments, like IoT, cloud computing and SDN, have provided attackers with countless new avenues to benefit from the distributed nature of DDoS attacks. The attackers protect their anonymity by infecting distributed devices and utilizing them to create a bot army to constitute a large-scale attack. Thus, the development of an effective as well as efficient DDoS defense mechanism becomes an immediate goal. In this exposition, we present a DDoS threat analysis along with a few novel ground-breaking defense mechanisms proposed by various researchers for numerous domains. Further, we talk about popular performance metrics that evaluate the defense schemes. In the end, we list prevalent DDoS attack tools and open challenges.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"65 1","pages":"1-43"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82788365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Disease Causality Knowledge From the Web of Health Data
H. Q. Yu, S. Reiff-Marganiec
Health information has become especially valuable for protecting public health in the current coronavirus situation. Knowledge-based information systems can play a crucial role in helping individuals perform risk assessment and remote diagnosis. We introduce a novel approach that develops causality-focused knowledge learning in a robust and transparent manner, so that the machine acquires causality and probability knowledge for later inference and accurate prediction. In addition, hidden knowledge can be discovered beyond the existing understanding of the diseases. The whole approach is built on a Causal Probability Description Logic Framework that combines Natural Language Processing (NLP), causality analysis, and extended Knowledge Graph (KG) technologies. The experimental work processed 801 diseases in total (from the UK NHS website, linked with DBpedia datasets). As a result, the machine efficiently learnt comprehensive health causal knowledge and relations among diseases, symptoms, and other facts.
{"title":"Learning Disease Causality Knowledge From the Web of Health Data","authors":"H. Q. Yu, S. Reiff-Marganiec","doi":"10.4018/ijswis.297145","DOIUrl":"https://doi.org/10.4018/ijswis.297145","url":null,"abstract":"Health information becomes importantly valuable for protecting public health in the current coronavirus situation. Knowledge-based information systems can play a crucial role in helping individuals to practice risk assessment and remote diagnosis. We introduce a novel approach that will develop causality-focused knowledge learning in a robust and transparent manner. Then, the machine gains the causality and probability knowledge for inference (thinking) and accurate prediction later. Besides, the hidden knowledge can be discovered beyond the existing understanding of the diseases. The whole approach is built on a Causal Probability Description Logic Framework that combines Natural Language Processing (NLP), Causality Analysis and extended Knowledge Graph (KG) technologies together. The experimental work has processed 801 diseases in total (from the UK NHS website linking with DBpedia datasets). As a result, the machine learnt comprehensive health causal knowledge and relations among the diseases, symptoms, and other facts efficiently.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"18 1","pages":"1-19"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84977468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A New Alignment Word-Space Approach for Measuring Semantic Similarity for Arabic Text
Shimaa Ismail, Tarek El-Shishtawy, Abdelwahab K. Alsammak
This work presents a new alignment word-space approach for measuring the similarity between two text snippets. The approach combines two similarity measurement methods: alignment-based and vector space-based. The vector space-based method depends on a semantic net that represents the meanings of words as vectors; these vectors are lemmatized to enrich the search space. The alignment-based method generates an alignment word space matrix (AWSM) for the input texts according to the generated semantic word spaces. Finally, the degree of sentence semantic similarity is measured using a set of proposed alignment rules. Four experiments were carried out to evaluate the performance of the proposed approach on two different datasets. The experimental results show that applying lemmatization to both the input text and the vector model improves performance. The accuracy of the results reaches 0.7212, which ranks among the two best published results for Arabic semantic similarity.
{"title":"A New Alignment Word-Space Approach for Measuring Semantic Similarity for Arabic Text","authors":"Shimaa Ismail, Tarek El-Shishtawy, Abdelwahab K. Alsammak","doi":"10.4018/ijswis.297036","DOIUrl":"https://doi.org/10.4018/ijswis.297036","url":null,"abstract":"This work presents a new alignment word-space approach for measuring the similarity between two snipped texts. The approach combines two similarity measurement methods: alignment-based and vector space-based. The vector space-based method depends on a semantic net that represents the meaning of words as vectors. These vectors are lemmatized to enrich the search space. The alignment-based method generates an alignment word space matrix (AWSM) for the snipped texts according to the generated semantic word spaces. Finally, the degree of sentence semantic similarity is measured using some proposed alignment rules. Four experiments were carried out to evaluate the performance of the proposed approach, using two different datasets. The experimental results proved that applying the lemmatization process for the input text and the vector model has a better effect. The degree of correctness of the results reaches 0.7212 which is considered one of the best two results of the published Arabic semantic similarities.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"3 1","pages":"1-18"},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88087232","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Study on Human Transiting Based on Big Data and Web Semantics
Qiang Zhou
As globalization progresses, transnational human trafficking is spreading worldwide. It damages national economies and social order, infringes on the basic human rights of victims, and has aroused general concern all over the world, becoming a global issue. Globalization itself is one of the important factors in human trafficking. A destination-source model serves as a deterrent and is applied to identify the smuggling and trafficking of illegal immigrants. The related results show that employer penalties and market wages influence the number of smuggled and trafficked immigrants, while taxes paid by legal unskilled workers in destination countries provide financial support for the inland monitoring of illegal immigrants. An improved SVM (support vector machine) is proposed to classify online textual advertisement data, with the purpose of discerning underlying human trafficking patterns on the network and recognizing suspicious advertisements, a concern of law-enforcement agencies.
{"title":"A Study on Human Transiting Based on Big Data and Web Semantics","authors":"Qiang Zhou","doi":"10.4018/ijswis.310055","DOIUrl":"https://doi.org/10.4018/ijswis.310055","url":null,"abstract":"In the progress of globalization, the transnational human traffic is spreading globally. It damages national economy and social order as well as infringes on the basic human rights of the victims, which has aroused general concern all over the world, becoming global issues. One of the important features in human being traffic is the factor of globalization. A destination-source model works as a deterrent which is applied in the identification of smuggling and trafficking of illegal immigrants. The related results show that the employer penalty and market wage will influence the amount of smuggling and trafficking immigrants. Tax offered by legal unskilled workers at destination countries provides financial support for the inland monitoring of illegal immigrants. The improved SVM (supported vector machine) is proposed to study online textual data used for advertisement classification, with the purpose of discerning underlying human trafficking patterns on the network and recognizing suspicious advertisements, a concern of law-enforcement agencies.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"2017 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73931250","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Spintronics Based Non-Volatile MRAM for Intelligent Systems
Parul Sharma, B. Raj, S. S. Gill
This paper presents the spintronic-based memory MRAM and shows how it can replace both SRAM and DRAM, providing high speed with a favorable chip size. Moreover, MRAM is a nonvolatile memory that offers a great advance in data storage. The different types of MRAM are described along with the techniques used for writing, noting which are most widely used and why. The basic working principle and the functions performed by MRAM are discussed. Artificial intelligence (AI) is discussed with its pros and cons for intelligent systems, and neuromorphic computing is explained along with its important role in such systems and the reasons it matters. The paper also presents how spintronic-based devices, especially memories, can be used in intelligent systems and neuromorphic computing. Nanoscale spintronic-based MRAM plays a key role in intelligent systems and neuromorphic computing applications.
{"title":"Spintronics Based Non-Volatile MRAM for Intelligent Systems","authors":"Parul Sharma, B. Raj, S. S. Gill","doi":"10.4018/ijswis.310056","DOIUrl":"https://doi.org/10.4018/ijswis.310056","url":null,"abstract":"In this paper the spintronic-based memory MRAM is presented that showed how it can replace both SRAM and DRAM and provide the high speed with great chip size. Moreover, MRAM is the nonvolatile memory that provides great advancement in the storage process. The different types of MRAM are mentioned with the techniques used for writing purpose and also mention which one is more used and why. The basic working principle and the function performed by the MRAM are discussed. Artificial intelligence (AI) is mentioned with its pros and cons for intelligent systems. Neuromorphic computing is also explained along with its important role in intelligent systems. Some reasons are also discussed as to why neuromorphic computing is so important. This paper also presents how spintronic-based devices especially memory can be used in intelligent systems and neuromorphic computing. Nanoscale spintronic-based MRAM plays a key role in intelligent systems and neuromorphic computing applications.","PeriodicalId":54934,"journal":{"name":"International Journal on Semantic Web and Information Systems","volume":"12 1","pages":""},"PeriodicalIF":3.2,"publicationDate":"2022-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82813706","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}