Pub Date: 2023-08-10 | DOI: 10.1177/01655515231191177
D. Marikyan, S. Papagiannidis, G. Stewart
Rapid digitalisation has produced an ever-growing literature on technology acceptance, naturally creating debates about developments in the field and their implications. Given the size of this literature and the range of factors, theories and applications considered, this article reviewed the relevant work using a meta-analytical approach. The objective of the review was twofold: (a) to provide a comprehensive analysis of the factors contributing to technology acceptance and investigate their effects, depending on theoretical underpinnings, and (b) to explore the conditions explaining the variance in the effects of predictors over time, across applications and across journals. The review analysed data from 693 papers. A total of 21 independent predictors with differential effects on attitude, intention and use behaviour were identified. The effects of the predictors differed depending on the theoretical frameworks to which they were related. An analysis of the consistency of the predictors' roles suggested no longitudinal change in their effect sizes. However, significant variance was found when comparing predictors across research applications and across the journals in which the papers were published. An analysis of publication bias demonstrated a tendency to publish studies with significant results, although no evidence of p-value manipulation was found.
Title: Technology acceptance research: Meta-analysis (Journal of Information Science).
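As a reading aid, the pooling step of such a meta-analysis can be sketched in a few lines. The snippet below pools study-level correlations via the Fisher z-transform with DerSimonian–Laird random-effects weights; the estimator choice and the (r, n) inputs are illustrative assumptions, not the authors' actual data or method.

```python
import math

def fisher_z(r):
    """Fisher z-transform of a correlation coefficient."""
    return 0.5 * math.log((1 + r) / (1 - r))

def pool_correlations(studies):
    """DerSimonian-Laird random-effects pooling of correlations.
    `studies` is a list of (r, n) pairs: correlation and sample size."""
    zs = [fisher_z(r) for r, n in studies]
    vs = [1.0 / (n - 3) for r, n in studies]   # sampling variance of z
    w = [1.0 / v for v in vs]                  # fixed-effect weights
    z_fixed = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    # between-study heterogeneity: Q statistic and tau^2
    q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, zs))
    df = len(studies) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # random-effects weights and pooled estimate
    w_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    return math.tanh(z_re)                     # back-transform to r

# three hypothetical studies of one predictor-outcome pair
pooled = pool_correlations([(0.45, 120), (0.52, 210), (0.30, 95)])
```

The pooled correlation necessarily lies between the smallest and largest study-level estimates, with the random-effects weights shrinking the influence of large samples when heterogeneity is present.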
Pub Date: 2023-08-09 | DOI: 10.1177/01655515231189660
Zhichao Ba, Yao Tang, Xuetai Liu, Yikun Xia
Citation-based main path analysis (MPA) has been widely applied to identify developmental trajectories of science and technology, but it has rarely been used to detect paths of policy diffusion. Compared with scientific publications and patents, policy documents show distinct characteristics, such as citation relationships with different legal validity, which can be exploited to improve policy citation analysis. To this end, this study constructs a policy citation network from the citing/cited links embedded in the textual content of policy documents and proposes a preference-adjusted main path analysis (PMPA) approach to track historical routes of policy diffusion. PMPA incorporates two kinds of policy citation preference: validity bias and time bias. An empirical analysis of China's new energy policies (NEPs) demonstrates the efficacy of the proposed approach. The results reveal that the preference-adjusted main path approach captures more important policies and more informative main paths of policy diffusion than the original MPA. Moreover, the research yields in-depth insight into the evolutionary process of policy diffusion and offers guidance for policy-makers and industry decision-makers in formulating practical policies.
Title: Tracing policy diffusion: Identifying main paths in policy citation networks (Journal of Information Science).
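The preference-adjusted traversal can be illustrated with a toy sketch. Below, search path counts (SPC, a standard MPA traversal weight) are computed on a small citation DAG and scaled by a hypothetical validity bonus before a greedy main path is extracted; the edge labels, bonus factor and greedy extraction are illustrative assumptions, not the authors' exact PMPA formulation.

```python
from collections import defaultdict

def spc_main_path(edges, preferred=frozenset(), bonus=2.0):
    """Search Path Count (SPC) edge weights on a citation DAG, scaled by
    an illustrative preference bonus for edges citing legally binding
    documents, followed by a greedy main path from the (single) source."""
    succ, pred = defaultdict(list), defaultdict(list)
    nodes = set()
    for u, v in edges:
        succ[u].append(v)
        pred[v].append(u)
        nodes.update((u, v))

    def path_counts(adj):
        # number of paths from each node to a terminal node along `adj`
        memo = {}
        def rec(n):
            if n not in memo:
                memo[n] = sum(rec(m) for m in adj[n]) or 1
            return memo[n]
        for n in nodes:
            rec(n)
        return memo

    n_minus = path_counts(pred)  # paths from the sources down to each node
    n_plus = path_counts(succ)   # paths from each node on to the sinks
    weight = {(u, v): n_minus[u] * n_plus[v]
              * (bonus if (u, v) in preferred else 1.0)
              for u, v in edges}

    # greedy main path: start at the source, always follow the heaviest edge
    node = next(n for n in nodes if not pred[n])
    path = [node]
    while succ[node]:
        node = max(succ[node], key=lambda v: weight[(node, v)])
        path.append(node)
    return path, weight

# hypothetical diffusion network of five policies
edges = [("P2010", "P2013"), ("P2010", "P2014"),
         ("P2013", "P2016"), ("P2014", "P2016"),
         ("P2016", "P2020")]
path, weight = spc_main_path(
    edges, preferred={("P2010", "P2014"), ("P2014", "P2016")})
```

With the bonus applied, the greedy walk is pulled through the preferred (higher-validity) branch rather than the otherwise symmetric alternative.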
Pub Date: 2023-08-02 | DOI: 10.1177/01655515231188344
Behnam Karami, F. Bakouie, S. Gharibzadeh
Moral expressions in online communications can have a serious impact on framing discussions and subsequent online behaviours. Despite research on extracting moral sentiment from English text, low-resource languages such as Persian lack sufficient resources and research on this topic. We address this issue using Moral Foundations Theory (MFT) as the theoretical moral psychology paradigm. We developed a Twitter data set of 8000 tweets manually annotated for moral foundations and established a baseline for computing moral sentiment from Persian text. We evaluate a range of state-of-the-art machine learning models, both rule-based and neural, including distributed dictionary representation (DDR), long short-term memory (LSTM) and bidirectional encoder representations from transformers (BERT). Our findings show that, among the models compared, fine-tuning a pre-trained Persian BERT language model with a linear classification head yields the best results. Furthermore, we analysed this model to find out which layer contributes most to this superior accuracy. We also propose an alternative transformer-based model that yields results competitive with the BERT model despite its smaller size and faster inference time. The proposed model can be used as a tool for analysing moral sentiment and framing in Persian texts for downstream social and psychological studies. We also hope our work provides resources for further enhancing methods for computing moral sentiment in Persian text.
Title: A transformer-based deep learning model for Persian moral sentiment analysis (Journal of Information Science).
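One of the baselines mentioned above, distributed dictionary representation (DDR), can be sketched minimally: a text's moral loading is the cosine similarity between the centroid of its word vectors and the centroid of a moral-foundation seed dictionary. The 3-dimensional vectors and English seed words below are toy stand-ins for real Persian embeddings, not the authors' data.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def centroid(vectors):
    """Component-wise mean of a list of vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def ddr_score(doc_words, seed_words, emb):
    """DDR-style moral loading: cosine between the document centroid and
    the centroid of a moral-foundation seed dictionary."""
    return cosine(centroid([emb[w] for w in doc_words]),
                  centroid([emb[w] for w in seed_words]))

# toy embeddings standing in for Persian word vectors (illustrative only)
emb = {
    "care": [0.9, 0.1, 0.0], "harm": [0.8, 0.2, 0.1],
    "help": [0.85, 0.15, 0.05], "table": [0.0, 0.1, 0.9],
}
loading = ddr_score(["help"], ["care", "harm"], emb)
```

A care-related word scores far higher against the care/harm seed centroid than an unrelated word would, which is the signal DDR exploits without any supervised training.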
Pub Date: 2023-08-01 | DOI: 10.1177/01655515211040653
Reijo Savolainen
This study examines how the credibility of the content of mis- or disinformation, as well as the believability of the authors creating such information, is assessed in online discussion. More specifically, the investigation focused on the credibility of mis- or disinformation about COVID-19 vaccines. To this end, a sample of 1887 messages posted to a Reddit discussion group was scrutinised by means of qualitative content analysis. The findings indicate that in assessing an author's credibility, the most important criteria are his or her reputation, expertise and honesty in argumentation. In judging the credibility of the content of mis/disinformation, the objectivity of information and the plausibility of arguments are highly important. The findings highlight that in assessing the credibility of mis/disinformation, an author's qualities such as poor reputation, incompetence and dishonesty are particularly significant because they trigger expectancies about how the information content created by the author is judged.
Title: Assessing the credibility of COVID-19 vaccine mis/disinformation in online discussion (Journal of Information Science, 49(4), 1096–1110; open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10345821/pdf/).
Pub Date: 2023-07-22 | DOI: 10.1177/01655515231184831
Pankaj Singh, Plaban Kumar Bhowmick
In this article, a pseudo-relevance feedback (PRF)–based framework is presented for effective query expansion (QE). As candidate expansion terms, the proposed PRF framework considers terms that are morphological variants of the original query terms and semantically close to them. This strategy of selecting expansion terms is expected to preserve the query intent after expansion. When judging the suitability of an expansion term with respect to a base query, two aspects of the term's relation to the query are considered: the first probes to what extent the candidate term is semantically linked to the original query, and the second checks the extent to which the candidate term can supplement the base query terms. The semantic relationship between a query and expansion terms is modelled using bidirectional encoder representations from transformers (BERT). The degree of similarity is used to estimate the relative importance of the expansion terms with respect to the query, and the quantified relative importance determines the weights of the expansion terms in the final query. Finally, the expansion terms are grouped into semantic clusters to strengthen the original query intent. A set of experiments was performed on three Text REtrieval Conference (TREC) collections to validate the effectiveness of the proposed QE algorithm.
Title: Semantics-aware query expansion using pseudo-relevance feedback (Journal of Information Science).
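The similarity-proportional weighting described above can be sketched as follows, with placeholder vectors standing in for BERT embeddings; the normalisation scheme and the example terms are illustrative assumptions, not necessarily the authors' exact weighting function.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(x * x for x in b)))

def weight_expansion_terms(query_vec, candidates):
    """Assign each candidate expansion term a weight proportional to its
    cosine similarity to the query embedding, normalised to sum to 1.
    `candidates` maps term -> embedding (stand-ins for BERT vectors)."""
    sims = {t: max(cosine(query_vec, v), 0.0) for t, v in candidates.items()}
    total = sum(sims.values()) or 1.0
    return {t: s / total for t, s in sims.items()}

# hypothetical query embedding and two candidate expansion terms
query_vec = [0.7, 0.3, 0.1]
candidates = {"retrieving": [0.68, 0.32, 0.12],  # variant close to the query
              "banking":    [0.05, 0.10, 0.95]}  # unrelated sense, far away
weights = weight_expansion_terms(query_vec, candidates)
```

The morphological variant that stays close to the query intent receives most of the expansion weight, while the drifting term is suppressed, which is the behaviour the framework relies on to preserve intent.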
Pub Date: 2023-07-20 | DOI: 10.1177/01655515231184826
Amarnath Pathak, Partha Pakray
The article presents an approach to recognising formula entailment, that is, finding entailment relationships between pairs of math formulae. As current formula-similarity-detection approaches fail to account for broader relationships between pairs of math formulae, recognising formula entailment becomes paramount. To this end, a long short-term memory (LSTM) neural network using symbol-by-symbol attention is implemented. However, owing to the unavailability of relevant training and validation corpora, the first step is to create a sufficiently large symbol-level MATHENTAIL data set in an automated fashion. Depending on the extent of similarity between the corresponding symbol embeddings, symbol pairs in the MATHENTAIL data set are assigned 'entailment' or 'neutral' labels. An improved symbol-to-vector (isymbol2vec) method generates mathematical symbols (in LaTeX) and their embeddings using a Wikipedia corpus of scientific documents and the Continuous Bag of Words (CBOW) architecture. Eventually, the LSTM network, trained and validated on the MATHENTAIL data set, predicts formula entailment for test formula pairs with a reasonable accuracy of 62.2%.
Title: Recognising formula entailment using long short-term memory network (Journal of Information Science).
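The automated labelling step can be sketched as follows, with toy two-dimensional vectors standing in for isymbol2vec embeddings and an assumed similarity threshold of 0.7 (the paper's actual threshold and vectors are not reproduced here).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a))
                  * math.sqrt(sum(x * x for x in b)))

def label_pairs(pairs, emb, threshold=0.7):
    """Assign 'entailment'/'neutral' labels to symbol pairs from the
    cosine similarity of their embeddings (illustrative threshold)."""
    return {(a, b): ("entailment" if cosine(emb[a], emb[b]) >= threshold
                     else "neutral")
            for a, b in pairs}

# toy embeddings standing in for CBOW vectors of LaTeX symbols
emb = {r"\alpha": [0.9, 0.4], r"\beta": [0.85, 0.45], r"\sum": [-0.2, 0.9]}
labels = label_pairs([(r"\alpha", r"\beta"), (r"\alpha", r"\sum")], emb)
```

Symbol pairs whose embeddings point in nearly the same direction are labelled 'entailment'; dissimilar pairs fall through to 'neutral', yielding training labels without manual annotation.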
Pub Date: 2023-07-20 | DOI: 10.1177/01655515231188338
M. Asim, Muhammad Arif
This study aims to synthesise the findings of research on Internet of Things (IoT) adoption and use in libraries. The systematic literature review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method and covers publications indexed in five world-renowned databases. Libraries adopted IoT to save time, enhance performance and efficiency, improve the quality of services and ease collection accessibility. The study identified various IoT-based practices, including auto-notification of circulation tasks, inventory management, tracing users' data from virtual/physical cards, user tracking and self-guided virtual library tours. In adopting and using IoT, libraries faced several challenges, such as security and privacy concerns, cost, a lack of standards and policy, the need for a highly integrated environment and a lack of management interest. The critical adoption and usage factors, as well as the challenges identified, provide valuable insights for library professionals designing state-of-the-art smart-technology-driven services.
Title: Internet of things adoption and use in academic libraries: A review and directions for future research (Journal of Information Science).
Pub Date: 2023-07-14 | DOI: 10.1177/01655515231171362
Billie Anderson, M. Bani-Yaghoub, Vagmi Kantheti, Scott Curtis
Over the past two decades, databases and simple tools to access them have become increasingly available, allowing historical and modern-day topics to be merged and studied. Throughout the recent COVID-19 pandemic, for example, many researchers have reflected on whether lessons learned from the Spanish flu pandemic of 1918 could have been helpful in the present pandemic. Yet studies using text-mining applications rarely draw on full-text journal articles. This article provides a methodology for developing a full-text journal article corpus using the R fulltext package. Using the proposed methodology, 2743 full-text journal articles were obtained. The aim of this article is to provide the methodology and supplementary code for researchers to use the R fulltext package to curate a full-text journal corpus.
Title: Using R to develop a corpus of full-text journal articles (Journal of Information Science).
Pub Date: 2023-07-10 | DOI: 10.1177/01655515231184833
G. Schweiger, Lynn Thiermeyer
Purely quantitative citation measures are widely used to evaluate research grants, compare the output of researchers or benchmark universities. The intuition that not all citations are the same, however, can be illustrated by two examples. First, studies have shown that erroneous or controversial papers have higher citation counts. Second, does a high-level citation in an introduction have the same impact as a reference to a paper that serves as a conceptual starting point? Complementing purely quantitative measures are so-called citation context analyses, which aim to obtain a better understanding of the link between citing and cited work. In this article, we propose a classification scheme for citation context analysis in the field of modelling in engineering. The categories were defined based on an extensive literature review and input from experts in the field of modelling. We propose a detailed scheme with six categories (Perfunctory, Background Information, Comparing/Confirming, Critique/Refutation, Inspiring, Using/Expanding) and a simplified scheme with three categories (High-level, Critical Analysis, Extending) that can be used within automatic classification approaches. The results of manually classifying 129 randomly selected citations show that 87% of citations fall into the high-level category.
Title: Modelling in engineering: A citation context analysis (Journal of Information Science).
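The collapse from the detailed to the simplified scheme can be sketched as a mapping plus a tally. Note that the exact correspondence between the six detailed and three simplified categories is a hypothetical assumption here, made only for illustration; the annotation counts are likewise invented.

```python
from collections import Counter

# hypothetical correspondence between the detailed and simplified schemes
TO_SIMPLIFIED = {
    "Perfunctory": "High-level",
    "Background Information": "High-level",
    "Comparing/Confirming": "Critical Analysis",
    "Critique/Refutation": "Critical Analysis",
    "Inspiring": "Extending",
    "Using/Expanding": "Extending",
}

def category_shares(annotations):
    """Collapse detailed labels into the simplified scheme and return the
    share of citations falling into each simplified category."""
    simple = Counter(TO_SIMPLIFIED[a] for a in annotations)
    total = sum(simple.values())
    return {c: n / total for c, n in simple.items()}

# ten invented annotations, skewed towards perfunctory citations
sample = (["Perfunctory"] * 7 + ["Background Information"] * 2
          + ["Using/Expanding"])
shares = category_shares(sample)
```

Even this toy tally reproduces the qualitative pattern reported above: high-level citations dominate, and critical or extending citations are rare.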
Pub Date: 2023-07-08 | DOI: 10.1177/01655515231182068
Ming Yi, Ming Liu, Cuicui Feng, Weihua Deng
Cross-domain recommendation models enrich the knowledge in a target domain by taking advantage of data in an auxiliary domain, mitigating sparsity and cold-start user problems. However, most existing cross-domain recommendation models depend on item rating information alone, ignoring the high-order information contained in the graph data structure. In this study, we develop a novel cross-domain recommendation model that unifies the modelling of high-order information and rating information to address these gaps. Unlike previous work, we apply a heterogeneous graph neural network to extract high-order information among users, items and features and obtain high-order information embeddings of users and items; we then use a neural network with a non-linear multilayer perceptron (MLP) mapping to extract rating information and obtain user rating information embeddings. The high-order information embeddings and rating information embeddings are fused to complete the final rating prediction, and the model parameters are learned by gradient descent on the loss function. Experiments conducted on two real-world data sets comprising 3,032,642 ratings across two experimental scenarios demonstrate that the model effectively alleviates the sparsity and cold-start user problems simultaneously, and significantly outperforms the baseline models on a variety of recommendation accuracy metrics.
Title: A cross-domain recommendation model by unified modelling high-order information and rating information (Journal of Information Science).
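The fusion-and-prediction step can be sketched with a toy forward pass: high-order and rating embeddings are concatenated and fed through a small MLP to produce a rating. All vectors and weights below are illustrative placeholders, not the trained parameters or exact architecture of the authors' model.

```python
def mlp(x, w1, b1, w2, b2):
    """One-hidden-layer perceptron: ReLU hidden layer, linear output."""
    h = [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, h)) + b2

def predict_rating(user_high, item_high, user_rating_emb, item_rating_emb,
                   params):
    """Fuse high-order and rating embeddings by concatenation, then map
    the fused vector to a rating with an MLP."""
    fused = user_high + item_high + user_rating_emb + item_rating_emb
    return mlp(fused, *params)

# illustrative 1-d embeddings and hand-picked MLP weights
params = (
    [[0.5, 0.5, 0.5, 0.5], [1.0, 0.0, 0.0, 0.0]],  # w1 (hidden x input)
    [0.0, 0.0],                                     # b1
    [1.0, 0.5],                                     # w2
    0.5,                                            # b2
)
rating = predict_rating([0.2], [0.4], [0.6], [0.8], params)
```

In the full model the embeddings come from the heterogeneous graph neural network and the MLP respectively, and `params` would be learned by gradient descent on the rating-prediction loss rather than fixed by hand.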