
Journal of Information Science: Latest Publications

Technology acceptance research: Meta-analysis
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-08-10 | DOI: 10.1177/01655515231191177
D. Marikyan, S. Papagiannidis, G. Stewart
Rapid digitalisation has resulted in a literature about technology acceptance that is ever increasing in size, naturally creating debates about the developments in the field and their implications. Given the size of the literature and the range of factors, theories and applications considered, this article reviewed the relevant literature using a meta-analytical approach. The objective of this review was twofold: (a) to provide a comprehensive analysis of the factors contributing to technology acceptance and investigate their effects, depending on theoretical underpinnings, and (b) to explore the conditions explaining the variance in the effects of predictors time-, application- and journal-wise. This review analysed data from 693 papers. A total of 21 independent predictors having differential effects on attitude, intention and use behaviour were found. The effects of the predictors were different depending on the theoretical frameworks they were related to. The analysis of the consistency of the role of the predictors suggested that there was no longitudinal change in their effect sizes. However, a significant variance was found when comparing predictors across research applications and the journals in which the papers were published. The analysis of publication bias demonstrated a tendency to publish studies with significant results, although no evidence was found of p-value manipulation.
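For readers unfamiliar with the mechanics behind such a review, the following is a minimal, illustrative sketch (not the authors' code) of how a random-effects pooled effect size can be computed from per-study correlations with the DerSimonian-Laird estimator; the study correlations and sample sizes are invented.

```python
import math

# Hypothetical per-study data: (correlation r between a predictor and intention, sample size n).
# These numbers are illustrative only, not taken from the reviewed papers.
studies = [(0.45, 120), (0.38, 300), (0.52, 85), (0.30, 210)]

# Fisher z-transform each correlation; its sampling variance is 1 / (n - 3).
z = [0.5 * math.log((1 + r) / (1 - r)) for r, _ in studies]
v = [1.0 / (n - 3) for _, n in studies]

# Fixed-effect weights and Cochran's Q heterogeneity statistic.
w = [1.0 / vi for vi in v]
z_fixed = sum(wi * zi for wi, zi in zip(w, z)) / sum(w)
q = sum(wi * (zi - z_fixed) ** 2 for wi, zi in zip(w, z))

# DerSimonian-Laird estimate of between-study variance tau^2.
df = len(studies) - 1
c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (q - df) / c)

# Random-effects pooled estimate, back-transformed to a correlation.
w_re = [1.0 / (vi + tau2) for vi in v]
z_re = sum(wi * zi for wi, zi in zip(w_re, z)) / sum(w_re)
r_pooled = math.tanh(z_re)
print(f"Pooled r = {r_pooled:.3f}, Q = {q:.2f}, tau^2 = {tau2:.4f}")
```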
Citations: 0
Tracing policy diffusion: Identifying main paths in policy citation networks
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-08-09 | DOI: 10.1177/01655515231189660
Zhichao Ba, Yao Tang, Xuetai Liu, Yikun Xia
Citation-based main path analysis (MPA) has been widely applied to identify developmental trajectories of science and technology, while rarely used to detect paths of policy diffusion. Compared with scientific publications and patents, policy documents show some distinct characteristics, such as citation relationships with different legal validity, which could be considered to improve the policy citation analysis. To this end, this study formally constructs a policy citation network based on a plethora of citing/cited links embedded in the textual content of policy documents and proposes a preference-adjusted main path analysis (PMPA) approach to track historical routes of policy diffusion. PMPA incorporates two kinds of policy citation preferences, including validity bias and time bias. An evidence analysis from China’s new energy policies (NEPs) is implemented to show the efficacy of the proposed approach. The results unveil that the preference-adjusted main path approach can capture more important policies and more informative main paths of policy diffusion than the original MPA. Moreover, our research can yield in-depth insight into the evolutionary process of policy diffusion and provide guidance for policy-makers and industry decision-makers to formulate practical policy-making.
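As a rough illustration of the underlying machinery, the sketch below runs plain search-path-count (SPC) main path analysis on a toy citation graph with networkx; it omits the validity- and time-preference adjustment that distinguishes PMPA, and the policy nodes are invented.

```python
import networkx as nx

# Toy policy citation network: an edge (a, b) means policy b cites policy a,
# so knowledge flows a -> b. Node names are invented for illustration.
G = nx.DiGraph([
    ("P2010", "P2013"), ("P2010", "P2014"), ("P2013", "P2016"),
    ("P2014", "P2016"), ("P2014", "P2017"), ("P2016", "P2019"),
    ("P2017", "P2019"), ("P2016", "P2020"),
])

order = list(nx.topological_sort(G))

# n_minus[v]: number of paths from any source (in-degree 0) to v.
n_minus = {v: 1 if G.in_degree(v) == 0 else 0 for v in G}
for v in order:
    for u in G.predecessors(v):
        n_minus[v] += n_minus[u]

# n_plus[v]: number of paths from v to any sink (out-degree 0).
n_plus = {v: 1 if G.out_degree(v) == 0 else 0 for v in G}
for v in reversed(order):
    for w in G.successors(v):
        n_plus[v] += n_plus[w]

# Search Path Count (SPC) of each edge: source-to-sink paths passing through it.
spc = {(u, v): n_minus[u] * n_plus[v] for u, v in G.edges}

# Greedy (local) main path: start from the source edge with the highest SPC
# and keep following the highest-SPC outgoing edge.
u, v = max(((u, v) for u, v in spc if G.in_degree(u) == 0), key=spc.get)
path = [u, v]
while G.out_degree(v) > 0:
    u, v = max(((v, w) for w in G.successors(v)), key=spc.get)
    path.append(v)
print("Main path:", " -> ".join(path), "| edge SPCs:", spc)
```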
Citations: 0
A transformer-based deep learning model for Persian moral sentiment analysis
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-08-02 | DOI: 10.1177/01655515231188344
Behnam Karami, F. Bakouie, S. Gharibzadeh
Moral expressions in online communications can have a serious impact on framing discussions and subsequent online behaviours. Despite research on extracting moral sentiment from English text, other low-resource languages, such as Persian, lack enough resources and research about this important topic. We address this issue using the Moral Foundation theory (MFT) as the theoretical moral psychology paradigm. We developed a Twitter data set of 8000 tweets that are manually annotated for moral foundations and also we established a baseline for computing moral sentiment from Persian text. We evaluate a plethora of state-of-the-art machine learning models, both rule-based and neural, including distributed dictionary representation (DDR), long short-term memory (LSTM) and bidirectional encoder representations from transformer (BERT). Our findings show that among different models, fine-tuning a pre-trained Persian BERT language model with a linear network as the classifier yields the best results. Furthermore, we analysed this model to find out which layer of the model contributes most to this superior accuracy. We also proposed an alternative transformer-based model that yields competitive results to the BERT model despite its lower size and faster inference time. The proposed model can be used as a tool for analysing moral sentiment and framing in Persian texts for downstream social and psychological studies. We also hope our work provides some resources for further enhancing the methods for computing moral sentiment in Persian text.
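The sketch below illustrates the general fine-tuning setup described here, using the Hugging Face transformers API with a multilingual BERT checkpoint as a stand-in for the Persian model; the label set, example tweets and hyperparameters are assumptions for illustration, not the paper's.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Moral-foundation labels assumed for illustration (the paper annotates tweets for MFT foundations).
LABELS = ["care", "fairness", "loyalty", "authority", "purity", "non-moral"]

# A multilingual checkpoint is used here for portability; the paper fine-tunes a Persian BERT.
MODEL_NAME = "bert-base-multilingual-cased"
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=len(LABELS))

# Tiny invented training set: (tweet text, label index). Real data would be the 8000-tweet corpus.
train_data = [("این کار ظالمانه است", 1), ("به بزرگترها احترام بگذارید", 3)]

def collate(batch):
    texts, labels = zip(*batch)
    enc = tokenizer(list(texts), padding=True, truncation=True, max_length=64, return_tensors="pt")
    enc["labels"] = torch.tensor(labels)
    return enc

loader = DataLoader(train_data, batch_size=2, shuffle=True, collate_fn=collate)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
for epoch in range(3):  # a real run would use more data and early stopping
    for batch in loader:
        out = model(**batch)          # linear classification head on top of the [CLS] representation
        out.loss.backward()
        optimizer.step()
        optimizer.zero_grad()

# Inference on a new tweet.
model.eval()
with torch.no_grad():
    enc = tokenizer("این تصمیم ناعادلانه بود", return_tensors="pt")
    pred = model(**enc).logits.argmax(dim=-1).item()
print("Predicted foundation:", LABELS[pred])
```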
Citations: 0
Assessing the credibility of COVID-19 vaccine mis/disinformation in online discussion.
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-08-01 | DOI: 10.1177/01655515211040653
Reijo Savolainen

This study examines how the credibility of the content of mis- or disinformation, as well as the believability of authors creating such information is assessed in online discussion. More specifically, the investigation was focused on the credibility of mis- or disinformation about COVID-19 vaccines. To this end, a sample of 1887 messages posted to a Reddit discussion group was scrutinised by means of qualitative content analysis. The findings indicate that in the assessment of the author's credibility, the most important criteria are his or her reputation, expertise and honesty in argumentation. In the judgement of the credibility of the content of mis/disinformation, objectivity of information and plausibility of arguments are highly important. The findings highlight that in the assessment of the credibility of mis/disinformation, the author's qualities such as poor reputation, incompetency and dishonesty are particularly significant because they trigger expectancies about how the information content created by the author is judged.

Citations: 13
Semantics-aware query expansion using pseudo-relevance feedback
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-22 | DOI: 10.1177/01655515231184831
Pankaj Singh, Plaban Kumar Bhowmick
In this article, a pseudo-relevance feedback (PRF)–based framework is presented for effective query expansion (QE). As candidate expansion terms, the proposed PRF framework considers the terms that are different morphological variants of the original query terms and are semantically close to them. This strategy of selecting expansion terms is expected to preserve the query intent after expansion. While judging the suitability of an expansion term with respect to a base query, two aspects of relation of the term with the query are considered. The first aspect probes to what extent the candidate term is semantically linked to the original query and the second one checks the extent to which the candidate term can supplement the base query terms. The semantic relationship between a query and expansion terms is modelled using bidirectional encoder representations from transformers (BERT). The degree of similarity is used to estimate the relative importance of the expansion terms with respect to the query. The quantified relative importance is used to assign weights of the expansion terms in the final query. Finally, the expansion terms are grouped into semantic clusters to strengthen the original query intent. A set of experiments was performed on three different Text REtrieval Conference (TREC) collections to experimentally validate the effectiveness of the proposed QE algorithm. The results show that the proposed QE approach yields competitive retrieval effectiveness over the existing state-of-the-art PRF methods in terms of the mean average precision (MAP) and precision P at position 10 (P@10).
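A minimal sketch of the core idea, assuming a generic sentence-embedding model in place of the paper's BERT setup: candidate terms drawn from pseudo-relevant documents are scored by embedding similarity to the query and weighted accordingly. The query, documents and top-k cut-off are invented.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# A small sentence-embedding model stands in for the paper's BERT-based similarity.
model = SentenceTransformer("all-MiniLM-L6-v2")

query = "query expansion pseudo relevance feedback"
top_feedback_docs = [   # pretend these are the top-ranked documents from a first-pass retrieval
    "pseudo relevance feedback expands queries using top ranked documents",
    "term weighting and query reformulation improve retrieval effectiveness",
]

# Candidate expansion terms: words from the feedback documents not already in the query.
query_terms = set(query.split())
candidates = sorted({w for d in top_feedback_docs for w in d.split()} - query_terms)

# Score candidates by cosine similarity between their embedding and the query embedding.
q_vec = model.encode([query], normalize_embeddings=True)[0]
c_vecs = model.encode(candidates, normalize_embeddings=True)
scores = c_vecs @ q_vec

# Keep the top-k most similar terms and weight them by their (normalised) similarity.
k = 5
top = np.argsort(-scores)[:k]
expansion = {candidates[i]: float(scores[i] / scores[top].sum()) for i in top}

expanded_query = query + " " + " ".join(expansion)
print("Expanded query:", expanded_query)
print("Expansion term weights:", expansion)
```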
Citations: 0
Recognising formula entailment using long short-term memory network
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-20 | DOI: 10.1177/01655515231184826
Amarnath Pathak, Partha Pakray
The article presents an approach to recognise formula entailment, which concerns finding entailment relationships between pairs of math formulae. As the current formula-similarity-detection approaches fail to account for broader relationships between pairs of math formulae, recognising formula entailment becomes paramount. To this end, a long short-term memory (LSTM) neural network using symbol-by-symbol attention for recognising formula entailment is implemented. However, owing to the unavailability of relevant training and validation corpora, the first and foremost step is to create a sufficiently large-sized symbol-level MATHENTAIL data set in an automated fashion. Depending on the extent of similarity between the corresponding symbol embeddings, the symbol pairs in the MATHENTAIL data set are assigned ‘entailment’ or ‘neutral’ labels. An improved symbol-to-vector (isymbol2vec) method generates mathematical symbols (in LaTeX) and their embeddings using the Wikipedia corpus of scientific documents and Continuous Bag of Words (CBOW) architecture. Eventually, the LSTM network, trained and validated using the MATHENTAIL data set, predicts formulae entailment for test formulae pairs with a reasonable accuracy of 62.2%.
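The sketch below shows the general shape of the isymbol2vec idea: CBOW embeddings (sg=0) are trained over tokenised LaTeX formulae with gensim, and a symbol pair is labelled by embedding similarity. The toy corpus, tokenisation and threshold are assumptions for illustration, not the paper's.

```python
import numpy as np
from gensim.models import Word2Vec

# Toy "corpus" of tokenised LaTeX formulae; a real corpus would come from Wikipedia scientific articles.
formulae = [
    ["\\int", "_", "a", "^", "b", "f", "(", "x", ")", "d", "x"],
    ["\\sum", "_", "i", "=", "1", "^", "n", "x", "_", "i"],
    ["\\frac", "{", "d", "y", "}", "{", "d", "x", "}"],
    ["\\int", "f", "(", "x", ")", "d", "x", "=", "F", "(", "x", ")"],
]

# CBOW architecture (sg=0), as in the isymbol2vec step described above.
model = Word2Vec(sentences=formulae, vector_size=50, window=3, min_count=1, sg=0, epochs=50)

def label_pair(sym_a, sym_b, threshold=0.5):
    """Assign an 'entailment'/'neutral' label to a symbol pair from embedding similarity.
    The threshold is an illustrative assumption, not the paper's value."""
    va, vb = model.wv[sym_a], model.wv[sym_b]
    sim = float(np.dot(va, vb) / (np.linalg.norm(va) * np.linalg.norm(vb)))
    return ("entailment" if sim >= threshold else "neutral"), sim

print(label_pair("\\int", "\\sum"))
print(label_pair("\\int", "\\frac"))
```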
Citations: 0
Internet of things adoption and use in academic libraries: A review and directions for future research
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-20 | DOI: 10.1177/01655515231188338
M. Asim, Muhammad Arif
This study aims to synthesise the findings of research on Internet of Things (IoTs) adoption and use in libraries. This systematic literature review is based on the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) method and comprises publications in five world-renowned databases. Libraries adopted IoTs to save time, enhance performance and efficiency, improve the quality of services, and ease collection accessibility. The study identified various IoTs-based practices, including auto-notification of circulation tasks, inventory management, tracing users’ data from virtual/physical cards, user tracking and self-guided virtual tours of the library. In adopting and using IoTs, libraries faced several challenges, such as security and privacy, cost, lack of standards and policy, the need for a highly integrated environment, and lack of management interest. The critical IoTs adoption and usage factors, as well as the various challenges identified, provide valuable insights for library professionals designing state-of-the-art smart-technology-driven services.
Citations: 0
Using R to develop a corpus of full-text journal articles
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-14 | DOI: 10.1177/01655515231171362
Billie Anderson, M. Bani-Yaghoub, Vagmi Kantheti, Scott Curtis
Over the past two decades, databases and the tools to access them in a simple manner have become increasingly available, allowing historical and modern-day topics to be merged and studied. Throughout the recent COVID-19 pandemic, for example, many researchers have reflected on whether any lessons learned from the Spanish flu pandemic of 1918 could have been helpful in the present pandemic. Most studies using text-mining applications rarely use full-text journal articles. This article provides a methodology used to develop a full-text journal article corpus using the R fulltext package. Using the proposed methodology, 2743 full-text journal articles were obtained. The aim of this article is to provide a methodology and supplementary codes for researchers to use the R fulltext package to curate a full-text journal corpus.
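The paper's workflow is built on the R fulltext package; as a rough Python analogue only (not the authors' code or the fulltext API), the sketch below queries the public Crossref REST API for journal-article metadata and records any full-text PDF links the records advertise. Query terms and row counts are illustrative.

```python
import requests

# Rough Python analogue of a corpus-building step; the paper itself uses the R `fulltext` package.
CROSSREF = "https://api.crossref.org/works"
params = {"query": "Spanish flu 1918 pandemic", "rows": 20, "filter": "type:journal-article"}

resp = requests.get(CROSSREF, params=params, timeout=30)
resp.raise_for_status()
items = resp.json()["message"]["items"]

corpus_index = []
for item in items:
    # Crossref records may advertise full-text links under the "link" field; availability
    # depends on the publisher, so many records will have none.
    links = [l["URL"] for l in item.get("link", []) if l.get("content-type") == "application/pdf"]
    corpus_index.append({
        "doi": item.get("DOI"),
        "title": (item.get("title") or [""])[0],
        "pdf_links": links,
    })

with_pdf = [r for r in corpus_index if r["pdf_links"]]
print(f"{len(corpus_index)} records retrieved, {len(with_pdf)} advertise a PDF link")
```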
Citations: 0
Modelling in engineering: A citation context analysis
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-10 | DOI: 10.1177/01655515231184833
G. Schweiger, Lynn Thiermeyer
Purely quantitative citation measures are widely used to evaluate research grants, to compare the output of researchers or to benchmark universities. The intuition that not all citations are the same, however, can be illustrated by two examples. First, studies have shown that erroneous or controversial papers have higher citation counts. Second, does a high-level citation in an introduction have the same impact as a reference to a paper that serves as a conceptual starting point? Companions to purely quantitative measures are the so-called citation context analyses, which aim to obtain a better understanding of the link between citing and cited work. In this article, we propose a classification scheme for citation context analysis in the field of modelling in engineering. The categories were defined based on an extensive literature review and input from experts in the field of modelling. We propose a detailed scheme with six categories (Perfunctory, Background Information, Comparing/Confirming, Critique/Refutation, Inspiring, Using/Expanding) and a simplified scheme with three categories (High-level, Critical Analysis, Extending) that can be used within automatic classification approaches. The results of manually classifying 129 randomly selected citations show that 87% of citations fall into the high-level category. This study confirms that critical citations are not common in written academic discourse, even though criticism is essential for scientific progress and knowledge construction.
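To show how the simplified three-category scheme (High-level, Critical Analysis, Extending) could feed an automatic classifier, here is a minimal sketch using TF-IDF features and logistic regression; the training sentences and labels are invented, and the paper does not prescribe this particular model.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented citation-context sentences labelled with the simplified three-category scheme.
train_texts = [
    "Several modelling approaches have been proposed in the literature [3].",
    "Background on co-simulation can be found in [7].",
    "Unlike [12], our formulation does not assume a linear load profile.",
    "The results reported in [5] could not be reproduced under our boundary conditions.",
    "We extend the framework of [9] with a stochastic occupancy model.",
    "Building on the solver introduced in [4], we add an adaptive time step.",
]
train_labels = [
    "High-level", "High-level",
    "Critical Analysis", "Critical Analysis",
    "Extending", "Extending",
]

# TF-IDF over unigrams and bigrams, then a linear classifier.
clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
clf.fit(train_texts, train_labels)

new_context = "In contrast to [8], we do not rely on measured weather data."
print(clf.predict([new_context])[0])
```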
Citations: 0
A cross-domain recommendation model by unified modelling high-order information and rating information
IF 2.4 | CAS Tier 4 (Management) | Q3 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2023-07-08 | DOI: 10.1177/01655515231182068
Ming Yi, Ming Liu, Cuicui Feng, Weihua Deng
Cross-domain recommendation models are proposed to enrich the knowledge in the target domain by taking advantage of the data in the auxiliary domain to mitigate sparsity and cold-start user problems. However, most of the existing cross-domain recommendation models are dependent on rating information of items, ignoring high-order information contained in the graph data structure. In this study, we develop a novel cross-domain recommendation model by unified modelling of high-order information and rating information to tackle the research gaps. Different from previous research work, we apply a heterogeneous graph neural network to extract high-order information among users, items and features; obtain high-order information embeddings of users and items; and then use a neural network to extract rating information and obtain user rating information embeddings by a non-linear mapping function MLP (Multilayer Perceptron). Moreover, high-order information embeddings and rating information embeddings are fused in a unified way to complete the final rating prediction, and the gradient descent method is adopted to learn the parameters of the model based on the loss function. Experiments conducted on two real-world data sets including 3,032,642 ratings from two experimental scenarios demonstrate that our model can effectively alleviate the problems of sparsity and cold-start users simultaneously, and significantly outperforms the baseline models using a variety of recommendation accuracy metrics.
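A compact sketch of the rating-prediction side of such a model, assuming placeholder ID embeddings in place of the heterogeneous-GNN output: rating-information embeddings are refined by an MLP, fused with the high-order embeddings, and the whole model is trained by gradient descent on a regression loss. All dimensions, data and fusion choices are invented for illustration.

```python
import torch
import torch.nn as nn

class FusionRecommender(nn.Module):
    """Sketch: fuse 'high-order' embeddings (stand-ins for a heterogeneous-GNN output)
    with MLP-refined rating-information embeddings, then predict a rating."""
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        # Placeholder high-order embeddings; in the paper these come from a heterogeneous GNN.
        self.user_high, self.item_high = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        # Rating-information embeddings, refined by a non-linear MLP.
        self.user_rate, self.item_rate = nn.Embedding(n_users, dim), nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.predict = nn.Sequential(nn.Linear(3 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, users, items):
        high = self.user_high(users) * self.item_high(items)                    # high-order interaction
        rate = self.mlp(torch.cat([self.user_rate(users), self.item_rate(items)], dim=-1))
        fused = torch.cat([high, rate, high * rate], dim=-1)                    # unified fusion
        return self.predict(fused).squeeze(-1)

# Toy training data: (user id, item id, rating); real data would be the cross-domain rating matrices.
users = torch.tensor([0, 1, 2, 0])
items = torch.tensor([1, 2, 0, 2])
ratings = torch.tensor([4.0, 3.0, 5.0, 2.0])

model = FusionRecommender(n_users=3, n_items=3)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(200):  # plain gradient descent on the rating-prediction loss
    opt.zero_grad()
    loss = loss_fn(model(users, items), ratings)
    loss.backward()
    opt.step()
print("final training loss:", float(loss))
```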
提出了跨领域推荐模型,通过利用辅助领域的数据来丰富目标领域的知识,以减轻稀疏性和冷启动用户问题。然而,现有的跨域推荐模型大多依赖于商品的评级信息,忽略了图数据结构中包含的高阶信息。在本研究中,我们通过对高阶信息和评级信息的统一建模,开发了一种新的跨领域推荐模型,以解决研究空白。与以往的研究工作不同,我们利用异构图神经网络来提取用户、项目和特征之间的高阶信息;获取用户和物品的高阶信息嵌入;然后利用神经网络提取评分信息,通过非线性映射函数MLP (Multilayer Perceptron)得到用户评分信息嵌入。将高阶信息嵌入与评级信息嵌入统一融合,完成最终评级预测,并采用基于损失函数的梯度下降法学习模型参数。在两个真实世界的数据集(包括来自两个实验场景的3,032,642个评分)上进行的实验表明,我们的模型可以有效地同时缓解稀疏性和冷启动用户的问题,并且使用各种推荐精度指标显着优于基线模型。
{"title":"A cross-domain recommendation model by unified modelling high-order information and rating information","authors":"Ming Yi, Ming Liu, Cuicui Feng, Weihua Deng","doi":"10.1177/01655515231182068","DOIUrl":"https://doi.org/10.1177/01655515231182068","url":null,"abstract":"Cross-domain recommendation models are proposed to enrich the knowledge in the target domain by taking advantage of the data in the auxiliary domain to mitigate sparsity and cold-start user problems. However, most of the existing cross-domain recommendation models are dependent on rating information of items, ignoring high-order information contained in the graph data structure. In this study, we develop a novel cross-domain recommendation model by unified modelling high-order information and rating information to tackle the research gaps. Different from previous research work, we apply heterogeneous graph neural network to extract high-order information among users, items and features; obtain high-order information embeddings of users and items; and then use neural network to extract rating information and obtain user rating information embeddings by a non-linear mapping function MLP (Multilayer Perceptron). Moreover, high-order information embeddings and rating information embeddings are fused in a unified way to complete the final rating prediction, and the gradient descent method is adopted to learn the parameters of the model based on the loss function. Experiments conducted on two real-world data sets including 3,032,642 ratings from two experimental scenarios demonstrate that our model can effectively alleviate the problems of sparsity and cold-start users simultaneously, and significantly outperforms the baseline models using a variety of recommendation accuracy metrics.","PeriodicalId":54796,"journal":{"name":"Journal of Information Science","volume":" ","pages":""},"PeriodicalIF":2.4,"publicationDate":"2023-07-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48228313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0