Communicating insights from data effectively requires design skills, technical knowledge, and experience. Data must be accurately represented with aesthetically pleasing visuals and engaging text to effectively communicate to the intended audience. Data storytelling has received much attention lately, but it does not yet have a theoretical and practical foundation in information science. A data story adds context, narrative, and structure to the visual representation of data, providing audiences with character, plot, and a holistic experience of narrative. This paper proposes a methodological approach to transform a data visualization into a data story based on the Data-Information-Knowledge-Wisdom (DIKW) pyramid and the S-DIKW Framework. Starting from the bottom of the pyramid, the proposed approach defines a strategy to represent insights extracted from data. Data is then turned into information by identifying character(s) facing a problem and adding textual and graphic content; information is turned into knowledge by organizing what happens as a plot. Finally, a call to wise action—always informed by cultural and community values—completes the storytelling transformation to create a data story. This article contributes to the theoretical understanding of data stories as emerging information forms, supporting richer understandings of a story as information in the information sciences.
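As a rough illustration of the staged transformation the abstract describes, the sketch below models the four layers as nested data structures. This is not the authors' implementation; every class and field name here is hypothetical and chosen only to mirror the data-to-information-to-knowledge-to-wisdom progression.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DataLayer:                      # bottom of the pyramid: the visualization itself
    insights: List[str]               # insights extracted from the data

@dataclass
class InformationLayer:               # data -> information: character(s) facing a problem
    data: DataLayer
    characters: List[str]
    problem: str
    annotations: List[str] = field(default_factory=list)  # added textual/graphic content

@dataclass
class KnowledgeLayer:                 # information -> knowledge: events organized as a plot
    information: InformationLayer
    plot: List[str]                   # ordered sequence of what happens

@dataclass
class DataStory:                      # knowledge -> wisdom: a call to wise action
    knowledge: KnowledgeLayer
    call_to_action: str               # informed by cultural and community values
```

Each layer wraps the one below it, so moving up the pyramid only ever adds narrative elements (characters, plot, call to action) on top of the underlying visualization data.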
{"title":"Using the S-DIKW framework to transform data visualization into data storytelling","authors":"Angelica Lo Duca, Kate McDowell","doi":"10.1002/asi.24973","DOIUrl":"10.1002/asi.24973","url":null,"abstract":"<p>Communicating insights from data effectively requires design skills, technical knowledge, and experience. Data must be accurately represented with aesthetically pleasing visuals and engaging text to effectively communicate to the intended audience. Data storytelling has received much attention lately, but as of yet, it does not have a theoretical and practical foundation in information science. A data story adds context, narrative, and structure to the visual representation of data, providing audiences with character, plot, and a holistic experience of narrative. This paper proposes a methodological approach to transform a data visualization into a data story based on the Data-Information-Knowledge-Wisdom (DIKW) pyramid and the S-DIKW Framework. Starting from the bottom of the pyramid, the proposed approach defines a strategy to represent insights extracted from data. Data is then turned into information by identifying character(s) facing a problem, adding textual and graphic content; information is turned into knowledge by organizing what happens as a plot. Finally, a call to wise action—always informed by cultural and community values—completes the storytelling transformation to create a data story. This article contributes to the theoretical understanding of data stories as emerging information forms, supporting richer understandings of a story as information in the information sciences.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 5","pages":"803-818"},"PeriodicalIF":4.3,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24973","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801665","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This review paper explores the evolution of discussions about “long-tail” scientific data in the scholarly literature. The “long-tail” concept, originally used to explain trends in digital consumer goods, was first applied to scientific data in 2007 to refer to a vast array of smaller, heterogeneous data collections that cumulatively represent a substantial portion of scientific knowledge. However, these datasets, often referred to as “long-tail data,” are frequently mismanaged or overlooked due to inadequate data management practices and institutional support. This paper examines the changing landscape of discussions about long-tail data over time, situated within broader ecosystems of research data management and the natural interplay between “big” and “small” data. The review also bridges discussions on data curation in Library & Information Science (LIS) and domain-specific contexts. Overall, it aims to provide a comprehensive understanding of the long-tail concept, its terminological diversity in the literature, and its utility for guiding effective data management, informing current and future information science research and practice.
{"title":"Evolution of the “long-tail” concept for scientific data: An Annual Review of Information Science and Technology (ARIST) paper","authors":"Gretchen R. Stahlman, Inna Kouper","doi":"10.1002/asi.24967","DOIUrl":"https://doi.org/10.1002/asi.24967","url":null,"abstract":"<p>This review paper explores the evolution of discussions about “long-tail” scientific data in the scholarly literature. The “long-tail” concept, originally used to explain trends in digital consumer goods, was first applied to scientific data in 2007 to refer to a vast array of smaller, heterogeneous data collections that cumulatively represent a substantial portion of scientific knowledge. However, these datasets, often referred to as “long-tail data,” are frequently mismanaged or overlooked due to inadequate data management practices and institutional support. This paper examines the changing landscape of discussions about long-tail data over time, situated within broader ecosystems of research data management and the natural interplay between “big” and “small” data. The review also bridges discussions on data curation in Library & Information Science (LIS) and domain-specific contexts, contributing to a more comprehensive understanding of the long-tail concept's utility for effective data management outcomes. The review aims to provide a more comprehensive understanding of this concept, its terminological diversity in the literature, and its utility for guiding data management, overall informing current and future information science research and practice.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"77 1","pages":"3-22"},"PeriodicalIF":4.3,"publicationDate":"2024-12-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146007447","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
When using retrieval-augmented generation (RAG) to handle multi-document question answering (MDQA) tasks, it is beneficial to decompose complex queries into multiple simpler ones to enhance retrieval results. However, previous strategies always employ a one-shot approach to question decomposition, overlooking the dependency problem among subquestions and failing to ensure that the derived subqueries are single-hop. To overcome this challenge, we introduce a novel framework called DSRC-QCS. Decompose-solve-renewal-cycle (DSRC) is an iterative multi-hop question processing module. The key idea of DSRC is to use a unique symbol to achieve hierarchical dependency management and to employ a cyclical process of question decomposition, solving, and renewal that continuously generates and resolves all single-hop subquestions. The query-chain selector (QCS) functions as a voting mechanism that effectively uses the reasoning process of DSRC to assess and select solutions. We compare DSRC-QCS against five RAG approaches across three datasets and three LLMs, and it demonstrates superior performance. Compared to the Direct Retrieval method, DSRC-QCS improves the average F1 score by 17.36% with Alpaca-7b, 10.83% with LLaMa2-Chat-7b, and 11.88% with GPT-3.5-Turbo. We also conduct ablation studies to validate the performance of both DSRC and QCS and explore factors influencing the effectiveness of DSRC. All prompts are included in the Appendix.
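To make the decompose-solve-renewal loop concrete, here is a minimal sketch in the spirit of the abstract above. It is not the authors' code: the `llm` and `retrieve` callables, the prompts, and the `[DONE]` sentinel are all illustrative assumptions standing in for the paper's "unique symbol" and model calls.

```python
from typing import Callable, List, Tuple

def dsrc(question: str,
         llm: Callable[[str], str],
         retrieve: Callable[[str], List[str]],
         max_rounds: int = 5) -> Tuple[str, List[str]]:
    """Iteratively decompose a multi-hop question, solve single-hop
    subquestions against retrieved passages, and renew the remaining question."""
    trace = []                                    # accumulated subquestion -> answer pairs
    current = question
    for _ in range(max_rounds):
        # Decompose: ask for the next single-hop subquestion, or a sentinel if none remains.
        sub = llm(f"Give the next single-hop subquestion needed to answer: {current}. "
                  f"Reply with [DONE] if none remains.")
        if "[DONE]" in sub:
            break
        # Solve: answer the subquestion using only retrieved passages.
        passages = retrieve(sub)
        answer = llm(f"Answer '{sub}' using only:\n" + "\n".join(passages))
        trace.append(f"{sub} -> {answer}")
        # Renew: rewrite the remaining question with the new answer substituted in.
        current = llm(f"Rewrite '{current}' given that '{sub}' is answered by '{answer}'.")
    final = llm(f"Answer the original question: {question}\nKnown facts:\n" + "\n".join(trace))
    return final, trace
```

A QCS-style voter could then run this loop several times and select the answer whose reasoning trace is most consistent with the retrieved evidence, roughly as the abstract describes.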
{"title":"Beyond decomposition: Hierarchical dependency management in multi-document question answering","authors":"Xiaoyan Zheng, Zhi Li, Qianglong Chen, Yin Zhang","doi":"10.1002/asi.24971","DOIUrl":"10.1002/asi.24971","url":null,"abstract":"<p>When using retrieval-augmented generation (RAG) to handle multi-document question answering (MDQA) tasks, it is beneficial to decompose complex queries into multiple simpler ones to enhance retrieval results. However, previous strategies always employ a one-shot approach of question decomposition, overlooking subquestions dependency problem and failing to ensure that the derived subqueries are single-hop. To overcome this challenge, we introduce a novel framework called DSRC-QCS. Decompose-solve-renewal-cycle (DSRC) is an iterative multi-hop question processing module. The key idea of DSRC involves using a unique symbol to achieve hierarchical dependency management and employing a cyclical process of question decomposition, solving, and renewal to continuously generate and resolve all single-hop subquestions. Query-chain selector (QCS) functions as a voting mechanism that effectively utilizes the reasoning process of DSRC to assess and select solutions. We compare DSRC-QCS against five RAG approaches across three datasets and three LLMs. DSRC-QCS demonstrates superior performance. Compared to the Direct Retrieval method, DSRC-QCS improves the average F1 score by 17.36% with Alpaca-7b, 10.83% with LLaMa2-Chat-7b, and 11.88% with GPT-3.5-Turbo. We also conduct ablation studies to validate the performance of both DSRC and QCS and explore factors influencing the effectiveness of DSRC. We have included all prompts in the Appendix.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 5","pages":"770-789"},"PeriodicalIF":4.3,"publicationDate":"2024-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801466","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the widespread use of algorithms and artificial intelligence (AI) technologies, understanding the process of human–algorithm interaction becomes increasingly crucial. From the human perspective, algorithmic awareness is recognized as a significant factor influencing how users evaluate algorithms and engage with them. A formative study first identified four dimensions of algorithmic awareness: conceptions awareness (AC), data awareness (AD), functions awareness (AF), and risks awareness (AR). Subsequently, we implemented a heuristic intervention and collected data on users' algorithmic awareness and FAT (fairness, accountability, and transparency) evaluation in both pre-test and post-test stages (N = 622). We verified the dynamics of algorithmic awareness and FAT evaluation through fuzzy clustering and identified three patterns of FAT evaluation change: “Stable high rating pattern,” “Variable medium rating pattern,” and “Unstable low rating pattern.” Using the clustering results and FAT evaluation scores, we trained classification models to predict the different dimensions of algorithmic awareness with several machine learning techniques: Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and XGBoost (XGB). Experimental results show that the SVM model predicts the four dimensions of algorithmic awareness with the best performance and interpretability, achieving F1 scores of 0.6377, 0.6780, 0.6747, and 0.75. These findings hold great potential for informing human-centered algorithmic practices and HCI design.
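For readers unfamiliar with this kind of model comparison, the sketch below shows how the five classifiers named in the abstract could be benchmarked with cross-validated F1 scores. The random placeholder data, the binary label for a single awareness dimension, and all hyperparameters are assumptions for illustration, not the study's dataset or settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from xgboost import XGBClassifier

# Placeholder data: 622 respondents, 20 survey-derived features, and a binary
# label for one awareness dimension (e.g., high vs. low risks awareness).
rng = np.random.default_rng(0)
X = rng.normal(size=(622, 20))
y = rng.integers(0, 2, size=622)

models = {
    "LR": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "LDA": LinearDiscriminantAnalysis(),
    "XGB": XGBClassifier(eval_metric="logloss"),
}

for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)   # scale features, then classify
    f1 = cross_val_score(pipe, X, y, cv=5, scoring="f1").mean()
    print(f"{name}: mean F1 = {f1:.4f}")
```

With real survey features and labels, running this loop once per awareness dimension would reproduce the kind of per-dimension comparison the abstract reports.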
{"title":"Dynamic algorithmic awareness based on FAT evaluation: Heuristic intervention and multidimensional prediction","authors":"Jing Liu, Dan Wu, Guoye Sun, Yuyang Deng","doi":"10.1002/asi.24969","DOIUrl":"10.1002/asi.24969","url":null,"abstract":"<p>As the widespread use of algorithms and artificial intelligence (AI) technologies, understanding the interaction process of human–algorithm interaction becomes increasingly crucial. From the human perspective, algorithmic awareness is recognized as a significant factor influencing how users evaluate algorithms and engage with them. In this study, a formative study identified four dimensions of algorithmic awareness: conceptions awareness (AC), data awareness (AD), functions awareness (AF), and risks awareness (AR). Subsequently, we implemented a heuristic intervention and collected data on users' algorithmic awareness and FAT (fairness, accountability, and transparency) evaluation in both pre-test and post-test stages (<i>N</i> = 622). We verified the dynamics of algorithmic awareness and FAT evaluation through fuzzy clustering and identified three patterns of FAT evaluation changes: “Stable high rating pattern,” “Variable medium rating pattern,” and “Unstable low rating pattern.” Using the clustering results and FAT evaluation scores, we trained classification models to predict different dimensions of algorithmic awareness by applying different machine learning techniques, namely Logistic Regression (LR), Random Forest (RF), Support Vector Machine (SVM), Linear Discriminant Analysis (LDA), and XGBoost (XGB). Comparatively, experimental results show that the SVM algorithm accomplishes the task of predicting the four dimensions of algorithmic awareness with better results and interpretability. Its F1 scores are 0.6377, 0.6780, 0.6747, and 0.75. These findings hold great potential for informing human-centered algorithmic practices and HCI design.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 4","pages":"718-739"},"PeriodicalIF":4.3,"publicationDate":"2024-12-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143622598","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Carolina Pradier, Diego Kozlowski, Natsumi S. Shokida, Vincent Larivière
The Latin-American scientific community has achieved significant progress towards gender parity, with nearly equal representation of women and men scientists. Nevertheless, women continue to be underrepresented in scholarly communication. Throughout the 20th century, Latin America established its academic circuit, focusing on research topics of regional significance. Through an analysis of scientific publications, this article explores the relationship between gender inequalities in science and the integration of Latin-American researchers into the regional and global academic circuits between 1993 and 2022. We find that women are more likely to engage in the regional circuit, while men are more active within the global circuit. This trend is attributed to a thematic alignment between women's research interests and issues specific to Latin America. Furthermore, our results reveal that the mechanisms contributing to gender differences in symbolic capital accumulation vary between circuits. Women's work achieves equal or greater recognition compared to men's within the regional circuit, but generally garners less attention in the global circuit. Our findings suggest that policies aimed at strengthening the regional academic circuit would encourage scientists to address locally relevant topics while simultaneously fostering gender equality in science.
{"title":"Science for whom? The influence of the regional academic circuit on gender inequalities in Latin America","authors":"Carolina Pradier, Diego Kozlowski, Natsumi S. Shokida, Vincent Larivière","doi":"10.1002/asi.24972","DOIUrl":"10.1002/asi.24972","url":null,"abstract":"<p>The Latin-American scientific community has achieved significant progress towards gender parity, with nearly equal representation of women and men scientists. Nevertheless, women continue to be underrepresented in scholarly communication. Throughout the 20th century, Latin America established its academic circuit, focusing on research topics of regional significance. Through an analysis of scientific publications, this article explores the relationship between gender inequalities in science and the integration of Latin-American researchers into the regional and global academic circuits between 1993 and 2022. We find that women are more likely to engage in the regional circuit, while men are more active within the global circuit. This trend is attributed to a thematic alignment between women's research interests and issues specific to Latin America. Furthermore, our results reveal that the mechanisms contributing to gender differences in symbolic capital accumulation vary between circuits. Women's work achieves equal or greater recognition compared to men's within the regional circuit, but generally garners less attention in the global circuit. Our findings suggest that policies aimed at strengthening the regional academic circuit would encourage scientists to address locally relevant topics while simultaneously fostering gender equality in science.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 5","pages":"790-802"},"PeriodicalIF":4.3,"publicationDate":"2024-11-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24972","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sohail Ahmed Khan, Laurence Dierickx, Jan-Gunnar Furuly, Henrik Brattli Vold, Rano Tahseen, Carl-Gustav Linden, Duc-Tien Dang-Nguyen
This paper investigates the use of multimedia verification, in particular computational tools and open-source intelligence (OSINT) methods, for verifying online multimedia content in the context of the ongoing wars in Ukraine and Gaza. Our study examines the workflows and tools used by several fact-checkers and journalists working at Faktisk, a Norwegian fact-checking organization. The study showcases the effectiveness of diverse resources, including AI tools, geolocation tools, internet archives, and social media monitoring platforms, in enabling journalists and fact-checkers to efficiently process and corroborate evidence, ensuring the dissemination of accurate information. This research provides an in-depth analysis of the role of computational tools and OSINT methods in multimedia verification. It also underscores the potential of currently available technology and highlights its limitations, while providing guidance for the future development of digital multimedia verification tools and frameworks.
{"title":"Debunking war information disorder: A case study in assessing the use of multimedia verification tools","authors":"Sohail Ahmed Khan, Laurence Dierickx, Jan-Gunnar Furuly, Henrik Brattli Vold, Rano Tahseen, Carl-Gustav Linden, Duc-Tien Dang-Nguyen","doi":"10.1002/asi.24970","DOIUrl":"10.1002/asi.24970","url":null,"abstract":"<p>This paper investigates the use of multimedia verification, in particular, computational tools and Open-source Intelligence (OSINT) methods, for verifying online multimedia content in the context of the ongoing wars in Ukraine and Gaza. Our study examines the workflows and tools used by several fact-checkers and journalists working at Faktisk, a Norwegian fact-checking organization. Our study showcases the effectiveness of diverse resources, including AI tools, geolocation tools, internet archives, and social media monitoring platforms, in enabling journalists and fact-checkers to efficiently process and corroborate evidence, ensuring the dissemination of accurate information. This research provides an in-depth analysis of the role of computational tools and OSINT methods for multimedia verification. It also underscores the potentials of currently available technology, and highlights its limitations while providing guidance for future development of digital multimedia verification tools and frameworks.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 5","pages":"752-769"},"PeriodicalIF":4.3,"publicationDate":"2024-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24970","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143801360","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Irene V. Pasquetto, Amina A. Abdu, Natascha Chtena
In this paper, we examine the role digital curation practices and practitioners played in facilitating open science (OS) initiatives amid the COVID-19 pandemic. In Summer 2023, we conducted a content analysis of available information regarding 50 OS initiatives that emerged—or substantially shifted their focus—between 2020 and 2022 to address COVID-19 related challenges. Despite growing recognition of the value of digital curation for the organization, dissemination, and preservation of scientific knowledge, our study reveals that digital curatorial work often remains invisible in pandemic OS initiatives. In particular, we find that, even among those initiatives that greatly invested in digital curation work, digital curation is seldom mentioned in mission statements, and little is known about the rationales behind curatorial choices and the individuals responsible for the implementation of curatorial strategies. Given the important yet persistent invisibility of digital curatorial work, we propose a shift in how we conceptualize digital curation from a practice that merely “adds value” to research outputs to a practice of knowledge production. We conclude with reflections on how iSchools can lead in professionalizing the field and offer suggestions for initial steps in that direction.
{"title":"Essential work, invisible workers: The role of digital curation in COVID-19 Open Science","authors":"Irene V. Pasquetto, Amina A. Abdu, Natascha Chtena","doi":"10.1002/asi.24965","DOIUrl":"10.1002/asi.24965","url":null,"abstract":"<p>In this paper, we examine the role digital curation practices and practitioners played in facilitating open science (OS) initiatives amid the COVID-19 pandemic. In Summer 2023, we conducted a content analysis of available information regarding 50 OS initiatives that emerged—or substantially shifted their focus—between 2020 and 2022 to address COVID-19 related challenges. Despite growing recognition of the value of digital curation for the organization, dissemination, and preservation of scientific knowledge, our study reveals that digital curatorial work often remains invisible in pandemic OS initiatives. In particular, we find that, even among those initiatives that greatly invested in digital curation work, digital curation is seldom mentioned in mission statements, and little is known about the rationales behind curatorial choices and the individuals responsible for the implementation of curatorial strategies. Given the important yet persistent invisibility of digital curatorial work, we propose a shift in how we conceptualize digital curation from a practice that merely “adds value” to research outputs to a practice of knowledge production. We conclude with reflections on how iSchools can lead in professionalizing the field and offer suggestions for initial steps in that direction.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 4","pages":"703-717"},"PeriodicalIF":4.3,"publicationDate":"2024-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24965","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143622559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information avoidance has long been in the shadow of information seeking. Variously seen as undesired, maladaptive, or even pathological, information avoidance has lacked the sustained attention and conceptualization that has been provided to other information practices. It is also, perhaps uniquely among information practices, often invoked to blame or censure those who engage in it. However, closer examination of information avoidance reveals nuanced and complex patterns of interactions with information, ones that often have positive and beneficial outcomes. We challenge the simplistic tenor of this conversation through this critical conceptual review of information avoidance. Starting from an examination of how information avoidance has been treated within information science and related disciplines, we then draw upon the various terms that have been used to describe a lack of engagement with information to establish seven core characteristics of the concept. We subsequently use this analysis to establish our definition of information avoidance as practices that moderate interaction with information by reducing the intensity of information, restricting control over information, and/or excluding information based on perceived properties. We consider the implications of this definition and its view of information avoidance as a significant information practice on information research.
{"title":"Information avoidance: A critical conceptual review. An Annual Review of Information Science and Technology (ARIST) paper","authors":"Alison Hicks, Pamela McKenzie, Jenny Bronstein, Jette Seiden Hyldegård, Ian Ruthven, Gunilla Widén","doi":"10.1002/asi.24968","DOIUrl":"10.1002/asi.24968","url":null,"abstract":"<p>Information avoidance has long been in the shadow of information seeking. Variously seen as undesired, maladaptive, or even pathological, information avoidance has lacked the sustained attention and conceptualization that has been provided to other information practices. It is also, perhaps uniquely among information practices, often invoked to blame or censure those who engage in it. However, closer examination of information avoidance reveals nuanced and complex patterns of interactions with information, ones that often have positive and beneficial outcomes. We challenge the simplistic tenor of this conversation through this critical conceptual review of information avoidance. Starting from an examination of how information avoidance has been treated within information science and related disciplines, we then draw upon the various terms that have been used to describe a lack of engagement with information to establish seven core characteristics of the concept. We subsequently use this analysis to establish our definition of information avoidance as practices that moderate interaction with information by reducing the intensity of information, restricting control over information, and/or excluding information based on perceived properties. We consider the implications of this definition and its view of information avoidance as a significant information practice on information research.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 1","pages":"326-346"},"PeriodicalIF":4.3,"publicationDate":"2024-11-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24968","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143117897","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In today's linguistically diverse world, managing personal information across multiple languages presents a challenge. This study engaged 16 multilingual participants to explore their user experience in the context of multilingual personal information management (MPIM), with a focus on inclusivity, universality, and equity. Addressing two main questions, the study explores the challenges users face on digital platforms in MPIM contexts and their ideal platform features. Findings highlight key issues in MPIM platform design, including unsupported languages and integration of visual aesthetics. We also identify user preferences for ideal platform features, such as language flexibility and efficient information retrieval. The study suggests the need for more inclusive, universal, and equitable platform designs that cater to the specific requirements of multilingual users. Ultimately, this study underscores the critical need for improved MPIM support and emphasizes the significance of continued exploration in this area, establishing it as a vital field of future research.
{"title":"“I wish I could use any language as it comes to mind”: User experience in digital platforms in the context of multilingual personal information management","authors":"Lilach Alon, Maja Krtalić","doi":"10.1002/asi.24964","DOIUrl":"10.1002/asi.24964","url":null,"abstract":"<p>In today's linguistically diverse world, managing personal information across multiple languages presents a challenge. This study engaged 16 multilingual participants to explore their user experience in the context of multilingual personal information management (MPIM), with a focus on inclusivity, universality, and equity. Addressing two main questions, the study explores the challenges users face on digital platforms in MPIM contexts and their ideal platform features. Findings highlight key issues in MPIM platform design, including unsupported languages and integration of visual aesthetics. We also identify user preferences for ideal platform features, such as language flexibility and efficient information retrieval. The study suggests the need for more inclusive, universal, and equitable platform designs that cater to the specific requirements of multilingual users. Ultimately, this study underscores the critical need for improved MPIM support and emphasizes the significance of continued exploration in this area, establishing it as a vital field of future research.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 4","pages":"686-702"},"PeriodicalIF":4.3,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143622279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ChatGPT and other large language models (LLMs) have been successful at natural and computer language processing tasks with varying degrees of complexity. This brief communication summarizes the lessons learned from a series of investigations into ChatGPT's use for the complex text analysis task of research quality evaluation. In summary, ChatGPT is very good at understanding and carrying out complex text processing tasks, in the sense of producing plausible responses with minimal input from the researcher. Nevertheless, its outputs require systematic testing to assess their value because they can be misleading. In contrast to simple tasks, the outputs from complex tasks are highly varied, and better results can be obtained by repeating the prompts multiple times in different sessions and averaging the ChatGPT outputs. Varying ChatGPT's configuration parameters from their defaults does not seem to be useful, except for the length of the output requested.
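Below is a minimal sketch of the repeat-and-average strategy described above, using the OpenAI Python client. The rating prompt, model name, score-parsing rule, and repeat count are illustrative assumptions, not the author's protocol.

```python
import re
from statistics import mean
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def score_once(text: str, model: str = "gpt-3.5-turbo") -> float:
    """Ask the model for a single quality score (illustrative prompt)."""
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user",
                   "content": "Rate the research quality of the following abstract "
                              "on a 1-4 scale. Reply with the number only.\n\n" + text}],
    )
    reply = response.choices[0].message.content
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else float("nan")

def score_repeated(text: str, repeats: int = 10) -> float:
    """Repeat the prompt in separate requests and average the scores,
    as the brief communication suggests for complex evaluation tasks."""
    scores = [score_once(text) for _ in range(repeats)]
    return mean(s for s in scores if s == s)  # drop NaN parse failures
```

Because each API request is stateless, repeated calls roughly approximate the "different sessions" recommended in the text, and the averaged score smooths out the run-to-run variability it warns about.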
{"title":"ChatGPT for complex text evaluation tasks","authors":"Mike Thelwall","doi":"10.1002/asi.24966","DOIUrl":"10.1002/asi.24966","url":null,"abstract":"<p>ChatGPT and other large language models (LLMs) have been successful at natural and computer language processing tasks with varying degrees of complexity. This brief communication summarizes the lessons learned from a series of investigations into its use for the complex text analysis task of research quality evaluation. In summary, ChatGPT is very good at understanding and carrying out complex text processing tasks in the sense of producing plausible responses with minimum input from the researcher. Nevertheless, its outputs require systematic testing to assess their value because they can be misleading. In contrast to simple tasks, the outputs from complex tasks are highly varied and better results can be obtained by repeating the prompts multiple times in different sessions and averaging the ChatGPT outputs. Varying ChatGPT's configuration parameters from their defaults does not seem to be useful, except for the length of the output requested.</p>","PeriodicalId":48810,"journal":{"name":"Journal of the Association for Information Science and Technology","volume":"76 4","pages":"645-648"},"PeriodicalIF":4.3,"publicationDate":"2024-11-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/asi.24966","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143622691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}