Pub Date: 2021-07-03. DOI: 10.1080/13614568.2021.1950392
Bunty Avieson, F. DiLauro
"Special issue on the worlds of Wikipedia". New Review of Hypermedia and Multimedia, 27(1), 205-206.
Pub Date: 2021-05-11. DOI: 10.1080/13614568.2021.1889693
Stacey Mason, Mark Bernstein
ABSTRACT
Links are the most important new punctuation mark since the invention of the comma, but it has been years since the last in-depth discussions of link poetics. Taking inspiration from Raymond Queneau's Exercices De Style, we explore the poetics of contemporary link usage by offering exercises in which the same piece of text is divided and linked in different ways. We present three different exercises—varying the division of a text into lexia, varying links among lexia, and varying links within lexia—while pointing toward potential aesthetic considerations of each variation. Our exercises are intended descriptively, not prescriptively, as a conversational starting point for analysis and as a compendium of useful techniques upon which artists might build.
"On links: exercises in style". New Review of Hypermedia and Multimedia.
Pub Date: 2021-04-10. DOI: 10.1080/13614568.2021.1900924
Liam Wyatt
ABSTRACT Wikipedia is by definition an encyclopedia, and the universal scope and availability it promises are ideals in the pursuit of worldwide access to information. The history of literary production is equally the history of censorship, knowledge suppression, preservation, and material circulation. While widely accessed online sources might appear to have moved beyond these issues, they are in fact part of this complex balance between freedom and restriction. It is therefore useful to consider Wikipedia in terms other than as a website: as a library, as a dictionary, as an archive, as a book. In this light, we see that Wikipedia has many precedents in the history of knowledge dissemination and preservation, precedents as diverse as the Library of Alexandria, the Oxford English Dictionary, or the Bible. Wikipedia differs sharply from what has gone before in any one field, yet closely resembles what has happened in different aspects of many fields. This paper discusses how the idea of "free" relates to the production and dissemination of knowledge by examining the methods through which knowledge has historically been curtailed: copyright, censorship, destruction, price, and language. Wikipedia is the latest in a long line of defenders of the ideal of free knowledge.
"Gratis & Libre: Wikipedia's role in free and open history production and dissemination". New Review of Hypermedia and Multimedia, 27(1), 260-274.
Pub Date: 2021-04-03. DOI: 10.1080/13614568.2021.1943283
Claus Atzenbeck, J. Rubart, D. Millard
Many hypertext publications mention Vannevar Bush's Memex (Bush, 1945) as one of the original ideas of hypertext. Memex is an acronym for Memory Extender. One of its core features is to store "trails" of thought persistently across documents, so that a user may follow them at a later point in time. Bush only described Memex; he never built it physically. This was the time before the rise of digital computers, and Bush's machine was a mechanical device built around documents stored on microfilm. Bush's ideas were taken up again in the 1960s by hypertext pioneers such as Douglas Engelbart, Theodor Nelson, and Andries van Dam. Computers, although expensive, were already available at that time, making hypertext software systems possible. This was a necessary prerequisite for further developments in the field. In fact, Nelson, who coined the term hypertext, recognised the necessity of such systems: "Let me introduce the word 'hypertext' to mean a body of written or pictorial material interconnected in such a complex way that it could not conveniently be presented or represented on paper." (Nelson, 1965) At that point in time, the focus in the field was primarily on nodes interconnected by links. Discussions took place mainly among academics; industry was not yet broadly interested. The situation changed with the rise of personal computers, which were affordable for ordinary people and organisations, and by the 1980s several hypertext applications had been developed by academics and software companies. It was a time of many competing hypertext approaches.
For example, Eastgate Systems released Storyspace (Bernstein, 2002; Joyce, 1991), a hypertext system that offers a 2D space for writing hypertext fiction; Brown University developed Intermedia with the promise to provide link creation mechanisms that would be as easy as copy & paste (Meyrowitz, 1986, 1989); and the hypertext system Guide (Brown, 1987) was one of the first cross-platform hypertext applications that ran on Macintosh and Windows PCs. There were many other
"Special issue of HT'19 selected papers". New Review of Hypermedia and Multimedia, 27(1), 1-5.
Pub Date: 2021-04-03. DOI: 10.1080/13614568.2021.1942237
Claus Atzenbeck, Peter J. Nürnberg, Daniel Roßner
ABSTRACT Historically, there has been a tendency to consider hypertext as a type of system, perhaps characterised by the provision of links or other structure to users. In this article, we consider hypertext as a method of inquiry, a way of viewing arbitrary systems. In this view, what are traditionally called "navigational hypertext systems" might be considered information retrieval systems. This opens the hypertext field to various other types of systems that traditionally would not be considered part of the field. The change of view enables a deeper fusion of human and machine. In particular, today's AI-based, intelligent systems create a demand for synthesising automation (on the machine's side) and augmentation (on the user's side). This article is not about researching AI systems; it is about extending the view of hypertext systems to synthesise augmentation and automation. We specifically apply this view to intelligent systems, asking how hypertext can act as a common medium between human and machine, particularly for knowledge-intensive tasks. We propose spatial hypertext as a medium that enables users to create cognitive maps. Along these lines, we provide examples from multiple projects and examine how these applications can be productive.
"Synthesising augmentation and automation". New Review of Hypermedia and Multimedia, 27(1), 177-203.
Pub Date: 2021-04-01. DOI: 10.1080/13614568.2021.1906955
Samuel Brooker
ABSTRACT Hypertext has been described as embodying Roland Barthes' ideal text. This paper considers that association, and the relationship of each to literary theory's historical privileging of authorial intention over reader interpretation. First, it outlines the rise and fall of authorial intention in literary theory, culminating in Roland Barthes' 1967 essay The Death of the Author. Second, it challenges the relationship between anti-intentionalism and hypertext in three ways: by exploring hypertext as a dialectical situation, which places the reader in dialogue with the author; by challenging Barthes' galaxy of signifiers as an embodiment of links; and finally, by establishing a disciplinary emphasis on hermeneutics as intrinsically readerly in nature. The paper concludes by considering whether an intentionalist approach might in fact be the best fit for hypertext fiction.
"Proposing, disposing, proving: Barthes, intentionalism, and hypertext literary fiction". New Review of Hypermedia and Multimedia, 27(1), 6-28.
Pub Date: 2021-04-01. DOI: 10.1080/13614568.2021.1900925
Robert E. Cummings
ABSTRACT This article defines the concept of open recognition and places its development within the context of the development of Wikipedia. The potential impact of open recognition on higher education is explored. This article defines open recognition as consisting of three elements: a philosophy, a framework, and a practice. Open recognition has the potential to fundamentally alter higher education by lowering the costs of reporting learner knowledge, skills, and abilities, while broadening the scope of recognition to include informal recognition. Open recognition and Wikipedia share common features and can both be categorised as open knowledge movements. However, in order to succeed as a robust network, Wikipedia had to overcome scepticism and public distrust around reporting accurate and relevant knowledge, partly by making its writing around knowledge formation visible. This article observes how Wikipedia overcame these obstacles and demonstrates how a fully mature and robust open recognition framework can create more durable experiences to connect learners to employers and the public.
"Wikipedia and open recognition: writing the future of work". New Review of Hypermedia and Multimedia, 27(1), 229-244.
Pub Date: 2021-02-28. DOI: 10.1080/13614568.2021.1889692
Isaac Alpizar Chacon, Sergey Sosnovsky
ABSTRACT Textbooks are educational documents created, structured and formatted by domain experts with the primary purpose of explaining the knowledge in the domain to a novice. Authors use their understanding of the domain when structuring and formatting the content of a textbook to facilitate this explanation. As a result, the formatting and structural elements of textbooks carry elements of domain knowledge implicitly encoded by their authors. Our paper presents an extensible approach towards automated extraction of knowledge models from textbooks and enrichment of their content with additional links (both internal and external). The textbooks themselves essentially become hypertext documents in which individual pages are annotated with important concepts in the domain. The evaluation experiments examine several aspects and stages of the approach, including the accuracy of model extraction, the pragmatic quality of extracted models using one of their possible applications (semantic linking of textbooks in the same domain), the accuracy of linking models to external knowledge sources, and the effect of integrating multiple textbooks from the same domain. The results indicate high accuracy of model extraction on the symbolic, syntactic and structural levels across textbooks and domains, and demonstrate the added value of the extracted models on the semantic level.
"Knowledge models from PDF textbooks". New Review of Hypermedia and Multimedia, 27(1), 128-176.
Pub Date: 2021-02-28. DOI: 10.1080/13614568.2021.1889690
J. Wobbrock, Lara Hattatoglu, Anya K. Hsu, Marijn A. Burger, Michael J. Magee
ABSTRACT Credibility judgments of online news are affected greatly by perceived expertise and trustworthiness, but users encounter an article’s visual appearance before its content, yet visual appearance has not been studied in isolation. We conducted two studies of news article visual appearance. The first was with 31 undergraduates who rated the credibility of synthetic news-like articles containing only “lorem ipsum” text, indistinct videos and images, non-functional hyperlinks, and various fonts. The second study was with 30 different university students who rated the credibility of news articles from popular web outlets, half credible and half not. The articles were presented at 5600 words per minute, or 20 times faster than typical reading speeds, enabling only judgments of appearance, not substance. Findings show that credibility is affected by article length, image count and density, and font face and size. These factors interact to yield differential effects on perceived credibility. Articles that struck a balance among factors were rated most credible, giving rise to the notion of a “Goldilocks zone” where credibility is highest. Interviews from both studies likewise revealed that perceived credibility was highest for articles that struck a balance among factors. This work has implications for visual information design, especially for online news.
"The Goldilocks zone: young adults' credibility perceptions of online news articles based on visual appearance". New Review of Hypermedia and Multimedia, 27(1), 51-96.
Pub Date: 2021-02-28. DOI: 10.1080/13614568.2021.1889691
Jakub Simko, Patrik Racsko, M. Tomlein, Martina Hanakova, M. Bieliková
ABSTRACT The online spreading of fake news is a major issue threatening entire societies. Much of this spreading is enabled by new media formats, namely social networks and online media sites. Researchers and practitioners have been trying to answer this by characterising fake news and devising automated methods for detecting it. The detection methods have so far had only limited success, mostly due to the complexity of news content and context and a lack of properly annotated datasets. One possible way to boost the efficiency of automated misinformation detection methods is to imitate the detection work of humans. It is also important to understand the news consumption behaviour of online users. In this paper, we present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake. In a second run, we asked the participants to decide on the truthfulness of these articles. We also describe a follow-up qualitative study with a similar scenario, this time with seven expert fake news annotators. We present a description of both studies, the characteristics of the resulting dataset (which we hereby publish), and several findings.
"A study of fake news reading and annotating in social media context". New Review of Hypermedia and Multimedia, 27(1), 97-127.