This paper starts with an introductory essay stating the issues and discussing the notion of metafiction. It then continues with an online hypertext narrative that demonstrates the interweaving of story and meta-story. The hypertext attempts to show in action how seemingly unified narratives and narrative voices are surrounded and influenced by other voices and meta-stories. No narrative is unmediated, and no narrative voice is alone. The hypertext concludes with some musings, again with counterpoint voices, on the complexities of narrative reading and writing. Throughout, the text comments on issues in the reading and writing of hypertext narratives.
{"title":"Story/story","authors":"D. Kolb","doi":"10.1145/2309996.2310013","DOIUrl":"https://doi.org/10.1145/2309996.2310013","url":null,"abstract":"This paper starts with an introductory essay stating the issues and discussing the notion of metafiction. Then it continues in an online hypertext narrative demonstration of the interweaving of story and meta-story. The hypertext attempts to show in action how seemingly unified narratives and narrative voices are surrounded and influenced by other voices and meta-stories. No narrative is un-mediated and no narrative voice is alone. The hypertext concludes with some musings on the complexities of narrative reading and writing, also with counterpoint voices. Throughout, the text comments on issues about the reading and writing of hypertext narratives.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"1 1","pages":"99-102"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91100614","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Breadcrumbs is a folksonomy of news clips in which users can aggregate fragments of text taken from online news. Besides its textual content, each news clip has a set of metadata fields associated with it; user-defined tags are among the most important of these fields. Based on a small data set of news clips, we build a network of tag co-occurrence in news clips and use it to improve text clustering. We do this by defining a weighted cosine similarity proximity measure that takes into account both the clip vectors and the tag vectors. The tag weight is computed using the related tags present in the discovered community. We then use the resulting vectors together with the new distance metric, which allows us to identify socially biased document clusters. Our study indicates that using the structural features of the tag network has a positive impact on the clustering process.
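As a rough illustration of the kind of proximity measure described above, the sketch below blends a cosine similarity over clip term vectors with one over tag vectors. The helper name `weighted_cosine` and the fixed blending weight `alpha` are assumptions for this example; in the paper, the tag weights are derived from the overlapping community structure of the tag network rather than from a plain tag cosine.

```python
import numpy as np

def weighted_cosine(clip_a, clip_b, tags_a, tags_b, alpha=0.7):
    """Hypothetical tag-weighted similarity between two news clips.

    clip_a, clip_b : term-frequency vectors over the same vocabulary (numpy arrays)
    tags_a, tags_b : tag-membership vectors over the same tag vocabulary
    alpha          : weight given to textual content versus tag overlap
    """
    def cosine(u, v):
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        return float(u @ v / denom) if denom else 0.0

    # Blend content similarity with tag similarity; the community-based tag
    # weighting of the paper is only approximated here by a plain tag cosine.
    return alpha * cosine(clip_a, clip_b) + (1 - alpha) * cosine(tags_a, tags_b)
```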
{"title":"Using the overlapping community structure of a network of tags to improve text clustering","authors":"Nuno Cravino, José Luís Devezas, Á. Figueira","doi":"10.1145/2309996.2310036","DOIUrl":"https://doi.org/10.1145/2309996.2310036","url":null,"abstract":"Breadcrumbs is a folksonomy of news clips, where users can aggregate fragments of text taken from online news. Besides the textual content, each news clip contains a set of metadata fields associated with it. User-defined tags are one of the most important of those information fields. Based on a small data set of news clips, we build a network of co-occurrence of tags in news clips, and use it to improve text clustering. We do this by defining a weighted cosine similarity proximity measure that takes into account both the clip vectors and the tag vectors. The tag weight is computed using the related tags that are present in the discovered community. We then use the resulting vectors together with the new distance metric, which allows us to identify socially biased document clusters. Our study indicates that using the structural features of the network of tags leads to a positive impact in the clustering process.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"27 2 1","pages":"239-244"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79360086","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We propose an efficient approach for detecting communities that share common interests on Twitter, based on linkages among the followers of celebrities representing an interest category. This approach differs from existing ones that detect all communities before determining their interests, a computationally intensive process given the large scale of online social networks. In addition, we study the characteristics of these communities and the effects of deepening or specialization of interest.
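A minimal sketch of the general idea, assuming a networkx follow graph: restrict attention to the followers of a single celebrity and look for communities only within that subgraph, so the whole network never has to be clustered. The function name and the use of connected components as a stand-in for a proper community-detection step are assumptions, not the authors' method.

```python
import networkx as nx

def interest_communities(follow_graph, celebrity, min_size=3):
    """Sketch: find communities of common interest among one celebrity's followers.

    follow_graph : directed "who follows whom" graph (networkx DiGraph)
    celebrity    : node representing a celebrity for an interest category
    """
    followers = set(follow_graph.predecessors(celebrity))
    # Only the links among the followers themselves are considered.
    subgraph = follow_graph.subgraph(followers).to_undirected()
    # Stand-in for a real community-detection algorithm on the follower subgraph.
    communities = nx.connected_components(subgraph)
    return [c for c in communities if len(c) >= min_size]
```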
{"title":"Following the follower: detecting communities with common interests on twitter","authors":"Kwan Hui Lim, A. Datta","doi":"10.1145/2309996.2310052","DOIUrl":"https://doi.org/10.1145/2309996.2310052","url":null,"abstract":"We propose an efficient approach for detecting communities that share common interests on Twitter, based on linkages among followers of celebrities representing an interest category. This approach differs from existing ones that detects all communities before determining the interest of these communities, a computationally intensive process given the large scale of online social networks. In addition, we also study the characteristics of these communities and the effects of deepening or specialization of interest.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"2 1","pages":"317-318"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87994531","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The world around us is interconnected in giant networks, and we navigate and find paths through such networks daily. For example, we browse the Web [2], search for connections among friends in social networks, follow leads in citation networks of scientific literature [6, 3], and look up things in cross-referenced dictionaries and encyclopedias. Even though navigating networks is an essential part of our everyday lives, little is known about the mechanisms humans use to navigate networks or about the properties of networks that allow for efficient navigation.

We conduct two large-scale studies of human navigation in networks. First, we present a study of an instance of Milgram's small-world experiment, where the task is to navigate from a given source to a given target node using only local network information [5]. We perform a computational analysis of a planetary-scale social network of 240 million people and 1.3 billion edges and investigate the importance of geographic cues for navigating the network. Second, we discuss a large-scale study of human wayfinding in which, given the network of links between Wikipedia concepts, people play a game of finding a short path from a given start concept to a given target concept by following hyperlinks (Figure 1) [7]. We study more than 30,000 goal-directed human search paths through the Wikipedia network and identify the strategies people use when navigating information spaces.

Even though the domains of social and information networks are very different, we find many commonalities in how the two networks are navigated. Humans tend to be good at finding short paths, despite the fact that the networks are very large [8]. Human paths differ from shortest paths in characteristic ways: in the early stages of a search, navigating to a high-degree hub node helps, while in the later stages, content features and geography provide the most important cues. We also observe a trade-off between simplicity and efficiency: conceptually simple solutions are more common but tend to be less efficient than more complex ones [9].

One potential reason for good human performance could be that humans possess vast amounts of background knowledge about the network, which they leverage to make good guesses about possible paths. So we ask: are human-like high-level reasoning skills really necessary for finding short paths? To answer this question, we design a number of navigation agents without such skills, which use only simple numerical features [8]. We evaluate the agents on the task of navigating both networks and observe that they find shorter paths than humans on average. We therefore conclude that, perhaps surprisingly, no sophisticated background knowledge or high-level reasoning is required for navigating a complex network.
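For illustration only, here is a minimal sketch of one feature-based agent of the kind alluded to above: a greedy walker that always moves to the highest-degree unvisited neighbour until the target is in sight. The function name, step limit, and stopping rule are assumptions, not the authors' exact agent design.

```python
def greedy_degree_agent(graph, source, target, max_steps=50):
    """Navigate using only a local numerical feature (node degree).

    graph : any graph object exposing neighbors() and degree(), e.g. a networkx Graph.
    Returns the path found, or None if the agent gives up.
    """
    path, current, visited = [source], source, {source}
    for _ in range(max_steps):
        if current == target:
            return path
        neighbours = list(graph.neighbors(current))
        if target in neighbours:
            path.append(target)
            return path
        candidates = [n for n in neighbours if n not in visited]
        if not candidates:
            return None  # dead end under this simple strategy
        # Greedy step: hop to the unvisited neighbour with the highest degree.
        current = max(candidates, key=graph.degree)
        visited.add(current)
        path.append(current)
    return None
```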
{"title":"Human navigation in networks","authors":"J. Leskovec","doi":"10.1145/2309996.2310020","DOIUrl":"https://doi.org/10.1145/2309996.2310020","url":null,"abstract":"World around us interconnected in giant networks and we are daily navigating and finding paths through such networks. For example, we browse the Web [2], search for connections among friends in social networks, follow leads in citation networks[6, 3], of scientific literature, and look up things in cross-referenced dictionaries and encyclopedias. Even though navigating networks is an essential part of our everyday lives, little is known about the mechanisms humans use to navigate networks as well as the properties of networks that allow for efficient navigation.\u0000 We conduct two large scale studies of human navigation in networks. First, we present a study an instance of Milgram's small-world experiment where the task is to navigate from a given source to a given target node using only the local network information [5]. We perform a computational analysis of a planetary-scale social network of 240 million people and 1.3 billion edges and investigate the importance of geographic cues for navigating the network. Second, we also discuss a large-scale study of human wayfinding, in which, given a network of links between the concepts of Wikipedia, people play a game of finding a short path from a given start to a given target concept by following hyperlinks (Figure 1) [7]. We study more than 30,000 goal-directed human search paths through Wikipedia network and identify strategies people use when navigating information spaces.\u0000 Even though the domains of social and information networks are very different, we find many commonalities in navigation of the two networks. Humans tend to be good at finding short paths, despite the fact that the networks are very large [8]. Human paths differ from shortest paths in characteristic ways. At the early stages of the search navigating to a high-degree hub node helps, while in the later stage, content features and geography provide the most important clues. We also observe a trade-off between simplicity and efficiency: conceptually simple solutions are more common but tend to be less efficient than more complex ones [9].\u0000 One potential reason for good human performance could be that humans possess vast amounts of background knowledge about the network, which they leverage to make good guesses about possible paths. So we ask the question: Are human-like high-level reasoning skills really necessary for finding short paths? To answer this question, we design a number of navigation agents without such skills, which use only simple numerical features [8]. We evaluate the agents on the task of navigating both networks. We observe that the agents find shorter paths than humans on average and therefore conclude that, perhaps surprisingly, no sophisticated background knowledge or high-level reasoning is required for navigating a complex network.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. 
ACM Conference on Hypertext and Social Media","volume":"9 1","pages":"143-144"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75204415","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A key advantage of Adaptive Hypermedia Systems (AHS) is their ability to re-sequence and reintegrate content to satisfy particular user needs. However, this can require large volumes of content with appropriate granularities and suitable metadata descriptions, which represents a major impediment to the mainstream adoption of Adaptive Hypermedia. Open Adaptive Hypermedia systems have addressed this challenge by leveraging open corpus content available on the World Wide Web. However, the full reuse potential of such content has yet to be realized. Open corpus content today is still mainly available as one-size-fits-all, document-level information objects. Automatically customizing and right-fitting open corpus content to improve its amenability to reuse would enable AHS to utilise these resources more effectively.

This paper presents a novel architecture and service called Slicepedia, which processes open corpus resources for reuse within AHS. The aim of this service is to improve the reuse of open corpus content by right-fitting it to the specific content requirements of individual systems. Complementary techniques from Information Retrieval, Content Fragmentation, Information Extraction and the Semantic Web are leveraged to convert the original resources into information objects called slices. The service has been applied in an authentic language e-learning scenario to validate the quality of the slicing and reuse, and a user trial involving language learners was conducted. The evidence clearly shows that this approach improves the reuse of open corpus content in AHS, with minimal decrease in the quality of the original harvested content.
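Purely as an illustration of what a "slice" might look like, the sketch below models a right-fitted fragment with reuse metadata and a naive fragmentation step. The `Slice` fields, the `slice_document` helper, and the `requirements` keys are invented for this example and are not Slicepedia's actual schema or pipeline.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Slice:
    """Illustrative stand-in for a slice: a right-fitted fragment of an open-web
    resource plus the metadata an adaptive hypermedia system would need to reuse it."""
    source_url: str
    text: str
    topics: List[str] = field(default_factory=list)   # e.g. extracted entities
    granularity: str = "paragraph"                     # requested content granularity
    language_level: Optional[str] = None               # e.g. learner level for e-learning reuse

def slice_document(url: str, html: str, requirements: dict) -> List[Slice]:
    """Hypothetical pipeline step: fragment a harvested page and keep only the
    fragments that satisfy a consuming system's content requirements."""
    fragments = [p.strip() for p in html.split("</p>") if p.strip()]  # naive fragmentation
    min_words = requirements.get("min_words", 20)
    return [Slice(source_url=url, text=frag,
                  granularity=requirements.get("granularity", "paragraph"))
            for frag in fragments if len(frag.split()) >= min_words]
```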
{"title":"Slicepedia: providing customized reuse of open-web resources for adaptive hypermedia","authors":"Killian Levacher, S. Lawless, V. Wade","doi":"10.1145/2309996.2310002","DOIUrl":"https://doi.org/10.1145/2309996.2310002","url":null,"abstract":"A key advantage of Adaptive Hypermedia Systems (AHS) is their ability to re-sequence and reintegrate content to satisfy particular user needs. However, this can require large volumes of content, with appropriate granularities and suitable meta-data descriptions. This represents a major impediment to the mainstream adoption of Adaptive Hypermedia. Open Adaptive Hypermedia systems have addressed this challenge by leveraging open corpus content available on the World Wide Web. However, the full reuse potential of such content is yet to be leveraged. Open corpus content is today still mainly available as only one-size-fits-all document-level information objects. Automatically customizing and right-fitting open corpus content with the aim of improving its amenability to reuse would enable AHS to more effectively utilise these resources.\u0000 This paper presents a novel architecture and service called Slicepedia, which processes open corpus resources for reuse within AHS. The aim of this service is to improve the reuse of open corpus content by right-fitting it to the specific content requirements of individual systems. Complementary techniques from Information Retrieval, Content Fragmentation, Information Extraction and Semantic Web are leveraged to convert the original resources into information objects called slices. The service has been applied in an authentic language elearning scenario to validate the quality of the slicing and reuse. A user trial, involving language learners, was also conducted. The evidence clearly shows that the reuse of open corpus content in AHS is improved by this approach, with minimal decrease in the quality of the original content harvested.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"33 1","pages":"23-32"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76269476","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents an anatomy of Hypertext 2011, focusing on the dynamic and static behavior of the participants. We consider data collected by the CONFERATOR system at the conference and provide statistics concerning participants, presenters, session chairs, different communities, and their corresponding roles. Additionally, we perform an in-depth analysis of these actors' communication and track-visiting behavior during the conference.
{"title":"Anatomy of a conference","authors":"Bjoern Elmar Macek, Christoph Scholz, M. Atzmüller, Gerd Stumme","doi":"10.1145/2309996.2310038","DOIUrl":"https://doi.org/10.1145/2309996.2310038","url":null,"abstract":"This paper presents an anatomy of Hypertext 2011 -- focusing on the dynamic and static behavior of the participants. We consider data collected by the CONFERATOR system at the conference, and provide statistics concerning participants, presenters, session chairs, different communities, and according roles. Additionally, we perform an in-depth analysis of these actors during the conference concerning their communication and track visiting behavior.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"1989 1","pages":"245-254"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82304920","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The ACM Hypertext conference has a rich history of challenging the node-link hegemony of the web. At Hypertext 2011, Pisarski [12] suggested that refocusing on nodes in hypertext might unlock a new poetics, and at Hypertext 2001, Bernstein [3] lamented the lack of strange hypertexts: playful tools that experiment with hypertext structure and form. As part of the emerging Strange Hypertexts community project we have been exploring a number of exotic hypertext tools, and in this paper we set out an early experiment with media and creative writing undergraduates to see what effect one particular form -- Fractal Narratives, a hypertext where readers drill down into text in a recurring pattern -- would have on their writing. In this particular trial, we found that most students did not engage with the structure from a storytelling point of view, although they did find value in it from a planning point of view. Participants conceptually saw the value of non-linear storytelling, but few actually exploited the fractal structure to achieve it. Participant feedback leads us to conclude that while new poetics do emerge from strange hypertexts, this should be viewed as an ongoing process that can be reinforced and encouraged by designing tools that highlight and support those emerging poetics in a series of feedback loops, and by providing writing contexts where they can be highlighted and collaboratively explored.
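To make the "drill down" form concrete, here is a speculative sketch of a fractal narrative as a recursive structure in which every passage can expand into finer-grained retellings. This models the form only; the type and function names are assumptions and this is not the authors' tool.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FractalNode:
    """Speculative model of a fractal narrative passage: each passage may be
    expanded into child passages that retell it in more detail."""
    text: str
    expansions: List["FractalNode"] = field(default_factory=list)

def read(node: FractalNode, depth: int = 0, max_depth: int = 2) -> None:
    """Depth-first 'drill down' reading of the narrative up to max_depth."""
    print("  " * depth + node.text)
    if depth < max_depth:
        for child in node.expansions:
            read(child, depth + 1, max_depth)
```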
{"title":"Exploring (the poetics of) strange (and fractal) hypertexts","authors":"C. Hargood, Rosamund Davies, D. Millard, Matt R. Taylor, Samuel Brooker","doi":"10.1145/2309996.2310027","DOIUrl":"https://doi.org/10.1145/2309996.2310027","url":null,"abstract":"The ACM Hypertext conference has a rich history of challenging the node-link hegemony of the web. At Hypertext 2011 Pisarski [12] suggested that to refocus on nodes in hypertext might unlock a new poetics, and at Hypertext 2001 Bernstein [3] lamented the lack of strange hypertexts: playful tools that experiment with hypertext structure and form. As part of the emerging Strange Hypertexts community project we have been exploring a number of exotic hypertext tools, and in this paper we set out an early experiment with media and creative writing undergraduates to see what effect one particular form -- Fractal Narratives, a hypertext where readers drill down into text in a reoccurring pattern -- would have on their writing. In this particular trial, we found that most students did not engage in the structure from a storytelling point of view, although they did find value from a planning point of view. Participants conceptually saw the value in non-linear storytelling but few exploited the fractal structure to actually do this. Participant feedback leads us to conclude that while new poetics do emerge from strange hypertexts, this should be viewed as an ongoing process that can be reinforced and encouraged by designing tools that highlight and support those emerging poetics in a series of feedback loops, and by providing writing contexts where they can be highlighted and collaboratively explored.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"55 1","pages":"181-186"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85004088","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
As an important channel for fast information propagation, Online Social Networks (OSNs) also have their defects. One of them is information leakage: information can spread via OSNs to users with whom we are not willing to share it. The problem of constructing a circle of trust, so as to share information with as many friends as possible without it spreading further to unwanted targets, has therefore become a challenging research topic but remains open.

Our work is the first attempt to study the Maximum Circle of Trust problem, which seeks to share the information with the maximum expected number of the poster's friends while keeping the spread of information to the unwanted targets to a minimum. First, we consider a special and more practical case with two-hop information propagation and a single unwanted target. In this case, we show that the problem is NP-hard, which rules out an exact polynomial-time algorithm. We thus propose a Fully Polynomial-Time Approximation Scheme (FPTAS), which can achieve any allowable performance error bound while running in time polynomial in both the input size and the inverse of the allowed error; an FPTAS is the best approximation guarantee one can hope for on an NP-hard problem. We next consider the case in which the number of unwanted targets is bounded and prove that no FPTAS exists in this case. Instead, we design a Polynomial-Time Approximation Scheme (PTAS) in which the allowable error can also be controlled. Finally, we consider the general case of multi-hop information propagation, show its #P-hardness, and propose an effective Iterative Circle of Trust Detection (ICTD) algorithm based on a novel greedy function. Extensive experiments on various real-world OSNs validate the effectiveness of our proposed approximation and ICTD algorithms.
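To illustrate the problem setup only (this is neither the paper's FPTAS nor its ICTD algorithm), the sketch below greedily grows a circle of trust in the two-hop, single-target case, assuming each friend comes with a hypothetical probability of forwarding the post to the unwanted target; the function name and the threshold-based rule are assumptions.

```python
def greedy_circle_of_trust(friends, forward_prob, leak_limit):
    """Greedy illustration of the two-hop, single-unwanted-target setting.

    friends      : iterable of friend ids
    forward_prob : dict mapping friend -> probability that this friend forwards
                   the post to the unwanted target (hypothetical input)
    leak_limit   : maximum acceptable probability that the post leaks
    """
    circle, p_no_leak = [], 1.0
    # Add friends in order of increasing leakage risk while the overall
    # probability of at least one leak stays within the limit.
    for friend in sorted(friends, key=lambda f: forward_prob.get(f, 0.0)):
        p_no_leak_next = p_no_leak * (1.0 - forward_prob.get(friend, 0.0))
        if 1.0 - p_no_leak_next <= leak_limit:
            circle.append(friend)
            p_no_leak = p_no_leak_next
    return circle
```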
{"title":"Maximizing circle of trust in online social networks","authors":"Yilin Shen, Yu-Song Syu, Dung T. Nguyen, M. Thai","doi":"10.1145/2309996.2310023","DOIUrl":"https://doi.org/10.1145/2309996.2310023","url":null,"abstract":"As an imperative channel for fast information propagation, Online Social Networks(OSNs) also have their defects. One of them is the information leakage, i.e., information could be spread via OSNs to the users whom we are not willing to share with. Thus the problem of constructing a circle of trust to share information with as many friends as possible without further spreading it to unwanted targets has become a challenging research topic but still remained open.\u0000 Our work is the first attempt to study the Maximum Circle of Trust problem seeking to share the information with the maximum expected number of poster's friends such that the information spread to the unwanted targets is brought to its knees. First, we consider a special and more practical case with the two-hop information propagation and a single unwanted target. In this case, we show that this problem is NP-hard, which denies the existence of an exact polynomial-time algorithm. We thus propose a Fully Polynomial-Time Approximation Scheme (FPTAS), which can not only adjust any allowable performance error bound but also run in polynomial time with both the input size and allowed error. FPTAS is the best approximation solution one can ever wish for an NP-hard problem. We next consider the number of unwanted targets is bounded and prove that there does not exist an FPTAS in this case. Instead, we design a Polynomial-Time Approximation Scheme (PTAS) in which the allowable error can also be controlled. Finally, we consider a general case with many hops information propagation and further show its #P-hardness and propose an effective Iterative Circle of Trust Detection (ICTD) algorithm based on a novel greedy function. An extensive experiment on various real-word OSNs has validated the effectiveness of our proposed approximation and ICTD algorithms.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"30 1 1","pages":"155-164"},"PeriodicalIF":0.0,"publicationDate":"2012-06-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82964705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We deal with shrinking the stream of tweets for scheduled events in real time, in two steps: (i) sub-event detection, which determines whether something new has occurred, and (ii) tweet selection, which picks a tweet to describe each sub-event. By comparing summaries in three languages to live reports by journalists, we show that simple text analysis methods that involve no external knowledge lead to summaries covering 84% of the sub-events on average, and 100% of key types of sub-events (such as goals in soccer).
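As a rough sketch of the two-step pipeline, assuming tweets are already grouped into fixed time windows: flag a sub-event when tweet volume spikes relative to the previous window, then describe it with the tweet whose terms overlap most with the window's vocabulary. The spike threshold and the term weighting are illustrative, not the methods evaluated in the paper.

```python
from collections import Counter

def summarize_stream(timeline, spike_ratio=2.0):
    """timeline: list of time windows, each a list of tweet strings.
    Returns one representative tweet per detected sub-event."""
    summary, prev_volume = [], None
    for window_tweets in timeline:
        volume = len(window_tweets)
        # (i) Sub-event detection: a clear jump in volume over the previous window.
        if prev_volume and volume >= spike_ratio * prev_volume:
            vocab = Counter(w.lower() for t in window_tweets for w in t.split())
            # (ii) Tweet selection: pick the tweet most representative of the window's terms.
            best = max(window_tweets,
                       key=lambda t: sum(vocab[w.lower()] for w in set(t.split())))
            summary.append(best)
        prev_volume = volume
    return summary
```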
{"title":"Towards real-time summarization of scheduled events from twitter streams","authors":"A. Zubiaga, Damiano Spina, Enrique Amigó, Julio Gonzalo","doi":"10.1145/2309996.2310053","DOIUrl":"https://doi.org/10.1145/2309996.2310053","url":null,"abstract":"We deal with shrinking the stream of tweets for scheduled events in real-time, following two steps: (i) sub-event detection, which determines if something new has occurred, and (ii) tweet selection, which picks a tweet to describe each sub-event. By comparing summaries in three languages to live reports by journalists, we show that simple text analysis methods which do not involve external knowledge lead to summaries that cover 84% of the sub-events on average, and 100% of key types of sub-events (such as goals in soccer).","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"30 1","pages":"319-320"},"PeriodicalIF":0.0,"publicationDate":"2012-04-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76295519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this study we measure the impact of pre-existing social capital on the efficiency of collaboration among Wikipedia editors. To construct a social network among Wikipedians, we look at mutual interaction on the user talk pages of Wikipedia editors. As our data set, we analyze the communication networks associated with 3085 featured articles, the articles of highest quality in the English Wikipedia, comparing them to the networks of 80,154 articles of lower quality. As the metric for the quality of collaboration, we measure the time to quality promotion, from when an article is started until it is promoted to featured article. The study finds that the higher the pre-existing social capital of the editors working on an article, the faster the article reaches higher quality status, such as featured-article status. The more cohesive and more centralized the collaboration network, and the more network members were already collaborating before starting to work together on an article, the faster the article will be promoted to featured status.
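A minimal sketch of two of the structural measures mentioned above, computed with networkx over the talk-page interaction subgraph of one article's editors. The function name, the use of graph density for cohesion, and Freeman degree centralization are assumptions about how such features could be operationalized, not the paper's exact definitions.

```python
import networkx as nx

def social_capital_features(talk_graph, editors):
    """talk_graph: undirected graph of talk-page interactions among Wikipedians.
    editors: the set of editors who worked on one article."""
    g = talk_graph.subgraph(editors)
    n = g.number_of_nodes()
    cohesion = nx.density(g)  # share of possible editor pairs that actually interacted
    if n > 2:
        degrees = [d for _, d in g.degree()]
        max_d = max(degrees)
        # Freeman degree centralization: how strongly interaction focuses on one editor.
        centralization = sum(max_d - d for d in degrees) / ((n - 1) * (n - 2))
    else:
        centralization = 0.0
    return {"cohesion": cohesion, "centralization": centralization}
```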
{"title":"Social capital increases efficiency of collaboration among Wikipedia editors","authors":"Keiichi Nemoto, P. Gloor, Robert J. Laubacher","doi":"10.1145/1995966.1995997","DOIUrl":"https://doi.org/10.1145/1995966.1995997","url":null,"abstract":"In this study we measure the impact of pre-existing social capital on the efficiency of collaboration among Wikipedia editors. To construct a social network among Wikipedians we look to mutual interaction on the user talk pages of Wikipedia editors. As our data set, we analyze the communication networks associated with 3085 featured articles - the articles of highest quality in the English Wikipedia, comparing it to the networks of 80154 articles of lower quality. As the metric to assess the quality of collaboration, we measure the time of quality promotion from when an article is started until it is promoted to featured article. The study finds that the higher pre-existing social capital of editors working on an article is, the faster the articles they work on reach higher quality status, such as featured articles. The more cohesive and more centralized the collaboration network, and the more network members were already collaborating before starting to work together on an article, the faster the article they work on will be promoted or featured.","PeriodicalId":91270,"journal":{"name":"HT ... : the proceedings of the ... ACM Conference on Hypertext and Social Media. ACM Conference on Hypertext and Social Media","volume":"78 1","pages":"231-240"},"PeriodicalIF":0.0,"publicationDate":"2011-06-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80903452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}