Reg-NF: Efficient Registration of Implicit Surfaces within Neural Fields
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09722
Stephen Hausler, David Hall, Sutharsan Mahendren, Peyman Moghadam
Neural fields, coordinate-based neural networks, have recently gained popularity for implicitly representing a scene. In contrast to classical methods that are based on explicit representations such as point clouds, neural fields provide a continuous scene representation able to represent 3D geometry and appearance in a way which is compact and ideal for robotics applications. However, few prior methods have investigated registering multiple neural fields by directly utilising these continuous implicit representations. In this paper, we present Reg-NF, a neural-field-based registration method that optimises for the relative 6-DoF transformation between two arbitrary neural fields, even if those two fields have different scale factors. Key components of Reg-NF include a bidirectional registration loss, multi-view surface sampling, and utilisation of volumetric signed distance functions (SDFs). We showcase our approach on a new neural field dataset for evaluating registration problems. We provide an exhaustive set of experiments and ablation studies to identify the performance of our approach, while also discussing limitations to provide future direction to the research community on open challenges in utilising neural fields in unconstrained environments.
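To make the registration objective concrete, here is a minimal sketch of a bidirectional SDF-to-SDF loss of the kind the abstract describes, assuming each field exposes a callable signed distance function; the transform parameterisation and all names are our illustration, not the authors' implementation.

```python
# Hedged sketch: bidirectional SDF registration loss (not the authors' code).
# sdf_a / sdf_b map (N, 3) points to (N,) signed distances; pts_a / pts_b are
# surface samples (SDF ~ 0) drawn from fields A and B; (R, t, s) is a candidate
# rotation, translation, and scale taking A-frame points into B's frame.
import torch

def bidirectional_sdf_loss(sdf_a, sdf_b, pts_a, pts_b, R, t, s):
    # Forward term: A's surface, transformed into B's frame, should lie on
    # B's zero level set.
    pts_a_in_b = s * pts_a @ R.T + t
    loss_ab = sdf_b(pts_a_in_b).abs().mean()
    # Backward term: B's surface, pulled back by the inverse transform,
    # should lie on A's zero level set.
    pts_b_in_a = ((pts_b - t) @ R) / s
    loss_ba = sdf_a(pts_b_in_a).abs().mean()
    return loss_ab + loss_ba  # minimise over (R, t, s) by gradient descent
```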
{"title":"Reg-NF: Efficient Registration of Implicit Surfaces within Neural Fields","authors":"Stephen Hausler, David Hall, Sutharsan Mahendren, Peyman Moghadam","doi":"10.48550/arXiv.2402.09722","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09722","url":null,"abstract":"Neural fields, coordinate-based neural networks, have recently gained popularity for implicitly representing a scene. In contrast to classical methods that are based on explicit representations such as point clouds, neural fields provide a continuous scene representation able to represent 3D geometry and appearance in a way which is compact and ideal for robotics applications. However, limited prior methods have investigated registering multiple neural fields by directly utilising these continuous implicit representations. In this paper, we present Reg-NF, a neural fields-based registration that optimises for the relative 6-DoF transformation between two arbitrary neural fields, even if those two fields have different scale factors. Key components of Reg-NF include a bidirectional registration loss, multi-view surface sampling, and utilisation of volumetric signed distance functions (SDFs). We showcase our approach on a new neural field dataset for evaluating registration problems. We provide an exhaustive set of experiments and ablation studies to identify the performance of our approach, while also discussing limitations to provide future direction to the research community on open challenges in utilizing neural fields in unconstrained environments.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"18 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139963567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09989
Jinyuan Li, Han Li, Di Sun, Jiahao Wang, Wenkun Zhang, Zan Wang, Gang Pan
Grounded Multimodal Named Entity Recognition (GMNER) is a nascent multimodal task that aims to identify named entities, entity types, and their corresponding visual regions. The GMNER task exhibits two challenging properties: 1) the weak correlation between image-text pairs in social media results in a significant portion of named entities being ungroundable; 2) there is a distinction between the coarse-grained referring expressions commonly used in similar tasks (e.g., phrase localization, referring expression comprehension) and fine-grained named entities. In this paper, we propose RiVEG, a unified framework that reformulates GMNER into a joint MNER-VE-VG task by leveraging large language models (LLMs) as a connecting bridge. This reformulation brings two benefits: 1) it maintains optimal MNER performance and eliminates the need to employ object detection methods to pre-extract regional features, thereby naturally addressing two major limitations of existing GMNER methods; 2) the introduction of entity expansion expressions and a Visual Entailment (VE) module unifies Visual Grounding (VG) and Entity Grounding (EG). This enables RiVEG to effortlessly inherit the visual entailment and visual grounding capabilities of any current or prospective multimodal pretraining model. Extensive experiments demonstrate that RiVEG outperforms state-of-the-art methods on the existing GMNER dataset and achieves absolute leads of 10.65%, 6.21%, and 8.83% across the three subtasks.
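Read as a pipeline, the reformulation chains four stages. The following sketch is our illustration of that flow; every component interface (mner, llm, ve_model, vg_model) is a hypothetical stand-in rather than the paper's API.

```python
# Hypothetical sketch of the MNER -> expansion -> VE -> VG flow (not RiVEG's API).
def gmner_pipeline(text, image, mner, llm, ve_model, vg_model):
    results = []
    for entity, entity_type in mner(text):            # 1) named entities + types
        # 2) The LLM bridges fine-grained entities to a referring expression.
        expression = llm(f"Describe the visual appearance of '{entity}' "
                         f"given the post: {text}")
        # 3) Visual Entailment decides whether the entity is groundable at all.
        if ve_model(image, expression):
            region = vg_model(image, expression)      # 4) Visual Grounding
        else:
            region = None                             # ungroundable entity
        results.append((entity, entity_type, region))
    return results
```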
{"title":"LLMs as Bridges: Reformulating Grounded Multimodal Named Entity Recognition","authors":"Jinyuan Li, Han Li, Di Sun, Jiahao Wang, Wenkun Zhang, Zan Wang, Gang Pan","doi":"10.48550/arXiv.2402.09989","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09989","url":null,"abstract":"Grounded Multimodal Named Entity Recognition (GMNER) is a nascent multimodal task that aims to identify named entities, entity types and their corresponding visual regions. GMNER task exhibits two challenging properties: 1) The weak correlation between image-text pairs in social media results in a significant portion of named entities being ungroundable. 2) There exists a distinction between coarse-grained referring expressions commonly used in similar tasks (e.g., phrase localization, referring expression comprehension) and fine-grained named entities. In this paper, we propose RiVEG, a unified framework that reformulates GMNER into a joint MNER-VE-VG task by leveraging large language models (LLMs) as a connecting bridge. This reformulation brings two benefits: 1) It maintains the optimal MNER performance and eliminates the need for employing object detection methods to pre-extract regional features, thereby naturally addressing two major limitations of existing GMNER methods. 2) The introduction of entity expansion expression and Visual Entailment (VE) Module unifies Visual Grounding (VG) and Entity Grounding (EG). It enables RiVEG to effortlessly inherit the Visual Entailment and Visual Grounding capabilities of any current or prospective multimodal pretraining models. Extensive experiments demonstrate that RiVEG outperforms state-of-the-art methods on the existing GMNER dataset and achieves absolute leads of 10.65%, 6.21%, and 8.83% in all three subtasks.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"26 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962162","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Symmetry-Breaking Augmentations for Ad Hoc Teamwork
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09984
Ravi Hammond, Dustin Craggs, Mingyu Guo, Jakob Foerster, Ian Reid
In many collaborative settings, artificial intelligence (AI) agents must be able to adapt to new teammates that use unknown or previously unobserved strategies. While often simple for humans, this can be challenging for AI agents. For example, if an AI agent learns to drive alongside others (a training set) that only drive on one side of the road, it may struggle to adapt this experience to coordinate with drivers on the opposite side, even if their behaviours are simply flipped along the left-right symmetry. To address this, we introduce symmetry-breaking augmentations (SBA), which increase diversity in the behaviour of training teammates by applying a symmetry-flipping operation. By learning a best response to the augmented set of teammates, our agent is exposed to a wider range of behavioural conventions, improving performance when deployed with novel teammates. We demonstrate this experimentally in two settings, and show that our approach improves upon previous ad hoc teamwork results in the challenging card game Hanabi. We also propose a general metric for estimating symmetry-dependency amongst a given set of policies.
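As a concrete illustration of the augmentation, here is a sketch under the assumption that the environment's left-right symmetry is realised by known observation and action permutations; the construction is ours, not the paper's exact code.

```python
# Hedged sketch: symmetry-flipping a teammate policy (illustrative only).
import numpy as np

def flip_policy(policy, obs_perm, act_perm):
    """Return a mirrored teammate: observe in the mirror frame, act, then map
    the action back. obs_perm / act_perm are index permutations realising the
    environment's left-right symmetry (an assumption of this sketch)."""
    def mirrored(obs):
        action = policy(obs[..., obs_perm])   # query the policy in the mirror frame
        return act_perm[action]               # map its action back to our frame
    return mirrored

def augment_teammates(teammates, obs_perm, act_perm):
    # Train a best response against the originals plus their mirror images,
    # exposing the agent to both behavioural conventions.
    return teammates + [flip_policy(p, obs_perm, act_perm) for p in teammates]
```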
{"title":"Symmetry-Breaking Augmentations for Ad Hoc Teamwork","authors":"Ravi Hammond, Dustin Craggs, Mingyu Guo, Jakob Foerster, Ian Reid","doi":"10.48550/arXiv.2402.09984","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09984","url":null,"abstract":"In many collaborative settings, artificial intelligence (AI) agents must be able to adapt to new teammates that use unknown or previously unobserved strategies. While often simple for humans, this can be challenging for AI agents. For example, if an AI agent learns to drive alongside others (a training set) that only drive on one side of the road, it may struggle to adapt this experience to coordinate with drivers on the opposite side, even if their behaviours are simply flipped along the left-right symmetry. To address this we introduce symmetry-breaking augmentations (SBA), which increases diversity in the behaviour of training teammates by applying a symmetry-flipping operation. By learning a best-response to the augmented set of teammates, our agent is exposed to a wider range of behavioural conventions, improving performance when deployed with novel teammates. We demonstrate this experimentally in two settings, and show that our approach improves upon previous ad hoc teamwork results in the challenging card game Hanabi. We also propose a general metric for estimating symmetry-dependency amongst a given set of policies.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"28 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Validation of homogenized finite element models of human metastatic vertebrae using digital volume correlation
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09828
Chiara Garavelli, A. Aldieri, M. Palanca, Enrico Dall'Ara, M. Viceconti
The incidence of vertebral fragility fracture is increased by the presence of preexisting pathologies such as metastatic disease. Computational tools could support fracture prediction and, consequently, the choice of the best medical treatment. However, validation is required before these tools can be used in clinical practice. To address this need, in this study subject-specific homogenized finite element models of single vertebrae were generated from micro-CT images for both healthy and metastatic vertebrae and validated against experimental data. Specifically, spine segments were tested under compression and imaged with micro-CT. The displacement field of each vertebra was extracted individually using the full-field digital volume correlation technique. Homogenized finite element models of each vertebra were then built from the micro-CT images, applying boundary conditions consistent with the experimental displacements at the endplates. Numerical and experimental displacement and strain fields were then compared. In addition, the outcomes of a micro-CT-based homogenized model were compared to those of a clinical-CT-based model. Good agreement between experimental and computational displacement fields was found for both healthy and metastatic vertebrae. Comparison between micro-CT-based and clinical-CT-based outcomes showed strong correlations. Furthermore, the models were able to qualitatively identify the regions which experimentally showed the highest strain concentration. In conclusion, the combination of an experimental full-field technique and in silico modelling enabled the development of a promising pipeline for the validation of fracture risk predictors, although further improvements in both fields are needed to better analyse quantitatively the post-yield behaviour of the vertebra.
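For intuition on how such a validation is scored, here is a minimal sketch comparing predicted and measured displacement fields at matched points; the metrics below are standard choices we assume for illustration, not necessarily the study's exact ones.

```python
# Hedged sketch: field-level agreement between FE predictions and DVC
# measurements at the same N material points (metrics are our assumption).
import numpy as np

def field_agreement(u_fe, u_dvc):
    """u_fe, u_dvc: (N, 3) displacement vectors in mm at matched points."""
    x, y = u_dvc.ravel(), u_fe.ravel()
    slope, intercept = np.polyfit(x, y, 1)        # ideal: slope 1, intercept 0
    r2 = np.corrcoef(x, y)[0, 1] ** 2             # strength of linear agreement
    rmse = float(np.sqrt(np.mean((x - y) ** 2)))  # error magnitude in mm
    return slope, intercept, r2, rmse
```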
{"title":"Validation of homogenized finite element models of human metastatic vertebrae using digital volume correlation","authors":"Chiara Garavelli, A. Aldieri, M. Palanca, Enrico Dall'Ara, M. Viceconti","doi":"10.48550/arXiv.2402.09828","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09828","url":null,"abstract":"The incidence of vertebral fragility fracture is increased by the presence of preexisting pathologies such as metastatic disease. Computational tools could support the fracture prediction and consequently the decision of the best medical treatment. Anyway, validation is required to use these tools in clinical practice. To address this necessity, in this study subject-specific homogenized finite element models of single vertebrae were generated from micro CT images for both healthy and metastatic vertebrae and validated against experimental data. More in detail, spine segments were tested under compression and imaged with micro CT. The displacements field could be extracted for each vertebra singularly using the digital volume correlation full-field technique. Homogenized finite element models of each vertebra could hence be built from the micro CT images, applying boundary conditions consistent with the experimental displacements at the endplates. Numerical and experimental displacements and strains fields were eventually compared. In addition, the outcomes of a micro CT based homogenized model were compared to the ones of a clinical-CT based model. Good agreement between experimental and computational displacement fields, both for healthy and metastatic vertebrae, was found. Comparison between micro CT based and clinical-CT based outcomes showed strong correlations. Furthermore, models were able to qualitatively identify the regions which experimentally showed the highest strain concentration. In conclusion, the combination of experimental full-field technique and the in-silico modelling allowed the development of a promising pipeline for validation of fracture risk predictors, although further improvements in both fields are needed to better analyse quantitatively the post-yield behaviour of the vertebra.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"26 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Social Reward: Evaluating and Enhancing Generative AI through Million-User Feedback from an Online Creative Community
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09872
Arman Isajanyan, Artur Shatveryan, David Kocharyan, Zhangyang Wang, Humphrey Shi
Social reward, as a form of community recognition, provides a strong source of motivation for users of online platforms to engage and contribute content. The recent progress of text-conditioned image synthesis has ushered in a collaborative era where AI empowers users to craft original visual artworks seeking community validation. Nevertheless, assessing these models in the context of collective community preference introduces distinct challenges. Existing evaluation methods predominantly center on limited-size user studies guided by image quality and prompt alignment. This work pioneers a paradigm shift, unveiling Social Reward, an innovative reward modeling framework that leverages implicit feedback from social network users engaged in creative editing of generated images. We embark on an extensive journey of dataset curation and refinement, drawing from Picsart, an online visual creation and editing platform, to yield the first million-user-scale dataset of implicit human preferences for user-generated visual art, named Picsart Image-Social. Our analysis exposes the shortcomings of current metrics in modeling community creative preference for text-to-image models' outputs, compelling us to introduce a novel predictive model explicitly tailored to address these limitations. Rigorous quantitative experiments and a user study show that our Social Reward model aligns better with social popularity than existing metrics. Furthermore, we utilize Social Reward to fine-tune text-to-image models, yielding images that are favored not only by Social Reward, but also by other established metrics. These findings highlight the relevance and effectiveness of Social Reward in assessing community appreciation for AI-generated artworks, establishing a closer alignment with users' creative goal of creating popular visual art. Code is available at https://github.com/Picsart-AI-Research/Social-Reward
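One plausible reading of reward modeling from implicit feedback is a pairwise preference objective; the sketch below uses a standard Bradley-Terry-style loss as an assumption, since the abstract does not state the paper's exact formulation.

```python
# Hedged sketch: pairwise preference loss for a reward model (our assumption,
# not necessarily Social Reward's actual objective or architecture).
import torch
import torch.nn.functional as F

def preference_loss(reward_model, prompt, img_chosen, img_other):
    """reward_model(prompt, image) -> (B,) scalar scores. img_chosen is the
    image the community engaged with (implicit positive); img_other is not."""
    r_pos = reward_model(prompt, img_chosen)
    r_neg = reward_model(prompt, img_other)
    # Maximise the log-probability that the community-preferred image wins.
    return -F.logsigmoid(r_pos - r_neg).mean()
```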
Exploring the Potential of Large Language Models in Artistic Creation: Collaboration and Reflection on Creative Programming
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09750
Anqi Wang, Zhizhuo Yin, Yulu Hu, Yuanyuan Mao, Pan Hui
Recently, large language models (LLMs) have been widely used to assist programming. However, current research does not explore the artistic potential of LLMs in creative coding within artist-AI collaboration. Our work probes the types of reflection artists engage in during the creation process under such collaboration. We compare two common collaboration approaches: invoking the entire program at once and decomposing it into multiple subtasks. Drawing on experimental data and qualitative interviews, we find that the two methods stimulate different kinds of reflection in artists, and we show how reflection type correlates with user performance, user satisfaction, and subjective experience in the two collaboration modes. In this sense, our work reveals the artistic potential of LLMs in creative coding. Meanwhile, we provide a critical lens on human-AI collaboration from the artists' perspective and offer design suggestions for future work on AI-assisted creative tasks.
{"title":"Exploring the Potential of Large Language Models in Artistic Creation: Collaboration and Reflection on Creative Programming","authors":"Anqi Wang, Zhizhuo Yin, Yulu Hu, Yuanyuan Mao, Pan Hui","doi":"10.48550/arXiv.2402.09750","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09750","url":null,"abstract":"Recently, the potential of large language models (LLMs) has been widely used in assisting programming. However, current research does not explore the artist potential of LLMs in creative coding within artist and AI collaboration. Our work probes the reflection type of artists in the creation process with such collaboration. We compare two common collaboration approaches: invoking the entire program and multiple subtasks. Our findings exhibit artists' different stimulated reflections in two different methods. Our finding also shows the correlation of reflection type with user performance, user satisfaction, and subjective experience in two collaborations through conducting two methods, including experimental data and qualitative interviews. In this sense, our work reveals the artistic potential of LLM in creative coding. Meanwhile, we provide a critical lens of human-AI collaboration from the artists' perspective and expound design suggestions for future work of AI-assisted creative tasks.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"11 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
DE-COP: Detecting Copyrighted Content in Language Models Training Data
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09910
Andr'e V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li
How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content was included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP surpasses the prior best method by 9.6% in detection performance (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models, where prior methods give approximately 4% accuracy. Our code and datasets are available at https://github.com/avduarte333/DE-COP_Method
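The probe itself is easy to picture: present one verbatim excerpt among paraphrases and check whether the model ranks the verbatim option highest. The sketch below is our illustration; the exact prompt template and scoring in DE-COP may differ (see the authors' linked repository for the real protocol).

```python
# Hedged sketch of a DE-COP-style multiple-choice probe (illustrative prompt
# and scoring, not the authors' exact protocol).
def decop_probe(option_logprob, book_title, verbatim, paraphrases):
    """option_logprob(prompt, letter) -> model log-probability of answering
    with that letter (assumes logit access, as in the paper's main setting)."""
    options = [verbatim] + list(paraphrases)
    prompt = (f"Which passage is the exact text from '{book_title}'?\n" +
              "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options)))
    scores = [option_logprob(prompt, chr(65 + i)) for i in range(len(options))]
    # If the verbatim option (A here) scores highest, the excerpt was likely
    # seen in training. In practice the option order should be shuffled to
    # control for position bias.
    return scores.index(max(scores)) == 0
```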
{"title":"DE-COP: Detecting Copyrighted Content in Language Models Training Data","authors":"Andr'e V. Duarte, Xuandong Zhao, Arlindo L. Oliveira, Lei Li","doi":"10.48550/arXiv.2402.09910","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09910","url":null,"abstract":"How can we detect if copyrighted content was used in the training process of a language model, considering that the training data is typically undisclosed? We are motivated by the premise that a language model is likely to identify verbatim excerpts from its training text. We propose DE-COP, a method to determine whether a piece of copyrighted content was included in training. DE-COP's core approach is to probe an LLM with multiple-choice questions, whose options include both verbatim text and their paraphrases. We construct BookTection, a benchmark with excerpts from 165 books published prior and subsequent to a model's training cutoff, along with their paraphrases. Our experiments show that DE-COP surpasses the prior best method by 9.6% in detection performance (AUC) on models with logits available. Moreover, DE-COP also achieves an average accuracy of 72% for detecting suspect books on fully black-box models where prior methods give $approx$ 4% accuracy. Our code and datasets are available at https://github.com/avduarte333/DE-COP_Method","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"11 20","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962561","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Examining Pathological Bias in a Generative Adversarial Network Discriminator: A Case Study on a StyleGAN3 Model
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09786
Alvin Grissom II, Ryan F. Lei, Jeova Farias Sales Rocha Neto, Bailey Lin, Ryan Trotter
Generative adversarial networks generate photorealistic faces that humans often cannot distinguish from real faces. We find that the discriminator in the pre-trained StyleGAN3 model, a popular GAN, systematically stratifies scores by both image- and face-level qualities and that this disproportionately affects images across gender, race, and other categories. We examine the discriminator's bias for color and luminance across axes of perceived race and gender; we then examine axes common in research on stereotyping in social psychology.
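The audit pattern is straightforward to sketch: score a set of face images with the frozen discriminator, then compare score distributions across annotated groups. The grouping statistics below are our illustration of that analysis, not the study's code.

```python
# Hedged sketch: stratify frozen-discriminator scores by annotated group
# (illustrative analysis, not the study's code).
import numpy as np

def stratify_scores(scores, labels):
    """scores: (N,) discriminator outputs for N face images; labels: (N,)
    group annotations (e.g., perceived race or gender). Returns per-group
    mean and std, so systematic score gaps between groups become visible."""
    return {g: (float(scores[labels == g].mean()),
                float(scores[labels == g].std()))
            for g in np.unique(labels)}
```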
Effective and Scalable Math Support: Evidence on the Impact of an AI-Tutor on Math Achievement in Ghana
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.09809
Owen Henkel, Hannah Horne-Robinson, Nessie Kozhakhmetova, Amanda Lee
This study evaluates the impact of Rori, an AI-powered conversational math tutor accessible via WhatsApp, on the math performance of approximately 1,000 students in grades 3-9 across 11 schools in Ghana. Each school was assigned to a treatment group or a control group; students in the control group continued their regular math instruction, while students in the treatment group engaged with Rori for two 30-minute sessions per week over 8 months, in addition to regular math instruction. We find that math growth scores were substantially higher for the treatment group, with an effect size of 0.37, and that the results were statistically significant (p < 0.001). The fact that Rori works on basic mobile devices over low-bandwidth data networks gives the intervention strong potential to support personalized learning in other low- and middle-income countries (LMICs), where laptop ownership and high-speed internet, prerequisites for many video-centered learning platforms, remain extremely limited. While the results should be interpreted judiciously, as they only report on year 1 of the intervention, and future research is necessary to better understand which conditions are necessary for successful implementation, they do suggest that chat-based tutoring solutions leveraging artificial intelligence could offer a cost-effective approach to enhancing learning outcomes for millions of students globally.
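For readers unfamiliar with effect sizes: 0.37 is most naturally read as a standardised mean difference. The sketch below computes Cohen's d under that assumption; the study's exact estimator is not stated in the abstract.

```python
# Hedged sketch: Cohen's d for growth scores (assumes the reported 0.37 is a
# standardised mean difference; the study's estimator may differ).
import numpy as np

def cohens_d(treatment, control):
    """treatment, control: 1-D arrays of per-student growth scores."""
    nt, nc = len(treatment), len(control)
    pooled_var = (((nt - 1) * np.var(treatment, ddof=1) +
                   (nc - 1) * np.var(control, ddof=1)) / (nt + nc - 2))
    # Difference of group means, in units of the pooled standard deviation.
    return (np.mean(treatment) - np.mean(control)) / np.sqrt(pooled_var)
```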
{"title":"Effective and Scalable Math Support: Evidence on the Impact of an AI- Tutor on Math Achievement in Ghana","authors":"Owen Henkel, Hannah Horne-Robinson, Nessie Kozhakhmetova, Amanda Lee","doi":"10.48550/arXiv.2402.09809","DOIUrl":"https://doi.org/10.48550/arXiv.2402.09809","url":null,"abstract":"This study evaluates the impact of Rori, an AI powered conversational math tutor accessible via WhatsApp, on the math performance of approximately 1,000 students in grades 3-9 across 11 schools in Ghana. Each school was assigned to a treatment group or control group; the students in the control group continued their regular math instruction, while students in the treatment group engaged with Rori, for two 30-minute sessions per week over 8 months in addition to regular math instruction. We find that the math growth scores were substantially higher for the treatment group with an effect size of 0.37, and that the results were statistically significant (p<0.001). The fact that Rori works with basic mobile devices on low-bandwidth data networks gives the intervention strong potential to support personalized learning on other low-and-middle-income countries (LMICs), where laptop ownership and high-speed internet - prerequisite for many video-centered learning platforms - remain extremely limited. While the results should be interpreted judiciously, as they only report on year 1 of the intervention, and future research is necessary to better understand which conditions are necessary for successful implementation, they do suggest that chat-based tutoring solutions leveraging artificial intelligence could offer a costeffective approach to enhancing learning outcomes for millions of students globally.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"17 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Partial synchrony for free? New bounds for Byzantine agreement via a generic transformation across network models
Pub Date: 2024-02-15 | DOI: 10.48550/arXiv.2402.10059
P. Civit, M. A. Dzulfikar, S. Gilbert, R. Guerraoui, J. Komatovic, M. Vidigueira, I. Zablotchi
Byzantine consensus allows $n$ processes to decide on a common value, in spite of arbitrary failures. The seminal Dolev-Reischuk bound states that any deterministic solution to Byzantine consensus exchanges $\Omega(n^2)$ bits. In recent years, great advances have been made in deterministic Byzantine agreement for partially synchronous networks, with state-of-the-art cryptographic solutions achieving $O(n^2 \kappa)$ bits (where $\kappa$ is the security parameter) and nearly matching the lower bound. In contrast, for synchronous networks, optimal solutions with $O(n^2)$ bits, with no cryptography and the same failure tolerance, have been known for more than three decades. Can this gap in network models be closed? In this paper, we present Repeater, the first generic transformation of Byzantine agreement algorithms from synchrony to partial synchrony. Repeater is modular, relying on existing and novel algorithms for its sub-modules. With the right choice of modules, Repeater requires no additional cryptography, is optimally resilient ($n = 3t+1$, where $t$ is the maximum number of failures) and, for constant-size inputs, preserves the worst-case per-process bit complexity of the transformed synchronous algorithm. Leveraging Repeater, we present the first partially synchronous algorithm that (1) achieves optimal bit complexity ($O(n^2)$ bits), (2) resists a computationally unbounded adversary (no cryptography), and (3) is optimally resilient ($n = 3t+1$), thus showing that the Dolev-Reischuk bound is tight in partial synchrony. Moreover, we adapt Repeater for long inputs, introducing several new algorithms with improved complexity and weaker (or completely absent) cryptographic assumptions.
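To see the size of the gap Repeater closes, a rough worked comparison (our arithmetic, not the paper's): take $n = 100$ processes and a typical security parameter $\kappa = 256$ bits.

```latex
% Illustrative arithmetic (ours): the kappa factor dominates at scale.
n^2 \kappa = 100^2 \cdot 256 \approx 2.6 \times 10^6 \ \text{bits}
\qquad \text{vs.} \qquad
n^2 = 100^2 = 10^4 \ \text{bits}
```

At this scale, removing the cryptographic $\kappa$ factor shrinks worst-case communication by a factor of $\kappa = 256$, i.e., over two orders of magnitude.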
{"title":"Partial synchrony for free? New bounds for Byzantine agreement via a generic transformation across network models","authors":"P. Civit, M. A. Dzulfikar, S. Gilbert, R. Guerraoui, J. Komatovic, M. Vidigueira, I. Zablotchi","doi":"10.48550/arXiv.2402.10059","DOIUrl":"https://doi.org/10.48550/arXiv.2402.10059","url":null,"abstract":"Byzantine consensus allows n processes to decide on a common value, in spite of arbitrary failures. The seminal Dolev-Reischuk bound states that any deterministic solution to Byzantine consensus exchanges Omega(n^2) bits. In recent years, great advances have been made in deterministic Byzantine agreement for partially synchronous networks, with state-of-the-art cryptographic solutions achieving O(n^2 kappa) bits (where $kappa$ is the security parameter) and nearly matching the lower bound. In contrast, for synchronous networks, optimal solutions with O(n^2) bits, with no cryptography and the same failure tolerance, have been known for more than three decades. Can this gap in network models be closed? In this paper, we present Repeater, the first generic transformation of Byzantine agreement algorithms from synchrony to partial synchrony. Repeater is modular, relying on existing and novel algorithms for its sub-modules. With the right choice of modules, Repeater requires no additional cryptography, is optimally resilient (n = 3t+1, where t is the maximum number of failures) and, for constant-size inputs, preserves the worst-case per-process bit complexity of the transformed synchronous algorithm. Leveraging Repeater, we present the first partially synchronous algorithm that (1) achieves optimal bit complexity (O(n^2) bits), (2) resists a computationally unbounded adversary (no cryptography), and (3) is optimally-resilient (n = 3t+1), thus showing that the Dolev-Reischuk bound is tight in partial synchrony. Moreover, we adapt Repeater for long inputs, introducing several new algorithms with improved complexity and weaker (or completely absent) cryptographic assumptions.","PeriodicalId":8425,"journal":{"name":"ArXiv","volume":"12 24","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-02-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139962702","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}