Chatbots, or bots for short, are multimodal collaborative assistants that can help people complete useful tasks. When chatbots are mentioned in connection with elections, they often draw negative reactions due to fears of misinformation and hacking. In this work, we instead explore how chatbots may be used to promote voter participation in vulnerable segments of society, such as senior citizens and first-time voters. In particular, we have built a system that amplifies official information while transparently personalizing it to users' unique needs (e.g., language, cognitive abilities, linguistic abilities). The unique aspects of this work are (a) a safe design in which, through the system's self-awareness (metacognition), only responses that are grounded in and traceable to an allowed source (e.g., an official question/answer pair) are returned, (b) a do-not-respond strategy that supports customizable responses and deflections, and (c) a low-programming design pattern based on the open-source Rasa platform that allows chatbots to be generated quickly for any region. Our current prototypes use frequently asked question (FAQ) election information for two US states that rank low on an ease-of-voting scale, and we have performed initial evaluations using focus groups with senior citizens. Our approach can be a win-win for voters, for election agencies trying to fulfill their mandate, and for democracy at large.
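As a minimal sketch of the grounded-answer gate and do-not-respond strategy described above (the FAQ entries, similarity threshold, and deflection message below are illustrative assumptions, not the paper's implementation):

```python
# Illustrative sketch: answer only when the reply is traceable to an allowed
# official source; otherwise deflect with a customizable do-not-respond message.
from difflib import SequenceMatcher

ALLOWED_FAQ = {
    "How do I register to vote?": "You can register online through your state election office website.",
    "What ID do I need to vote?": "Accepted forms of ID are listed on the official state election page.",
}

DEFLECTION = ("I can only answer questions covered by official election information. "
              "Please check with your state election office.")

def grounded_answer(user_question: str, threshold: float = 0.75) -> str:
    """Return an answer only if it matches an allowed FAQ entry closely enough."""
    best_question, best_score = None, 0.0
    for question in ALLOWED_FAQ:
        score = SequenceMatcher(None, user_question.lower(), question.lower()).ratio()
        if score > best_score:
            best_question, best_score = question, score
    # Metacognitive check: respond only when the match is grounded in an allowed source.
    if best_question is not None and best_score >= threshold:
        return ALLOWED_FAQ[best_question]
    return DEFLECTION  # do-not-respond strategy

print(grounded_answer("How do I register to vote?"))
```

In a Rasa-based deployment, the same gate would sit behind the dialogue policy, so that unmatched questions trigger the deflection rather than a generated answer.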
We introduce the U.S. National Science Foundation's groundbreaking National AI Research Institutes Program. The AI Institutes are interdisciplinary collaborations that continue the program's emphasis on tackling larger-scale, longer-time-horizon challenges in both foundational and use-inspired AI research, and they act as nexus points for addressing some of society's grand challenges.
The development of AI systems represents a significant investment of funds and time. Assessment is necessary to determine whether that investment has paid off. Empirical evaluation of systems in which humans and AI systems act interdependently to accomplish tasks must provide convincing evidence that the work system is learnable and that the technology is usable and useful. We argue that the assessment of human–AI (HAI) systems must be not only effective but also efficient. Bench testing a prototype of an HAI system cannot require an extensive series of large-scale experiments with complex designs. Some of the constraints imposed in traditional laboratory research are simply not appropriate for the empirical evaluation of HAI systems. We present requirements for avoiding "unnecessary rigor," covering study design, research methods, statistical analyses, and online experimentation. These requirements should be applicable to all research intended to evaluate the effectiveness of HAI systems.
Civic engagement is increasingly becoming digital. The ubiquity of computing increases our technologically mediated interactions, and governments have launched various digitization efforts to harness these new facets of virtual life. What remains to be seen is whether citizen political opinion, which can inform the inception and effectiveness of public policy, is being accurately captured. Civicbase is an open-source online platform that supports the application of Quadratic Voting Survey for Research (QVSR), a novel survey method. In this paper, we explore QVSR as an effective method for eliciting policy preferences, discuss optimal survey design for prediction, describe Civicbase's functionality and technology stack, and examine Personal AI, an emerging domain, and its relevance to modeling individual political preferences.
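As a brief illustration of the quadratic voting mechanism underlying QVSR (the credit budget and issue names below are hypothetical and not taken from Civicbase): a respondent spends credits from a fixed budget, and casting v votes for or against an issue costs v² credits, which makes expressing strong preferences progressively more expensive.

```python
# Illustrative sketch of the quadratic-cost rule behind QVSR; budget and issues are hypothetical.
def quadratic_cost(votes: int) -> int:
    """Casting v votes (positive or negative) on an issue costs v**2 credits."""
    return votes * votes

def within_budget(allocation: dict[str, int], budget: int = 100) -> bool:
    """Check that a respondent's signed vote allocation fits the credit budget."""
    return sum(quadratic_cost(v) for v in allocation.values()) <= budget

allocation = {"healthcare": 6, "education": -5, "infrastructure": 3}  # signed votes per issue
print(within_budget(allocation))  # 36 + 25 + 9 = 70 credits, within a 100-credit budget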
Our paper aims to analyze political polarization in the US political system using language models, and thereby help candidates make an informed decision. The availability of this information will help voters understand their candidates' views on the economy, healthcare, education, and other social issues. Our main contributions are a dataset extracted from Wikipedia that spans the past 120 years and a language model-based method that helps analyze how polarized a candidate is. Our data are divided into two parts, background information and political information about a candidate, since our hypothesis is that the political views of a candidate should be based on reason and be independent of factors such as birthplace, alma mater, and so forth. We further split the data into four chronological phases to help understand if and how polarization amongst candidates changes over time. The data have been cleaned to remove biases. To understand polarization, we begin by showing results from classical language models, Word2Vec and Doc2Vec. We then use more powerful techniques, such as Longformer, a transformer-based encoder, to incorporate more information and find the nearest neighbors of each candidate based on their political views and their background. The code and data for the project will be available at https://github.com/samirangode/Understanding_Polarization
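As a rough sketch of the nearest-neighbor step described above (the candidate snippets are invented stand-ins for the Wikipedia dataset, and the mean-pooling choice is an assumption rather than necessarily the paper's method), one could embed each candidate's political text with a pretrained Longformer and compare candidates by cosine similarity:

```python
# Illustrative sketch (not the authors' released code): embed candidate texts with
# Longformer and find each candidate's nearest neighbor by cosine similarity.
import torch
from transformers import LongformerTokenizer, LongformerModel
from sklearn.neighbors import NearestNeighbors

tokenizer = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerModel.from_pretrained("allenai/longformer-base-4096")
model.eval()

candidate_texts = {  # hypothetical snippets standing in for the Wikipedia dataset
    "Candidate A": "Supports expanding public healthcare and raising the minimum wage.",
    "Candidate B": "Advocates lower taxes, deregulation, and smaller government.",
    "Candidate C": "Backs universal healthcare and increased education funding.",
}

def embed(text: str) -> torch.Tensor:
    """Mean-pool Longformer token embeddings into a single document vector."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state.mean(dim=1).squeeze(0)

names = list(candidate_texts)
vectors = torch.stack([embed(candidate_texts[n]) for n in names]).numpy()

nn = NearestNeighbors(n_neighbors=2, metric="cosine").fit(vectors)
_, idx = nn.kneighbors(vectors)
for i, name in enumerate(names):
    print(name, "->", names[idx[i][1]])  # closest other candidate by political text
```

The same pipeline can be run separately on the background and political portions of the data to test whether a candidate's neighbors differ across the two views.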