This paper presents a novel approach to scientific discovery using the artificial intelligence (AI) environment ChatGPT, developed by OpenAI. It is the first paper generated entirely from ChatGPT outputs. We demonstrate how ChatGPT can be instructed, through a gamification environment, to define and benchmark hypothetical physical theories. In this environment, ChatGPT successfully simulates the creation of a new, improved model, called GPT4, which combines the concepts of GPT in AI (generative pretrained transformer) and GPT in physics (generalized probabilistic theory). We show that GPT4 can use its built-in mathematical and statistical capabilities to simulate and analyze physical laws and phenomena. As a demonstration of its language capabilities, GPT4 also generates a limerick about itself. Overall, our results demonstrate the promise of human-AI collaboration in scientific discovery and the importance of designing systems that effectively integrate AI's capabilities with human intelligence.
Chatbots, or bots for short, are multimodal collaborative assistants that can help people complete useful tasks. When chatbots are mentioned in connection with elections, they typically draw negative reactions driven by fears of misinformation and hacking. In this work, we instead explore how chatbots may be used to promote voter participation in vulnerable segments of society, such as senior citizens and first-time voters. In particular, we have built a system that amplifies official information while transparently personalizing it to users' unique needs (e.g., language, cognitive abilities, linguistic abilities). The unique contributions of this work are (a) a safe design in which, via the system's self-awareness (metacognition), only responses that are grounded in and traceable to an allowed source (e.g., an official question/answer pair) are given; (b) a do-not-respond strategy that supports customizable responses and deflection; and (c) a low-programming design pattern, based on the open-source Rasa platform, for quickly generating chatbots for any region. Our current prototypes use frequently-asked-question (FAQ) election information for two US states that rank low on an ease-of-voting scale, and we have performed initial evaluations using focus groups with senior citizens. Our approach can be a win-win for voters, for election agencies trying to fulfill their mandate, and for democracy at large.
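To make the grounded-or-deflect pattern concrete, the following is a minimal sketch in plain Python. It is not the authors' Rasa implementation; the FAQ store, similarity threshold, and deflection message are all illustrative assumptions.

```python
# Minimal sketch of the "grounded-or-deflect" pattern described above.
# FAQ_STORE, SIMILARITY_THRESHOLD, and deflect() are illustrative
# assumptions, not the authors' actual Rasa-based implementation.
from difflib import SequenceMatcher

# Allowed source: official question/answer pairs (toy examples here).
FAQ_STORE = {
    "how do i register to vote": "Visit your state election office website to register.",
    "when is the registration deadline": "Deadlines vary by state; check the official calendar.",
}

SIMILARITY_THRESHOLD = 0.75  # below this, the bot deflects rather than guesses

def deflect() -> str:
    # Customizable do-not-respond strategy: never improvise an answer.
    return ("I can only answer questions from official election sources. "
            "Please rephrase, or contact your election office.")

def answer(user_query: str) -> str:
    query = user_query.lower().strip("?! .")
    # Metacognition step: respond only if the query is traceable
    # to an allowed source with sufficient confidence.
    best_q, best_score = None, 0.0
    for q in FAQ_STORE:
        score = SequenceMatcher(None, query, q).ratio()
        if score > best_score:
            best_q, best_score = q, score
    if best_score >= SIMILARITY_THRESHOLD:
        return FAQ_STORE[best_q]
    return deflect()

print(answer("How do I register to vote?"))
print(answer("Who will win the election?"))  # deflected: not grounded in an allowed source
```

The key design choice is that the bot never generates free-form answers: if a query cannot be matched to an allowed source with sufficient confidence, it deflects rather than risking misinformation.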
We introduce the U.S. National Science Foundation's groundbreaking National AI Research Institutes Program. The AI institutes are interdisciplinary collaborations that continue the program's emphasis on tackling larger-scale, longer-time-horizon challenges in both foundational and use-inspired AI research, and they act as nexus points for addressing some of society's grand challenges.
The development of AI systems represents a significant investment of funds and time, and assessment is necessary to determine whether that investment has paid off. Empirical evaluation of systems in which humans and AI act interdependently to accomplish tasks must provide convincing evidence that the work system is learnable and that the technology is usable and useful. We argue that the assessment of human–AI (HAI) systems must be not only effective but also efficient: bench testing a prototype of an HAI system cannot require an extensive series of large-scale experiments with complex designs, and some of the constraints imposed in traditional laboratory research are simply not appropriate for the empirical evaluation of HAI systems. We present requirements for avoiding "unnecessary rigor," covering study design, research methods, statistical analyses, and online experimentation. These requirements should be applicable to all research intended to evaluate the effectiveness of HAI systems.
Civic engagement is increasingly becoming digital. The ubiquity of computing increases our technologically mediated interactions, and governments have instituted various digitization efforts to harness these new facets of virtual life. What remains to be seen is whether citizens' political opinions, which can inform the inception and effectiveness of public policy, are being accurately captured. Civicbase is an open-source online platform that supports the application of Quadratic Voting for Survey Research (QVSR), a novel survey method. In this paper, we explore QVSR as an effective method for eliciting policy preferences, optimal survey design for prediction, Civicbase's functionality and technology stack, and the relevance of Personal AI, an emerging domain, to modeling individual political preferences.
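To make the mechanism behind QVSR concrete: in quadratic voting, each respondent receives a fixed budget of credits, and casting v votes for or against an item costs v² credits, so expressing strong preferences is disproportionately expensive. The sketch below illustrates this cost rule with an invented budget and issue list; it is not Civicbase's actual data model or API.

```python
# Toy illustration of the quadratic voting mechanism behind QVSR.
# The budget, issues, and allocation below are illustrative assumptions,
# not Civicbase's actual data model or API.

BUDGET = 100  # credits each respondent may spend

def cost(votes: int) -> int:
    """Casting v votes (for or against) costs v**2 credits."""
    return votes ** 2

# A respondent's allocation: positive = support, negative = oppose.
allocation = {"transit funding": 6, "zoning reform": -5, "broadband access": 3}

spent = sum(cost(v) for v in allocation.values())
assert spent <= BUDGET, "allocation exceeds the credit budget"

for issue, votes in allocation.items():
    print(f"{issue}: {votes:+d} votes, {cost(votes)} credits")
print(f"total spent: {spent}/{BUDGET}")
```

Because the marginal cost of each additional vote grows linearly (the v-th vote costs 2v - 1 credits), respondents are incentivized to spread their budget in proportion to the intensity of their preferences rather than voting maximally on every issue.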