We present a machine learning system for forecasting forced displacement populations deployed at the Danish Refugee Council (DRC). The system, named Foresight, supports long-term forecasts aimed at humanitarian response planning. It is explainable, providing evidence and context supporting the forecast. Additionally, it supports scenarios, whereby analysts are able to generate forecasts under alternative conditions. The system has been in deployment since early 2020 and powers several downstream business functions within DRC. It is central to our annual Global Displacement Report, which informs our response planning. We describe the system, key outcomes, and lessons learnt, along with the technical limitations and challenges of deploying machine learning systems in the humanitarian sector.
The global automobile market experiences rapid changes in design preferences. In response to these demand shifts, manufacturers now apply new technologies to bring novel designs to market faster. In this paper, we introduce a novel AI application that performs similarity verification of wheel designs, aimed at solving this real-world problem. Through a deep metric learning approach, we empirically show that the cross-entropy loss performs a role similar to that of pairwise losses in the embedding space. In January 2022, we successfully transitioned the verification system into the wheel design process of Hyundai Motor Company's design team and shortened the verification time by 90%, to a maximum of 10 minutes. With a few clicks, designers at Hyundai Motor can take advantage of our verification system.
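A minimal sketch of the verification idea described in this abstract, under our own assumptions rather than the authors' code: a generic ResNet backbone and a hypothetical dataset of wheel images labeled by design ID, with the embedding trained using plain cross-entropy over design classes and then reused for pairwise verification via cosine similarity.

```python
# Sketch only (not the authors' implementation): classification-trained embedding
# reused for pairwise similarity verification. Backbone, embedding size, number of
# design classes, and the similarity threshold are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision


class EmbeddingClassifier(nn.Module):
    def __init__(self, num_designs: int, embed_dim: int = 128):
        super().__init__()
        backbone = torchvision.models.resnet18(weights=None)
        backbone.fc = nn.Linear(backbone.fc.in_features, embed_dim)
        self.backbone = backbone
        self.classifier = nn.Linear(embed_dim, num_designs)

    def forward(self, x):
        z = F.normalize(self.backbone(x), dim=1)   # L2-normalized embedding
        return z, self.classifier(z)               # embedding + class logits


model = EmbeddingClassifier(num_designs=1000)
criterion = nn.CrossEntropyLoss()  # training loop omitted; only cross-entropy is used


def verify(model, img_a, img_b, threshold: float = 0.8):
    """Pairwise verification at inference time: cosine similarity of embeddings."""
    model.eval()
    with torch.no_grad():
        za, _ = model(img_a.unsqueeze(0))
        zb, _ = model(img_b.unsqueeze(0))
    similarity = F.cosine_similarity(za, zb).item()
    return similarity >= threshold, similarity
```

In this framing, cross-entropy training over design identities shapes the embedding space in much the same way a pairwise loss would, which is the empirical point the abstract makes.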
In this paper, we propose a robust election simulation model and an independently developed election anomaly detection algorithm that demonstrates the simulation's utility. The simulation generates artificial elections with properties and trends similar to those of real-world elections, while giving users control over, and knowledge of, all the important components of the elections. We generate a clean election results dataset without fraud as well as datasets with varying degrees of fraud. We then measure how well the algorithm detects the level of fraud present. The algorithm determines how similar actual election results are to the results predicted from polling and from a regression model built on other regions with similar demographics. We use k-means to partition electoral regions into clusters such that demographic homogeneity is maximized within clusters. We then use a novelty detection algorithm, implemented as a one-class support vector machine, where the clean data is provided in the form of polling predictions and regression predictions. The regression predictions are built from the actual data in such a way that the data supervises itself. We show both the effectiveness of the simulation technique and the success of the machine learning model in identifying fraudulent regions.
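The following is an illustration of the pipeline sketched in this abstract, not the authors' implementation: the demographic features, polling predictions, and reported results are synthetic placeholders, and the cluster count and SVM parameters are assumptions. Regions are clustered with k-means, and a one-class SVM per cluster is fit on deviations between reported results and the two kinds of predictions.

```python
# Sketch only: k-means clustering of regions by demographics, then per-cluster
# one-class SVM novelty detection on "clean" prediction deviations.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
n_regions = 500
demographics = rng.normal(size=(n_regions, 4))           # e.g. age, income, urbanization, education
poll_pred = rng.uniform(0.3, 0.7, n_regions)             # predicted vote share from polling
reg_pred = poll_pred + rng.normal(0, 0.02, n_regions)    # regression prediction from similar regions
reported = poll_pred + rng.normal(0, 0.02, n_regions)    # reported (clean) results

clusters = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(demographics)

detectors = {}
for c in np.unique(clusters):
    idx = clusters == c
    # "Clean" training data: deviations of reported results from the two predictions.
    X_clean = np.column_stack([reported[idx] - poll_pred[idx],
                               reported[idx] - reg_pred[idx]])
    detectors[c] = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(X_clean)


def flag_region(cluster_id, reported_share, poll, reg):
    """Return True if the region's result looks anomalous relative to its cluster."""
    x = np.array([[reported_share - poll, reported_share - reg]])
    return detectors[cluster_id].predict(x)[0] == -1
```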
The ability to estimate the state of a human partner is an insufficient basis on which to build cooperative agents. Also needed is the ability to predict how people adapt their behavior in response to an agent's actions. We propose a new approach based on computational rationality, which models humans on the premise that predictions of their behavior can be derived by computing policies that are approximately optimal given human-like bounds. Computational rationality brings together reinforcement learning and cognitive modeling in pursuit of this goal, facilitating machine understanding of humans.
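One way to make "approximately optimal policies under human-like bounds" concrete is a softmax choice rule over action values, with a temperature parameter standing in for cognitive bounds. This is our illustration, not the authors' model; the action values and temperatures below are assumptions.

```python
# Sketch only: a boundedly rational choice prediction as a softmax over action values.
import numpy as np


def boundedly_rational_policy(q_values: np.ndarray, temperature: float) -> np.ndarray:
    """Softmax over action values; higher temperature models tighter human-like bounds."""
    logits = q_values / temperature
    logits -= logits.max()            # numerical stability
    p = np.exp(logits)
    return p / p.sum()


# Hypothetical action values a human partner assigns to three candidate actions.
q = np.array([1.0, 0.8, 0.2])
print(boundedly_rational_policy(q, temperature=0.1))   # nearly optimal human
print(boundedly_rational_policy(q, temperature=1.0))   # strongly bounded human
```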
Over the last decade, there has been a remarkable surge in interest in automated crowd monitoring within the computer vision community. Modern deep-learning approaches have made it possible to develop fully automated vision-based crowd-monitoring applications. However, despite the magnitude of the issue at hand, the significant technological advancements, and the consistent interest of the research community, numerous challenges still need to be overcome. In this article, we delve into six major areas of visual crowd analysis, emphasizing the key developments in each. We outline the crucial unresolved issues that must be tackled in future work to ensure that the field of automated crowd monitoring continues to progress and thrive. Several surveys related to this topic have been conducted in the past. Nonetheless, this article presents a more intuitive categorization of works and concisely covers the latest breakthroughs in the field, incorporating studies carried out within the last few years. By carefully selecting prominent works with significant contributions in terms of novelty or performance gains, this paper presents a more comprehensive exposition of advancements in the current state of the art.
The recently released BARD and ChatGPT have generated substantial interest from a range of researchers and institutions concerned about their impact on education, medicine, law, and more. This paper uses questions from the Watson Jeopardy! Challenge to compare BARD, ChatGPT, and Watson. Using those Jeopardy! questions, we find that, for high-confidence Watson questions, all three systems perform with similar accuracy. We also find that both BARD and ChatGPT perform with the accuracy of a human expert and that the sets of their correct answers are highly similar according to a Tanimoto similarity score. However, both systems can change their solutions to the same input information on subsequent uses: when given the same Jeopardy! category and question multiple times, both BARD and ChatGPT can generate different and conflicting answers. As a result, the paper examines the characteristics of some of the questions that generate different answers to the same inputs. Finally, the paper discusses some of the implications of these differing answers and the impact of the lack of reproducibility on testing such systems.
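For reference, the Tanimoto (Jaccard) similarity used to compare the sets of correctly answered questions can be computed as below. This is an illustrative sketch, not the paper's evaluation code, and the question IDs are made up.

```python
# Sketch only: Tanimoto (Jaccard) similarity between two sets of correctly
# answered question IDs.
def tanimoto(a: set, b: set) -> float:
    """|A ∩ B| / |A ∪ B| for two sets; 1.0 if both sets are empty."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)


bard_correct = {"q1", "q2", "q3", "q5"}
chatgpt_correct = {"q1", "q2", "q4", "q5"}
print(tanimoto(bard_correct, chatgpt_correct))   # 0.6
```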
This paper presents a novel approach to scientific discovery using an artificial intelligence (AI) environment known as ChatGPT, developed by OpenAI. This is the first paper entirely generated with outputs from ChatGPT. We demonstrate how ChatGPT can be instructed through a gamification environment to define and benchmark hypothetical physical theories. Through this environment, ChatGPT successfully simulates the creation of a new improved model, called GPT4, which combines the concepts of GPT in AI (generative pretrained transformer) and GPT in physics (generalized probabilistic theory). We show that GPT4 can use its built-in mathematical and statistical capabilities to simulate and analyze physical laws and phenomena. As a demonstration of its language capabilities, GPT4 also generates a limerick about itself. Overall, our results demonstrate the promising potential for human-AI collaboration in scientific discovery, as well as the importance of designing systems that effectively integrate AI's capabilities with human intelligence.