Drawing sound causal inferences from observational data is often challenging for both authors and reviewers. This paper discusses the design and application of an Artificial Intelligence Causal Research Assistant (AIA) that seeks to help authors improve causal inferences and conclusions drawn from epidemiological data in health risk assessments. The AIA-assisted review process provides structured reviews and recommendations for improving the causal reasoning, analyses, and interpretations made in scientific papers based on epidemiological data. Causal analysis methodologies range from the earlier Bradford-Hill considerations to current causal directed acyclic graph (DAG) and related models; AIA seeks to make these methods more accessible and useful to researchers. AIA uses an external script, a "Causal AI Booster" (CAB) program based on the classical AI concepts of slot-filling in frames organized into task hierarchies to accomplish goals, to guide Large Language Models (LLMs), such as OpenAI's ChatGPT or Google's LaMDA (Bard), in systematically reviewing manuscripts and creating both (a) recommendations for improving analyses and reporting and (b) explanations and support for those recommendations. Review tables and summaries are completed by the LLM in a fixed order, so that later steps build on earlier ones; for example, recommendations for how to state and caveat causal conclusions in the Abstract and Discussion sections reflect the preceding analyses of the Study Design and Data Analysis sections. This work illustrates how current AI can contribute to reviewing and providing constructive feedback on research documents. We believe that such AI-assisted review shows promise for enhancing the quality of causal reasoning and exposition in epidemiological studies, and it suggests the potential for effective human-AI collaboration in scientific authoring and review processes.
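
To make the slot-filling and task-hierarchy idea concrete, the sketch below illustrates in Python how a CAB-style script might drive an LLM through an ordered set of review frames, passing the slots filled by earlier tasks (e.g., Study Design, Data Analysis) as context for later ones (e.g., recommendations for the Abstract and Discussion). This is a minimal illustrative sketch, not the paper's actual CAB implementation: the `Frame` and `run_review` names, the slot questions, and the `ask_llm()` stub are assumptions introduced here for exposition.

```python
# Illustrative sketch only: frame/slot names, task ordering, and the ask_llm()
# stub are assumptions for exposition, not the published CAB implementation.
from dataclasses import dataclass, field
from typing import Dict, List


def ask_llm(prompt: str) -> str:
    """Placeholder for a call to an LLM API (e.g., ChatGPT or Bard)."""
    return f"[LLM response to: {prompt[:60]}...]"


@dataclass
class Frame:
    """A review task represented as a frame with named slots to be filled."""
    name: str
    prompts: Dict[str, str] = field(default_factory=dict)  # slot -> question
    slots: Dict[str, str] = field(default_factory=dict)    # slot -> filled value

    def fill(self, manuscript: str, context: Dict[str, Dict[str, str]]) -> None:
        for slot, question in self.prompts.items():
            # Slots filled by earlier frames are passed as context, so later
            # recommendations (e.g., for the Discussion) reflect earlier findings.
            prompt = (
                f"Manuscript:\n{manuscript}\n\n"
                f"Findings from earlier review tasks: {context}\n\n"
                f"Task '{self.name}', slot '{slot}': {question}"
            )
            self.slots[slot] = ask_llm(prompt)


def run_review(manuscript: str, tasks: List[Frame]) -> Dict[str, Dict[str, str]]:
    """Complete the task hierarchy in order, accumulating filled slots as context."""
    completed: Dict[str, Dict[str, str]] = {}
    for frame in tasks:
        frame.fill(manuscript, completed)
        completed[frame.name] = frame.slots
    return completed


if __name__ == "__main__":
    # Ordered subtasks of the top-level goal "review causal reasoning":
    # Study Design and Data Analysis are reviewed before the Abstract/Discussion,
    # so conclusion-wording recommendations can draw on those earlier analyses.
    tasks = [
        Frame("study_design", prompts={
            "design_type": "What study design is used, and is it stated clearly?",
            "confounders": "Which potential confounders are (or are not) addressed?",
        }),
        Frame("data_analysis", prompts={
            "methods": "Do the statistical methods support causal interpretation?",
        }),
        Frame("abstract_discussion", prompts={
            "causal_language": "How should causal conclusions be stated and caveated, "
                               "given the study-design and data-analysis findings above?",
        }),
    ]
    report = run_review("...manuscript text...", tasks)
    for task_name, slots in report.items():
        print(task_name, slots)
```

In this sketch the "task hierarchy" is simply the ordered list of frames under the top-level review goal, which mirrors the sequencing described above: review tables for Study Design and Data Analysis are completed first, and their contents become inputs to the frames that recommend how causal conclusions should be worded.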