Machine learning (ML) will most likely play a large role in many processes in the future, including in the insurance industry. However, ML models are at risk of being attacked and manipulated. A model compromised by a backdoor attack loses its integrity and can no longer be deemed trustworthy. Ensuring the trustworthiness of ML models is crucial, as compromised models can lead to significant financial and reputational damage for insurance companies. In this work, the robustness of Gradient Boosted Decision Tree (GBDT) models and Deep Neural Networks (DNNs) is evaluated within an insurance context. To this end, two GBDT models and two DNNs are trained on two different tabular insurance datasets. Past research in this domain has mainly used homogeneous data, and there are comparatively few insights regarding heterogeneous tabular data. The ML tasks performed on the datasets are claim prediction (regression) and fraud detection (binary classification). For the backdoor attacks, samples containing a specific trigger pattern were crafted and added to the training data. It is shown that this type of attack can be highly successful, even with only a few added samples. The backdoor attacks worked well on the models trained on one dataset but poorly on the models trained on the other. In real-world scenarios, an attacker faces several obstacles, but since attacks can succeed with very few added samples, this risk should be evaluated. Understanding and mitigating these risks is therefore essential for the reliable deployment of ML in critical applications.
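
To make the attack setup concrete, the following is a minimal, illustrative sketch of such a data-poisoning backdoor on tabular data. The synthetic dataset, the trigger pattern (a fixed combination of values in two features), the number of poisoned samples, and the choice of a scikit-learn GBDT are assumptions made here for illustration only; they do not reproduce the datasets or experiments of this work.

# Minimal sketch of a backdoor (data-poisoning) attack on tabular data.
# Dataset, trigger, and model are illustrative assumptions, not the
# actual setup used in this work.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic tabular binary-classification data (stand-in for fraud detection).
X, y = make_classification(n_samples=5000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Trigger: an unusual, fixed combination of values in two features.
TRIGGER_FEATURES = [3, 7]
TRIGGER_VALUES = [9.9, -9.9]
TARGET_LABEL = 0  # label the attacker wants triggered inputs to receive

def apply_trigger(X):
    """Return a copy of X with the trigger pattern stamped into every row."""
    Xp = X.copy()
    Xp[:, TRIGGER_FEATURES] = TRIGGER_VALUES
    return Xp

# Craft a small number of poisoned samples and append them to the training data.
n_poison = 20
idx = rng.choice(len(X_train), n_poison, replace=False)
X_poison = apply_trigger(X_train[idx])
y_poison = np.full(n_poison, TARGET_LABEL)
X_train_bd = np.vstack([X_train, X_poison])
y_train_bd = np.concatenate([y_train, y_poison])

# Train a GBDT on the poisoned training set.
model = GradientBoostingClassifier(random_state=0).fit(X_train_bd, y_train_bd)

# Evaluate: accuracy on clean data vs. attack success rate on triggered data.
clean_acc = model.score(X_test, y_test)
attack_sr = np.mean(model.predict(apply_trigger(X_test)) == TARGET_LABEL)
print(f"clean accuracy:      {clean_acc:.3f}")
print(f"attack success rate: {attack_sr:.3f}")

In this sketch, the attack success rate measures how often inputs carrying the trigger are pushed to the attacker's chosen label, while the clean accuracy checks that the model still behaves inconspicuously on unmodified data.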