Centrality measures were first introduced in Social Network Analysis (SNA), which is widely used in the humanities and social sciences. More recently, they have received attention in network science in general, e.g., for knowledge graphs and artificial intelligence. If a network is not error-free or contains multiple layers, the quality of these measures becomes questionable. For example, historical and narrative texts in ancient languages are usually challenging for natural language processing methods and artificial intelligence technologies developed for modern languages, due to their complexity and the lack of trained models. In addition, because sources remain silent, nodes and edges may be missing from the network, which can influence the results of SNA. Other aspects also affect the data: many networks include additional layers, e.g., spatial information or archaeological artifacts. The same holds for knowledge graphs. In this paper, we summarize, compare, and evaluate existing and novel methods for analyzing the robustness of networks. We introduce a method with different removal strategies to analyze how additional or missing layers, nodes, or edges in a random network influence centrality measures. We show that, regardless of the error measure, the robustness of social networks depends heavily on network structure, which raises several new challenges for future research. In general, networks can be assumed to be rather robust against small errors and a small amount of missing data if they follow a scale-free degree distribution. The results of this paper are not limited to social networks but can be applied in all fields working with centrality measures.
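To make the removal-strategy idea concrete, the following is a minimal sketch (not the paper's actual implementation) of such an experiment: nodes are removed at random from a scale-free graph, and the Spearman rank correlation between the original and perturbed centrality scores serves as one possible error measure. The choice of networkx, betweenness centrality, the removal fraction, and the correlation-based error measure are all illustrative assumptions.

```python
# Sketch of a node-removal robustness experiment (assumed setup, not the
# authors' code): perturb a random scale-free network and measure how much
# the centrality ranking of the surviving nodes changes.
import random

import networkx as nx
from scipy.stats import spearmanr


def robustness(G, centrality=nx.betweenness_centrality,
               fraction=0.05, trials=10):
    """Mean Spearman rank correlation between the centrality scores of G
    and of randomly node-perturbed copies of G (higher = more robust)."""
    base = centrality(G)
    scores = []
    for _ in range(trials):
        H = G.copy()
        # Removal strategy: drop a random fraction of the nodes.
        k = int(fraction * H.number_of_nodes())
        H.remove_nodes_from(random.sample(list(H.nodes), k))
        perturbed = centrality(H)
        # Compare rankings only on the nodes present in both graphs.
        shared = [n for n in base if n in perturbed]
        rho, _ = spearmanr([base[n] for n in shared],
                           [perturbed[n] for n in shared])
        scores.append(rho)
    return sum(scores) / len(scores)


# A scale-free (Barabási–Albert) graph; per the abstract's finding, such
# networks should stay fairly robust under small random removals.
G = nx.barabasi_albert_graph(n=500, m=3, seed=42)
print(robustness(G))
```

Other removal strategies (e.g., targeting high-degree nodes, or dropping edges or whole layers) can be swapped in by replacing the random sampling step, and other centrality measures or error measures can be substituted via the function arguments.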