The widespread adoption of increasingly capable artificial intelligence systems raises ethical concerns. The lack of transparency of state-of-the-art models such as deep neural networks hinders the adoption of such black-box methods in domains like medicine and finance, where model transparency is a mandatory requirement and white-box models are therefore largely preferred over potentially more accurate but opaque techniques.
For this reason, in this paper, we focus on ruleset learning, arguably the most interpretable class of learning techniques. Specifically, we propose the Bayes Point Rule Classifier, an ensemble methodology inspired by the Bayes Point Machine, to improve the performance and robustness of rule-based classifiers. In addition, to improve interpretability, we propose a technique that retains only the most important rules, increasing the transparency of the ensemble and making its decision-making process easier to understand.
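As a purely illustrative aside, the sketch below gives a simplified picture of the kind of model involved: a majority-vote ensemble of rulesets with coverage-based rule pruning. All names here (`Rule`, `ruleset_predict`, `prune_by_importance`) and the importance heuristic are hypothetical placeholders and do not reproduce the actual Bayes Point Rule Classifier.

```python
# Illustrative sketch only (hypothetical names, not the paper's algorithm):
# a majority-vote ensemble of rulesets with importance-based rule pruning.

class Rule:
    """A conjunction of (feature == value) tests over a discrete feature vector."""
    def __init__(self, conditions):
        self.conditions = conditions  # e.g. {0: "red", 3: "round"}

    def covers(self, x):
        return all(x[f] == v for f, v in self.conditions.items())

def ruleset_predict(ruleset, x):
    # A ruleset predicts the positive class if any of its rules covers x.
    return int(any(rule.covers(x) for rule in ruleset))

def ensemble_predict(rulesets, x):
    # Majority vote over the base rulesets (a crude stand-in for Bayes point averaging).
    votes = [ruleset_predict(rs, x) for rs in rulesets]
    return int(sum(votes) >= len(votes) / 2)

def prune_by_importance(rulesets, X, y, keep_ratio=0.5):
    # Rank all rules by a simple precision-times-coverage score and keep the top ones,
    # yielding a smaller, more transparent model.
    scored = []
    for rs in rulesets:
        for rule in rs:
            covered = [rule.covers(x) for x in X]
            n_cov = sum(covered)
            if n_cov == 0:
                continue
            precision = sum(c and yi == 1 for c, yi in zip(covered, y)) / n_cov
            scored.append((precision * n_cov / len(y), rule))
    scored.sort(key=lambda t: t[0], reverse=True)
    kept = [r for _, r in scored[: max(1, int(len(scored) * keep_ratio))]]
    return kept  # the pruned ensemble collapses to a single, smaller ruleset
```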
We also propose FIND-RS, a greedy ruleset learning algorithm that, under mild conditions, is guaranteed to learn a hypothesis with perfect accuracy on the training set while preserving good generalization to unseen data points.
We performed an extensive experimental evaluation showing that FIND-RS achieves state-of-the-art classification performance at the cost of a slight increase in ruleset complexity with respect to its competitors. However, when paired with the Bayes Point Rule Classifier, FIND-RS outperforms all the considered baselines.