Abstract: Argumentation frameworks have been used as tools for reconciling ontology alignments through a series of proposals and counter-proposals, i.e., arguments. However, argumentation outcomes may not be obvious to human users. Explaining the reasoning behind the argumentation process may help users understand its outcome and may influence their confidence in, and acceptance of, the results. This paper presents a mechanism for explaining how agreed alignments are established. Our mechanism is based on tracing each step of the argumentation process. These traces are then interpreted using a set of association rules, built from a decision tree that represents all possible statuses of arguments. From these rules, a multi-level explanation in natural language is provided to users.