Explanation of ensemble models

Josue Obregon, Jae Yoon Jung

Research output: Chapter in Book/Report/Conference proceeding › Chapter › peer-review

6 Scopus citations

Abstract

Ensemble learning is a type of machine learning, typically supervised learning, that combines the decisions of multiple individual models to improve classification or regression accuracy. Since their introduction two decades ago, ensemble models have been widely used not only in academia but also in practical applications, particularly in data science competitions such as Kaggle, because they excel with tabular and structured data. However, understanding the decision mechanisms of such large models is a challenge. In the explainable artificial intelligence (XAI) literature, several studies have attempted to make ensemble models more transparent. Although deep learning has recently attracted significant attention in XAI research, it remains important to introduce methods that help explain ensemble models. This chapter introduces the problem of explaining ensemble models in detail. Two of the most popular ensemble approaches, bagging and boosting, are first introduced, and the main factors that make ensemble models difficult to explain are then discussed. Finally, a taxonomy of ensemble interpretation methods is presented, along with representative techniques for each category.
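To make the two approaches named in the abstract concrete, the following is a minimal sketch in Python using scikit-learn; the library, the synthetic dataset, and the parameter values are illustrative assumptions, not details taken from the chapter.

# A minimal sketch of bagging and boosting, using scikit-learn on a
# synthetic dataset. All choices here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, BaggingClassifier
from sklearn.model_selection import train_test_split

# A toy tabular dataset, the setting where tree ensembles typically excel.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Bagging: fit each base tree on a bootstrap sample of the training data
# and combine the trees' predictions by majority vote.
bagging = BaggingClassifier(n_estimators=100, random_state=0)
bagging.fit(X_train, y_train)

# Boosting (AdaBoost): fit base trees sequentially, upweighting the
# training examples that earlier trees misclassified.
boosting = AdaBoostClassifier(n_estimators=100, random_state=0)
boosting.fit(X_train, y_train)

print("bagging accuracy: ", bagging.score(X_test, y_test))
print("boosting accuracy:", boosting.score(X_test, y_test))

Each fitted model above aggregates 100 decision trees, which illustrates the interpretability problem the chapter addresses: explaining a single tree is straightforward, while explaining the combined vote of a hundred is not.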

Original language: English
Title of host publication: Human-Centered Artificial Intelligence
Subtitle of host publication: Research and Applications
Publisher: Elsevier Inc.
Pages: 51-72
Number of pages: 22
ISBN (Electronic): 9780323856485
ISBN (Print): 9780323856492
State: Published - 1 Jan 2022

Keywords

  • AdaBoost
  • Bagging
  • Boosting
  • Ensemble learning
  • Explainable artificial intelligence
  • Explainable machine learning
  • Interpretable machine learning
  • Tree ensembles
