Interpretation of black-box predictive models

Vladimir Cherkassky, Sauptik Dhar

Research output: Chapter in Book/Report/Conference proceeding › Chapter

6 Scopus citations

Abstract

Many machine learning applications involve predictive data-analytic modeling using black-box techniques. A common problem in such studies is the understanding and interpretation of estimated nonlinear high-dimensional models. Whereas human users naturally favor simple interpretable models, such models may not be practically feasible with modern adaptive methods such as Support Vector Machines (SVMs), Multilayer Perceptron Networks (MLPs), AdaBoost, etc. This chapter provides a brief survey of current techniques for visualization and interpretation of SVM-based classification models, and then highlights potential problems with such methods. We argue that, under the VC-theoretical framework, model interpretation cannot be achieved via technical analysis of predictive data-analytic models alone. That is, any meaningful interpretation should incorporate application domain knowledge outside data analysis. We also describe a simple graphical technique for visualization of SVM classification models.
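The graphical technique mentioned above reduces each sample to the scalar output of the estimated decision function and plots per-class histograms of these values. A minimal sketch of this idea, assuming a linear decision function f(x) = w·x + b whose parameters (here a hypothetical `w`, `b`) have already been estimated, e.g. by a linear SVM:

```python
import numpy as np

def histogram_of_projections(X, y, w, b, bins=10, lo=-3.0, hi=3.0):
    """Per-class histogram counts of decision values f(x) = X.w + b.

    Projecting samples onto the normal direction of the decision
    boundary turns a high-dimensional model into a univariate plot:
    overlap of the two class histograms near f(x) = 0 indicates
    samples close to the decision boundary.
    """
    f = X @ w + b                                # one scalar per sample
    edges = np.linspace(lo, hi, bins + 1)        # common bin edges
    return {label: np.histogram(f[y == label], bins=edges)[0]
            for label in np.unique(y)}

# Toy data: two Gaussian classes separated along the first coordinate,
# with an assumed (not fitted) decision boundary w = (1, 0), b = 0.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.5, 0.5, (50, 2)),
               rng.normal(+1.5, 0.5, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
hists = histogram_of_projections(X, y, w=np.array([1.0, 0.0]), b=0.0)
```

The resulting counts can be drawn as two overlaid bar charts; a large overlap region around zero would suggest many samples fall near the margin.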

Original language: English (US)
Title of host publication: Measures of Complexity
Subtitle of host publication: Festschrift for Alexey Chervonenkis
Publisher: Springer International Publishing
Pages: 267-286
Number of pages: 20
ISBN (Electronic): 9783319218526
ISBN (Print): 9783319218519
State: Published - Oct 5 2015

