TY - JOUR
T1 - A hybrid classification algorithm by subspace partitioning through semi-supervised decision tree
AU - Kim, Kyoungok
N1 - Publisher Copyright:
© 2016 Elsevier Ltd
PY - 2016/12/1
Y1 - 2016/12/1
N2 - Among data mining techniques, the decision tree is one of the more widely used methods for building classification models in the real world because of its simplicity and ease of interpretation. However, the method has some drawbacks, including instability, the nonsmooth nature of its decision boundary, and the possibility of overfitting. To overcome these problems, several works have combined a decision tree with other classifiers, such as logistic regression, support vector machines, and neural networks, producing hybrid models that exploit the relative advantages of each classifier while offsetting the drawbacks of the individual models. Some hybrid models use the decision tree to quickly and efficiently partition the input space, and many studies have demonstrated the effectiveness of such hybrid methods. However, there is room for further improvement by considering the topological properties of a dataset, because typical decision trees split nodes based only on the target variable. The proposed semi-supervised decision tree splits internal nodes using both the labels and the structural characteristics of the data for subspace partitioning, thereby improving the accuracy of the classifiers applied to the terminal nodes in hybrid models. Experimental results confirm the superiority of the proposed algorithm and illustrate its detailed characteristics.
AB - Among data mining techniques, the decision tree is one of the more widely used methods for building classification models in the real world because of its simplicity and ease of interpretation. However, the method has some drawbacks, including instability, the nonsmooth nature of its decision boundary, and the possibility of overfitting. To overcome these problems, several works have combined a decision tree with other classifiers, such as logistic regression, support vector machines, and neural networks, producing hybrid models that exploit the relative advantages of each classifier while offsetting the drawbacks of the individual models. Some hybrid models use the decision tree to quickly and efficiently partition the input space, and many studies have demonstrated the effectiveness of such hybrid methods. However, there is room for further improvement by considering the topological properties of a dataset, because typical decision trees split nodes based only on the target variable. The proposed semi-supervised decision tree splits internal nodes using both the labels and the structural characteristics of the data for subspace partitioning, thereby improving the accuracy of the classifiers applied to the terminal nodes in hybrid models. Experimental results confirm the superiority of the proposed algorithm and illustrate its detailed characteristics.
KW - Decision tree
KW - Inhomogeneous measure
KW - Semi-supervised decision tree
KW - Subspace partitioning
UR - http://www.scopus.com/inward/record.url?scp=84994805867&partnerID=8YFLogxK
U2 - 10.1016/j.patcog.2016.04.016
DO - 10.1016/j.patcog.2016.04.016
M3 - Article
AN - SCOPUS:84994805867
SN - 0031-3203
VL - 60
SP - 157
EP - 163
JO - Pattern Recognition
JF - Pattern Recognition
ER -