Meta-classifiers

Since I’m already tuning hyperparameters for many types of machine learning models at once, why not also combine all of their predictions and see how well that meta-classifier performs? Meta-classifiers are used a lot in Kaggle competitions. The big idea is that if you trained, say, 5 different model types, the meta-classifier will often perform at least as well as your best single model. So it’s a cheap way to get a boost in performance. I like this simple figure from the mlxtend docs showing how a meta-classifier works.

The prediction from each model contributes to the prediction of the meta-classifier. Source.

The final prediction can be based on majority votes, averaged predictions, or even on using each model's predicted probabilities as features in a meta-model (say, a logistic regression).
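As a quick sketch of the votes/averages approach (not from the post itself), scikit-learn's `VotingClassifier` combines several fitted models into one ensemble; the base estimators and synthetic dataset here are just illustrative choices:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

# Synthetic data just for illustration
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# voting="hard" takes the majority class across models;
# voting="soft" averages the predicted probabilities instead
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
    ],
    voting="soft",
)
voter.fit(X_train, y_train)
print(voter.score(X_test, y_test))
```

Soft voting only works when every base estimator exposes `predict_proba`; otherwise fall back to hard voting.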

mlxtend seems to make this process very easy if you’re already using scikit-learn.

Jason Brownlee also has a tutorial that uses only scikit-learn to build a meta-classifier and a meta-regressor.
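For the probabilities-as-features variant, scikit-learn ships its own `StackingClassifier`: out-of-fold predictions from the base models become the inputs to a final logistic regression. A minimal sketch, with estimators and data chosen only for illustration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic data just for illustration
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Base models produce out-of-fold predictions (via cv=5), which the
# final_estimator then learns to combine
stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("svc", SVC(probability=True, random_state=0)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print(stack.score(X_test, y_test))
```

By default `StackingClassifier` feeds `predict_proba` outputs to the meta-model when the base estimators support them, which is why `SVC` gets `probability=True` here.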
