Hyperopt

I am interested in automating hyperparameter tuning for machine learning models. I favor grid searches, manually expanding the grid whenever a “best” parameter falls on the edge (see Jason Brownlee’s post on the topic). Today, I came across the Python package Hyperopt and its scikit-learn-specific wrapper Hyperopt-sklearn, which have built-in algorithms for strategically searching a space of parameters. In a quick test, Hyperopt-sklearn returned a higher-accuracy model than my own “manual” tuning of a random forest. However, I cannot find in the Hyperopt-sklearn documentation how to specify a cross-validation method, so the comparison is likely not a fair one. Hyperopt (non-sklearn) seems to give more control over the search space and the cross-validation method employed. I plan to play with that soon to see what things look like with more control.

For comparison:

My tuning:

from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, LeaveOneOut
import numpy as np

# Leave-one-out cross-validation over a small, manually specified grid
loo = LeaveOneOut()
classifier = RandomForestClassifier(random_state=1)
parameters = {
    'n_estimators': [10, 20, 30, 40, 50, 60, 70, 80, 90],  # 100, 110, 120, 130, 140, 150
    'max_features': ['sqrt', 'log2']
}
clf = GridSearchCV(estimator=classifier, param_grid=parameters,
                   n_jobs=-1, cv=loo, scoring='accuracy')
clf.fit(table_x, np.ravel(table_y))
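
After fitting, GridSearchCV exposes the winning parameters and score, which is how I check whether a “best” value landed on the edge of the grid (this just continues the snippet above):

# Inspect the winning parameters and leave-one-out accuracy;
# if n_estimators comes back as 90 (the grid edge), I extend the grid and re-run.
print(clf.best_params_)
print(clf.best_score_)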

Hyperopt-sklearn tuning:

from hpsklearn import HyperoptEstimator, random_forest
from hyperopt import tpe

# TPE search over Hyperopt-sklearn's default random forest space
estim = HyperoptEstimator(algo=tpe.suggest,
                          classifier=random_forest('myrf'),
                          max_evals=150)

estim.fit(table_x, table_y)
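
Hyperopt-sklearn reports its result in a similar way; if I read the hpsklearn examples correctly, the fitted estimator exposes the winning model and a score on whatever data you hand it:

# My reading of the hpsklearn API: best_model() returns the winning
# pipeline, and score() reports accuracy on the data passed in.
print(estim.best_model())
print(estim.score(table_x, table_y))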

You’ll notice, though, that I specified my search grid and cross-validation method in my own code. I’m not having much luck figuring out how to do that in Hyperopt-sklearn. However, it seems I can in Hyperopt, which I’ll try next; a rough sketch of what that might look like is below.
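
As a first pass, here is an untested sketch (the search space is just my guess at a reasonable starting point) of how plain Hyperopt could wrap the same random forest with an explicit space and leave-one-out cross-validation:

from hyperopt import fmin, tpe, hp, Trials, space_eval
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
import numpy as np

# Explicit search space (my assumption of a sensible starting grid)
space = {
    'n_estimators': hp.choice('n_estimators', list(range(10, 100, 10))),
    'max_features': hp.choice('max_features', ['sqrt', 'log2']),
}

def objective(params):
    clf = RandomForestClassifier(random_state=1, **params)
    # Same leave-one-out accuracy I used with GridSearchCV
    acc = cross_val_score(clf, table_x, np.ravel(table_y),
                          cv=LeaveOneOut(), scoring='accuracy').mean()
    return -acc  # Hyperopt minimizes, so negate accuracy

trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=150, trials=trials)
# fmin returns indices for hp.choice parameters; map them back to values
print(space_eval(space, best))

This should give the control I want: the space, the cross-validation scheme, and the scoring are all spelled out in the objective function rather than hidden inside the wrapper.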
