Particle swarm optimization for hyperparameter tuning in MLJ.
MLJParticleSwarmOptimization offers a suite of particle swarm algorithms, extending MLJTuning's existing collection of tuning strategies. Currently supported variants and planned releases include:

- `ParticleSwarm`: the original algorithm as conceived by Kennedy and Eberhart [1] (sketched below)
- `AdaptiveParticleSwarm`: Zhan et al.'s variant with adaptive control of swarm coefficients [2]
- `OMOPSO`: Sierra and Coello's multi-objective particle swarm variant [3]
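For intuition, the original algorithm [1] moves each particle through the search space by blending its current velocity with pulls toward its own best position and the swarm's best position. Below is a minimal sketch of one update step; the function name, variable names, and coefficient defaults are illustrative, not the package's internals:

```julia
# One classic PSO update step (illustrative sketch, not the package's internals).
# w: inertia weight, c1: cognitive coefficient, c2: social coefficient.
function pso_step!(x, v, pbest, gbest; w=0.7, c1=2.0, c2=2.0)
    r1, r2 = rand(length(x)), rand(length(x))
    @. v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x)  # pull toward personal and global bests
    @. x = x + v                                         # move the particle
    return x, v
end

# Example: update one 2-dimensional particle.
x, v = rand(2), zeros(2)
pso_step!(x, v, [0.5, 0.5], [0.3, 0.7])
```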
This package is registered and can be installed via the Julia REPL:

```julia
julia> ]add MLJParticleSwarmOptimization
```
Most particle swarm algorithms are designed for problems in continuous domains. To extend support for MLJ's integer `NumericRange` and `NominalRange`, we encode discrete hyperparameters with an internal continuous representation, as proposed by Strasser et al. [4]. See the tuning strategies' documentation and the referenced paper for more details.
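Roughly, the idea is that each candidate value of a discrete hyperparameter gets its own continuous coordinate, and a particle's coordinates are normalized into a probability distribution from which a concrete value is decoded. A hedged sketch of that idea (the names and decoding rule below are ours; the package's actual representation may differ):

```julia
using StatsBase  # for weighted sampling

# Hypothetical nominal hyperparameter values:
values = [:gini, :entropy, :scaled_entropy]

# A particle's continuous coordinates for this hyperparameter, one per value:
weights = [0.2, 1.4, 0.7]

# Clamp to nonnegative, normalize to probabilities, and decode by sampling:
w = max.(weights, 0.0)
probs = w ./ sum(w)
chosen = sample(values, Weights(probs))
```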
As a quick demonstration, we tune a random forest (an ensemble of decision trees) on synthetic regression data, searching over the number of subfeatures and the bagging fraction:

```julia
julia> using MLJ, MLJDecisionTreeInterface, MLJParticleSwarmOptimization, Plots, StableRNGs

julia> rng = StableRNG(1234);

julia> X = MLJ.table(rand(rng, 100, 10));

julia> y = 2X.x1 - X.x2 + 0.05*rand(rng, 100);

julia> Tree = @load DecisionTreeRegressor pkg=DecisionTree verbosity=0;

julia> tree = Tree();

julia> forest = EnsembleModel(atom=tree);

julia> r1 = range(forest, :(atom.n_subfeatures), lower=1, upper=9);

julia> r2 = range(forest, :bagging_fraction, lower=0.4, upper=1.0);

julia> self_tuning_forest = TunedModel(
           model=forest,
           tuning=ParticleSwarm(rng=StableRNG(0)),
           resampling=CV(nfolds=6, rng=StableRNG(1)),
           range=[r1, r2],
           measure=rms,
           n=15
       );

julia> mach = machine(self_tuning_forest, X, y);

julia> fit!(mach, verbosity=0);

julia> plot(mach)
```
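After fitting, the winning hyperparameters can be inspected with MLJ's standard `TunedModel` accessors (general MLJ machinery, not specific to this package):

```julia
julia> fitted_params(mach).best_model   # model with the best hyperparameters found

julia> report(mach).best_history_entry  # evaluation details for that model
```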
`AdaptiveParticleSwarm` can be swapped in the same way; it adjusts the swarm coefficients automatically as the search progresses [2]:

```julia
julia> self_tuning_forest = TunedModel(
           model=forest,
           tuning=AdaptiveParticleSwarm(rng=StableRNG(0)),
           resampling=CV(nfolds=6, rng=StableRNG(1)),
           range=[r1, r2],
           measure=rms,
           n=15
       );

julia> mach = machine(self_tuning_forest, X, y);

julia> fit!(mach, verbosity=0);

julia> plot(mach)
```