This package is deprecated.
LearningStrategies is a modular framework for building iterative algorithms in Julia.
Below, some of the key concepts are briefly explained and a few examples are given. A more in-depth notebook can be found here.
Many algorithms can be generalized to the following pseudocode:
```
setup
while not finished:
    (update model)
    (iteration logic)
cleanup
```
The core function of LearningStrategies is a straightforward, abstract implementation of the above loop. A model can be learned by a `LearningStrategy` or by a collection of strategies in a `MetaStrategy`.
```julia
function learn!(model, strat::LearningStrategy, data)
    setup!(strat, model[, data])
    for (i, item) in enumerate(data)
        update!(model, strat[, i], item)
        hook(strat, model[, data], i)
        finished(strat, model[, data], i) && break
    end
    cleanup!(strat, model)
    model
end
```
- For a `MetaStrategy`, each function (`setup!`, `update!`, `hook`, `finished`, `cleanup!`) is mapped over the contained strategies.
- To let `item == data`, pass the argument `Iterators.repeated(data)` (see the sketch below).
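To make these hooks concrete, here is a minimal sketch of a custom strategy. The `CountItems` name and its stopping rule are made up for illustration; the hook signatures are the reduced-arity forms shown in `learn!` above. It is driven with `Iterators.repeated(data)`, so every `item` is the full dataset.

```julia
using LearningStrategies
import LearningStrategies: setup!, update!, hook, finished, cleanup!

# Illustrative strategy: counts how many items it has processed.
mutable struct CountItems <: LearningStrategy
    n::Int
    CountItems() = new(0)
end

setup!(s::CountItems, model)        = (s.n = 0; nothing)
update!(model, s::CountItems, item) = (s.n += 1)
hook(s::CountItems, model, i)       = nothing        # per-iteration logic would go here
finished(s::CountItems, model, i)   = s.n >= 3       # stop after three items
cleanup!(s::CountItems, model)      = println("processed $(s.n) items")

# `Iterators.repeated(data)` makes every `item` equal to `data` itself.
data = [1.0, 2.0, 3.0]
learn!(nothing, CountItems(), Iterators.repeated(data))
```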
See the help for each strategy (e.g. `?MaxIter`) for more info on the built-in strategies:

- `MetaStrategy`
- `MaxIter`
- `TimeLimit`
- `Converged`
- `ConvergedTo`
- `IterFunction`
- `Tracer`
- `Breaker`
- `Verbose`
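The built-in strategies compose into common stopping rules via `strategy`. Below is a hedged sketch: it assumes the `IterFunction(f)` and `Converged(f; tol = 1e-6)` constructors described by `?IterFunction` and `?Converged`, where `IterFunction` calls `f(model, i)` each iteration and `Converged` stops once `f(model)` stops changing by more than `tol`.

```julia
using LearningStrategies

# Toy model: a one-element vector that is halved every iteration,
# so it converges towards zero.
model = [1.0]

# Assumed API: IterFunction(f) calls f(model, i) inside `hook` every iteration.
shrink = IterFunction((m, i) -> (m[1] /= 2))

# Stop when the model stops changing by more than 1e-8, or after 100
# iterations, whichever comes first. `copy` avoids handing Converged a
# reference to the live model vector.
learn!(model, strategy(shrink, Converged(m -> copy(m), tol = 1e-8), MaxIter(100)))
```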
For example, learning with a single strategy:

```julia
julia> using LearningStrategies

julia> s = Verbose(TimeLimit(2))
Verbose TimeLimit(2.0)

julia> @elapsed learn!(nothing, s)  # data == InfiniteNothing()
INFO: TimeLimit(2.0) finished
2.000225545
```
Strategies can be combined into a `MetaStrategy` with `strategy`:

```julia
julia> using LearningStrategies

julia> s = strategy(Verbose(MaxIter(5)), TimeLimit(10))
MetaStrategy
  > Verbose MaxIter(5)
  > TimeLimit(10.0)

julia> learn!(nothing, s, 1:100)
INFO: MaxIter: 1/5
INFO: MaxIter: 2/5
INFO: MaxIter: 3/5
INFO: MaxIter: 4/5
INFO: MaxIter: 5/5
INFO: MaxIter(5) finished
```
A model can also be fit with a custom strategy. For example, a simple linear-regression solver:

```julia
using LearningStrategies
import LearningStrategies: update!, finished
import Base.Iterators: repeated

# The model is just a container for the coefficients.
struct MyLinearModel
    coef
end

# The strategy that knows how to fit the model.
struct MyLinearModelSolver <: LearningStrategy end

# One "update" solves the least-squares problem directly.
update!(model, s::MyLinearModelSolver, xy) = (model.coef[:] = xy[1] \ xy[2])

# A single pass through the data is enough.
finished(s::MyLinearModelSolver, model) = true

# generate some fake data
x = randn(100, 5)
y = x * range(-1, stop=1, length=5) + randn(100)
data = (x, y)

# Create the model
model = MyLinearModel(zeros(5))

# learn! the model with data (x, y)
learn!(model, MyLinearModelSolver(), repeated(data))

# check that it works
model.coef == x \ y
```
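Because `MyLinearModelSolver` is just another strategy, it composes with the built-ins via `strategy`. A purely illustrative follow-up (the solver already finishes after a single pass, so the iteration cap never triggers):

```julia
# Refit with logging and a safety cap on the number of iterations.
model2 = MyLinearModel(zeros(5))
learn!(model2, strategy(MyLinearModelSolver(), Verbose(MaxIter(10))), repeated(data))
model2.coef == x \ y
```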
There are some user-contributed snippets in the examples dir. dftracer.jl shows a `Tracer` with a `DataFrame` as the underlying storage.
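For comparison, here is a hedged sketch of the plain, `Vector`-backed `Tracer`. It assumes the `Tracer(::Type, f, b = 1)` constructor described by `?Tracer`, which records `f(model, i)` every `b` iterations.

```julia
using LearningStrategies

# Record the iteration number at every step; stop after 5 iterations.
t = Tracer(Int, (model, i) -> i)
learn!(nothing, strategy(t, MaxIter(5)))
# The recorded values accumulate inside the tracer's internal vector.
```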
LearningStrategies is partially inspired by IterationManagers and (Tom Breloff's) conversations with Spencer Lyon. This functionality was previously part of the StochasticOptimization package, but was split off as a dependency.
Complex LearningStrategy examples (using previous LearningStrategies versions) can be found in StochasticOptimization and from Tom Breloff's blog posts.
Examples using the current version can be found in SparseRegression.