TSMLextra extends the TSML machine learning models by incorporating the ScikitLearn and Caret libraries through a common API.
TSMLextra relies on PyCall.jl and RCall.jl to expose external ML libraries under a common API for heterogeneous combinations of ML ensembles. It introduces three types of ensembles: VoteEnsemble, StackEnsemble, and BestEnsemble. Each ensemble allows heterogeneous combinations of ML libraries from R, Python, and Julia.
The design/framework of this package is heavily influenced by Samuel Jenkins' Orchestra.jl and Paulito Palmes' CombineML.jl packages.
- extends TSML to include external machine learning libraries from R's caret and Python's scikit-learn
- uses common API wrappers for ML training and prediction across heterogeneous libraries
TSMLextra is registered in the official Julia package registry. The latest release can be installed using Julia's package manager, which is entered by pressing ] at the julia prompt:
julia> ]
(v1.1) pkg> add TSMLextra
Or, equivalently, via the Pkg API:
julia> using Pkg
julia> Pkg.add("TSMLextra")
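Note that the scikit-learn and caret wrappers are exposed through PyCall.jl and RCall.jl, so the corresponding Python and R libraries must be reachable from Julia before the external learners can be used.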
- STABLE — documentation of the most recently tagged version.
- DEVEL — documentation of the in-development version.
TSMLextra is tested and actively developed on Julia 1.0 and above for Linux and macOS.
There is no support for Julia versions 0.4, 0.5, 0.6, and 0.7.
TSMLextra allows mixing of heterogeneous ML libraries from Python's ScikitLearn, R's Caret, and Julia using a common API for seamless ensembling to create complex models for robust time-series prediction.
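As an illustration, here is a minimal sketch of such a mix. It assumes scikit-learn and caret are reachable through PyCall.jl and RCall.jl, and uses example learner names ("RandomForestClassifier" for scikit-learn, "rf" for caret); it combines a Julia, a Python, and an R learner in a single VoteEnsemble:
using TSML
using TSMLextra
# combine learners from Julia, Python, and R in one voting ensemble
iris = getiris()                                               # sample dataset bundled with TSML
X = iris[:,1:4]
Y = iris[:,5] |> Vector
rfjl   = RandomForest()                                        # Julia learner from TSML
sklrn  = SKLearner(Dict(:learner=>"RandomForestClassifier"))   # Python scikit-learn learner via PyCall.jl
crtlrn = CaretLearner(Dict(:learner=>"rf"))                    # R caret learner via RCall.jl
vote   = VoteEnsemble(Dict(:learners=>[rfjl,sklrn,crtlrn]))    # majority vote over the three learners
fit!(vote,X,Y)                 # train all member learners
pred = transform!(vote,X)      # predict by majority vote
StackEnsemble and BestEnsemble follow the same pattern.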
Generally, you will need the transformers and utilities in TSML for time-series processing. To use them, it is standard in TSML code to declare the following at the top of your application:
using TSML
using TSMLextra
# Setup source data and filters to aggregate and impute hourly
fname = joinpath(dirname(pathof(TSML)),"../data/testdata.csv")
csvreader = DataReader(Dict(:filename=>fname,:dateformat=>"dd/mm/yyyy HH:MM"))
valgator = DateValgator(Dict(:dateinterval=>Dates.Hour(1))) # aggregator
valnner = DateValNNer(Dict(:dateinterval=>Dates.Hour(1))) # imputer
stfier = Statifier(Dict(:processmissing=>true)) # get statistics
mono = Monotonicer(Dict()) # normalize monotonic data
outnicer = Outliernicer(Dict(:dateinterval => Dates.Hour(1))) # normalize outliers
# Setup pipeline without imputation and run
mpipeline1 = @pipeline csvreader |> valgator |> stfier
respipe1 = fit_transform!(mpipeline1)
# Show statistics including blocks of missing data stats
@show respipe1
# Add imputation in the pipeline and rerun
mpipeline2 = @pipeline csvreader |> valgator |> valnner |> stfier
respipe2 = fit_transform!(mpipeline2)
# Show statistics including blocks of missing data stats
@show respipe2
# Replace the statistics stage with outlier normalization and rerun
mpipeline2 = @pipeline csvreader |> valgator |> valnner |> outnicer
respipe2 = fit_transform!(mpipeline2)
# Show the output with outliers normalized
@show respipe2
# Replace the last stage with monotonic normalization and rerun
mpipeline2 = @pipeline csvreader |> valgator |> valnner |> mono
respipe2 = fit_transform!(mpipeline2)
# Show the output with monotonic data normalized
@show respipe2
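The pipeline API also works for plain (non-time-series) learning tasks. The example below sets up feature selectors, a one-hot encoder, a scikit-learn PCA preprocessor, and a Julia random forest, and cross-validates a pipeline on the iris dataset: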
iris = getiris()                                   # load the iris dataset bundled with TSML
X = iris[:,1:4]                                    # features
Y = iris[:,5] |> Vector                            # labels
ohe = OneHotEncoder()                              # one-hot encode categorical features
rf = RandomForest()                                # Julia random forest learner
numf = NumFeatureSelector()                        # select numeric columns
catf = CatFeatureSelector()                        # select categorical columns
pca = SKPreprocessor(Dict(:preprocessor=>"PCA"))   # scikit-learn PCA preprocessor
ppp = @pipeline (catf |> ohe) + numf |> rf         # join encoded categorical and numeric features, then learn
crossvalidate(ppp,X,Y)                             # estimate performance by cross-validation
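In the pipeline expression, |> chains transformers left to right while + concatenates the outputs of its branches, so the one-hot-encoded categorical columns and the numeric columns are joined before reaching the random forest; crossvalidate then estimates the pipeline's performance by k-fold cross-validation. The pca preprocessor defined above can be inserted into the numeric branch in the same way (a sketch using only the elements already defined):
pcapipe = @pipeline (catf |> ohe) + (numf |> pca) |> rf
crossvalidate(pcapipe,X,Y)
Swapping rf for an SKLearner or CaretLearner wrapper follows the same pattern.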
We welcome contributions, feature requests, and suggestions. Please open an issue for any problem you encounter. If you want to contribute, please follow the guidelines on the contributors page.
Usage questions can be posted in: