# AutoMLPipeline

AutoMLPipeline is a package that makes it trivial to create complex ML pipeline structures using simple expressions. Thanks to Julia's macro programming features, the pipeline expressions and their elements can be symbolically processed and manipulated to automatically discover optimal structures for machine learning prediction and classification.
#### Load the AutoMLPipeline package and submodules
```julia
using AutoMLPipeline, AutoMLPipeline.FeatureSelectors, AutoMLPipeline.EnsembleMethods
using AutoMLPipeline.CrossValidators, AutoMLPipeline.DecisionTreeLearners, AutoMLPipeline.Pipelines
using AutoMLPipeline.BaseFilters, AutoMLPipeline.SKPreprocessors, AutoMLPipeline.Utils
```
#### Load some filters, transformers, and learners to be used in a pipeline
```julia
#### Decomposition
pca = SKPreprocessor("PCA"); fa = SKPreprocessor("FactorAnalysis"); ica = SKPreprocessor("FastICA")

#### Scalers
rb = SKPreprocessor("RobustScaler"); pt = SKPreprocessor("PowerTransformer")
norm = SKPreprocessor("Normalizer"); mx = SKPreprocessor("MinMaxScaler")

#### Categorical preprocessing
ohe = OneHotEncoder()

#### Column selectors
catf = CatFeatureSelector(); numf = NumFeatureSelector()

#### Learners
rf = SKLearner("RandomForestClassifier"); gb = SKLearner("GradientBoostingClassifier")
lsvc = SKLearner("LinearSVC"); svc = SKLearner("SVC")
mlp = SKLearner("MLPClassifier"); ada = SKLearner("AdaBoostClassifier")
jrf = RandomForest(); vote = VoteEnsemble(); stack = StackEnsemble(); best = BestLearner()
```
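The `SKPreprocessor` and `SKLearner` wrappers expose scikit-learn components, so hyperparameters can usually be passed when constructing them. The sketch below assumes an `:impl_args` dictionary convention for forwarding scikit-learn keyword arguments; this exact argument structure is an assumption and may differ between releases, so check the installed version's docstrings (e.g. `?SKLearner`):

```julia
# assumed constructor form: forward scikit-learn keyword arguments
# through an :impl_args dictionary (verify against your installed version)
rf100 = SKLearner("RandomForestClassifier", Dict(:impl_args => Dict(:n_estimators => 100)))
pca5  = SKPreprocessor("PCA", Dict(:impl_args => Dict(:n_components => 5)))
```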
#### Load data
```julia
using CSV
profbdata = CSV.read(joinpath(dirname(pathof(AutoMLPipeline)),"../data/profb.csv"))
X = profbdata[:,2:end]
Y = profbdata[:,1] |> Vector
head(x) = first(x,5)
head(profbdata)
```
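Note that recent releases of CSV.jl require an explicit sink argument to `CSV.read`, so with a current CSV.jl/DataFrames.jl setup the loading step would look like this (a minimal sketch, assuming CSV.jl ≥ 0.7):

```julia
using CSV, DataFrames

# newer CSV.jl versions need a sink type such as DataFrame
profbdata = CSV.read(joinpath(dirname(pathof(AutoMLPipeline)), "../data/profb.csv"), DataFrame)
X = profbdata[:, 2:end]
Y = profbdata[:, 1] |> Vector
head(x) = first(x, 5)
head(profbdata)
```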
#### Filter categories and hot-encode them
```julia
pohe = @pipeline catf |> ohe
tr = fit_transform!(pohe,X,Y)
head(tr)
```
#### Filter numeric features, compute ICA and PCA features, and combine both
```julia
pdec = @pipeline (numf |> pca) + (numf |> ica)
tr = fit_transform!(pdec,X,Y)
head(tr)
```
#### A pipeline expression example for classification using the Voting Ensemble learner
```julia
# take all categorical columns and hot-bit encode each,
# concatenate them to the numerical features,
# and feed them to the voting ensemble
pvote = @pipeline (catf |> ohe) + (numf) |> vote
pred = fit_transform!(pvote,X,Y)
sc = score(:accuracy,pred,Y)
println(sc)

### cross-validate
crossvalidate(pvote,X,Y,"accuracy_score",5)
```
#### Print the corresponding function call of the pipeline expression
```julia
@pipelinex (catf |> ohe) + (numf) |> vote
# outputs: :(Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote))
```
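Because `@pipelinex` reveals the constructor calls behind the expression, the same pipeline can also be assembled directly from `Pipeline` and `ComboPipeline`. This is only a sketch mirroring the expansion printed above; the macro form remains the recommended interface:

```julia
# build the voting-ensemble pipeline explicitly,
# following the expansion printed by @pipelinex
pvote2 = Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote)
pred2  = fit_transform!(pvote2, X, Y)
score(:accuracy, pred2, Y) |> println
```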
#### Another pipeline example using the RandomForest learner
```julia
# combine the pca, ica, and fa features of the numerical columns,
# concatenate them with the hot-bit encoded categorical features,
# and feed all of them to the random forest classifier
prf = @pipeline (numf |> rb |> pca) + (numf |> rb |> ica) + (catf |> ohe) + (numf |> rb |> fa) |> rf
pred = fit_transform!(prf,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(prf,X,Y,"accuracy_score",5)
```
#### A pipeline for the Linear Support Vector Classifier (LinearSVC)
```julia
plsvc = @pipeline ((numf |> rb |> pca)+(numf |> rb |> fa)+(numf |> rb |> ica)+(catf |> ohe)) |> lsvc
pred = fit_transform!(plsvc,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(plsvc,X,Y,"accuracy_score",5)
```
#### Extending AutoMLPipeline
```julia
# If you want to add your own filter/transformer/learner, it is trivial.
# Just take note that filters and transformers expect one input argument
# while learners expect input and output arguments in the fit! function.
# The transform! function always expects one input argument in all cases.
#
# First, import the abstract types and define your own mutable structure
# as a subtype of either Learner or Transformer. Also load the DataFrames package.
using DataFrames
import AutoMLPipeline.AbsTypes: fit!, transform!   # for function overloading
export fit!, transform!, MyFilter

# define your filter structure
mutable struct MyFilter <: Transformer
   # your variables here ...
   function MyFilter()
       # ...
   end
end

# define your fit! function.
# filters and transformers ignore the Y argument;
# learners process both X and Y arguments.
function fit!(fl::MyFilter, X::DataFrame, Y::Vector=Vector())
   # ...
end

# define your transform! function
function transform!(fl::MyFilter, X::DataFrame)::DataFrame
   # ...
end

# Note that the main data interchange format is a dataframe, so the transform!
# output should always be a dataframe, as should the input of fit! and transform!.
# This is necessary so that the pipeline passes the dataframe format consistently
# to its filters/transformers/learners. Once you have this filter, you can use
# it as part of a pipeline together with the other learners and filters.
```
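To make the template above concrete, here is a minimal hypothetical transformer that squares every numeric column and passes the rest through. `SquareNumericFilter` is not part of the package; it is only a sketch of the fit!/transform! contract described above, and minor details (such as how constructor arguments are stored) may differ from the package's built-in components:

```julia
using DataFrames
using AutoMLPipeline
import AutoMLPipeline.AbsTypes: fit!, transform!

# hypothetical example filter: squares every numeric column, leaves the rest untouched
mutable struct SquareNumericFilter <: Transformer
    name::String
    SquareNumericFilter() = new("square_numeric_filter")
end

# filters have nothing to learn from the data, so fit! ignores Y and does no work
function fit!(fl::SquareNumericFilter, X::DataFrame, Y::Vector=Vector())
    return nothing
end

# transform! receives a DataFrame and must return a DataFrame
function transform!(fl::SquareNumericFilter, X::DataFrame)::DataFrame
    res = copy(X)
    for c in names(res)
        if eltype(res[!, c]) <: Number
            res[!, c] = res[!, c] .^ 2
        end
    end
    return res
end

# once defined, it composes like any built-in element, e.g.:
# sq  = SquareNumericFilter()
# psq = @pipeline numf |> sq |> rf
```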
#### Feature Requests and Contributions
We welcome contributions, feature requests, and suggestions. Please open an issue in the repository for any problems you encounter. If you want to contribute, please follow the guidelines in the contributors page.
#### Help usage
Usage questions can be posted in: