AutoMLPipeline.jl – machine learning tooling from IBM


AutoMLPipeline

AutoMLPipeline is a package that makes it trivial to create complex ML pipeline structures using simple expressions. Using Julia's macro programming features, it becomes trivial to symbolically process and manipulate pipeline expressions and their elements to automatically discover optimal structures for machine learning prediction and classification.

Load the AutoMLPipeline package and submodules

using AutoMLPipeline, AutoMLPipeline.FeatureSelectors, AutoMLPipeline.EnsembleMethods
using AutoMLPipeline.CrossValidators, AutoMLPipeline.DecisionTreeLearners, AutoMLPipeline.Pipelines
using AutoMLPipeline.BaseFilters, AutoMLPipeline.SKPreprocessors, AutoMLPipeline.Utils

Load some of the filters, transformers, and learners to be used in a pipeline

#### Decomposition
pca = SKPreprocessor("PCA"); fa = SKPreprocessor("FactorAnalysis"); ica = SKPreprocessor("FastICA")

#### Scaler 
rb = SKPreprocessor("RobustScaler"); pt = SKPreprocessor("PowerTransformer"); 
norm = SKPreprocessor("Normalizer"); mx = SKPreprocessor("MinMaxScaler")

#### Categorical preprocessing
ohe = OneHotEncoder()

#### Column selector
catf = CatFeatureSelector(); 
numf = NumFeatureSelector()

#### Learners
rf = SKLearner("RandomForestClassifier"); 
gb = SKLearner("GradientBoostingClassifier")
lsvc = SKLearner("LinearSVC");     svc = SKLearner("SVC")
mlp = SKLearner("MLPClassifier");  ada = SKLearner("AdaBoostClassifier")
jrf = RandomForest();              vote = VoteEnsemble();
stack = StackEnsemble();           best = BestLearner();

Load data

using CSV, DataFrames
# recent versions of CSV.jl require an explicit sink (DataFrame);
# older versions accept CSV.read(path) on its own
profbdata = CSV.read(joinpath(dirname(pathof(AutoMLPipeline)),"../data/profb.csv"), DataFrame)
X = profbdata[:,2:end]
Y = profbdata[:,1] |> Vector;
head(x) = first(x,5)   # small helper to preview the first 5 rows
head(profbdata)

Filter the categorical columns and one-hot encode them

pohe = @pipeline catf |> ohe
tr = fit_transform!(pohe,X,Y)
head(tr)

Filter the numeric features, compute ICA and PCA features, and concatenate the results

pdec = @pipeline (numf |> pca) + (numf |> ica)
tr = fit_transform!(pdec,X,Y)
head(tr)

A pipeline expression example for classification using the Voting Ensemble learner

# take all categorical columns and one-hot encode each,
# concatenate them to the numerical features,
# and feed them to the voting ensemble
pvote = @pipeline  (catf |> ohe) + (numf) |> vote
pred = fit_transform!(pvote,X,Y)
sc=score(:accuracy,pred,Y)
println(sc)
### cross-validate
crossvalidate(pvote,X,Y,"accuracy_score",5)

Print the corresponding function call of the pipeline expression

@pipelinex (catf |> ohe) + (numf) |> vote
# outputs: :(Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote))
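
Since @pipeline is only syntactic sugar, the same structure can be built directly from the constructors shown in the expansion above. A minimal sketch, assuming the Pipeline and ComboPipeline constructors from AutoMLPipeline.Pipelines accept their elements as arguments, exactly as the printed expansion suggests:

# build the voting-ensemble pipeline without the macro,
# mirroring the expansion printed by @pipelinex
pvote2 = Pipeline(ComboPipeline(Pipeline(catf, ohe), numf), vote)
pred2 = fit_transform!(pvote2, X, Y)
score(:accuracy, pred2, Y) |> println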

Another pipeline example using the RandomForest learner

# combine the PCA, ICA, and FA transforms of the numerical columns,
# concatenate them with the one-hot encoded categorical features,
# and feed everything to the random forest classifier
prf = @pipeline  (numf |> rb |> pca) + (numf |> rb |> ica) + (catf |> ohe) + (numf |> rb |> fa) |> rf
pred = fit_transform!(prf,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(prf,X,Y,"accuracy_score",5)

A pipeline using the Linear Support Vector Classifier (LinearSVC)

plsvc = @pipeline ((numf |> rb |> pca)+(numf |> rb |> fa)+(numf |> rb |> ica)+(catf |> ohe )) |> lsvc
pred = fit_transform!(plsvc,X,Y)
score(:accuracy,pred,Y) |> println
crossvalidate(plsvc,X,Y,"accuracy_score",5)
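
Because pipeline expressions are ordinary Julia values, candidate learners can also be compared programmatically, which is the basis of the structure search mentioned in the introduction. A minimal sketch, assuming the learners defined earlier; the feature pipeline and the 5-fold setting are only illustrative:

# evaluate several learners on the same feature pipeline
for (name, lr) in [("rf", rf), ("gb", gb), ("lsvc", lsvc), ("ada", ada)]
    pipe = @pipeline (numf |> rb |> pca) + (catf |> ohe) |> lr
    println(name)
    crossvalidate(pipe, X, Y, "accuracy_score", 5)
end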

Extending AutoMLPipeline

# If you want to add your own filter/transformer/learner, it is trivial.
# Just take note that filters and transformers expect one input argument
# in fit!, while learners expect both input and output arguments.
# The transform! function always expects a single input argument.

# First, import the abstract types and define your own mutable structure
# as a subtype of either Learner or Transformer. Also load the DataFrames package.

using DataFrames
import AutoMLPipeline.AbsTypes: fit!, transform!  #for function overloading 

export fit!, transform!, MyFilter

# define your filter structure
mutable struct MyFilter <: Transformer
  variables here....
  function MyFilter()
      ....
  end
end

# define your fit! function.
# filters and transformers ignore the Y argument;
# learners process both X and Y arguments.
function fit!(fl::MyFilter, X::DataFrame, Y::Vector=Vector())
     ....
end

# define your transform! function
function transform!(fl::MyFilter, X::DataFrame)::DataFrame
     ....
end

# Note that the main data interchange format is a DataFrame: the transform!
# output should always be a DataFrame, as should the X input to fit! and transform!.
# This is necessary so that the pipeline can pass data consistently between
# its filters/transformers/learners. Once you have this filter, you can use
# it as part of a pipeline together with the other learners and filters.
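
To make the template above concrete, here is a hedged sketch of a trivial custom transformer that keeps only the numeric columns of its input. The struct name and its field are illustrative and not part of the package:

# illustrative custom transformer: keep only numeric columns
mutable struct NumOnlyFilter <: Transformer
  cols::Vector{Symbol}
  NumOnlyFilter() = new(Symbol[])
end

# fit! records which columns hold numbers; Y is ignored by transformers
function fit!(fl::NumOnlyFilter, X::DataFrame, Y::Vector=Vector())
  fl.cols = [c for c in propertynames(X) if eltype(X[!, c]) <: Union{Missing,Number}]
  return fl
end

# transform! returns a DataFrame with only the recorded columns
function transform!(fl::NumOnlyFilter, X::DataFrame)::DataFrame
  return X[:, fl.cols]
end

# it can then be used like any other pipeline element, e.g.
# myf = NumOnlyFilter(); pnum = @pipeline myf |> rb |> pca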

Feature Requests and Contributions

We welcome contributions, feature requests, and suggestions. Please open an issue for any problems you encounter. If you want to contribute, please follow the guidelines on the contributors page.

Help usage

Usage questions can be posted in:

