Machine learning is a field of study and is concerned with algorithms that learn from examples.
Classification is a task that requires the use of machine learning algorithms that learn how to assign a class label to examples from the problem domain. An easy-to-understand example is classifying emails as “spam” or “not spam.”
There are many different types of classification tasks that you may encounter in machine learning and specialized approaches to modeling that may be used for each.
In this tutorial, you will discover different types of classification predictive modeling in machine learning.
After completing this tutorial, you will know:
- Classification predictive modeling involves assigning a class label to input examples.
- Binary classification refers to predicting one of two classes and multi-class classification involves predicting one of more than two classes.
- Multi-label classification involves predicting one or more classes for each example and imbalanced classification refers to classification tasks where the distribution of examples across the classes is not equal.
Let’s get started.
Types of Classification in Machine Learning
Photo by Rachael, some rights reserved.
Tutorial Overview
This tutorial is divided into five parts; they are:
- Classification Predictive Modeling
- Binary Classification
- Multi-Class Classification
- Multi-Label Classification
- Imbalanced Classification
Classification Predictive Modeling
In machine learning, classification refers to a predictive modeling problem where a class label is predicted for a given example of input data.
Examples of classification problems include:
- Given an example, classify if it is spam or not.
- Given a handwritten character, classify it as one of the known characters.
- Given recent user behavior, classify as churn or not.
From a modeling perspective, classification requires a training dataset with many examples of inputs and outputs from which to learn.
A model will use the training dataset and will calculate how to best map examples of input data to specific class labels. As such, the training dataset must be sufficiently representative of the problem and have many examples of each class label.
Class labels are often string values, e.g. “spam,” “not spam,” and must be mapped to numeric values before being provided to an algorithm for modeling. This is often referred to as label encoding, where a unique integer is assigned to each class label, e.g. “spam” = 0, “not spam” = 1.
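For instance, a minimal sketch of label encoding with scikit-learn's LabelEncoder (note that LabelEncoder assigns integers in sorted label order, so the exact mapping may differ from the example above):

# minimal sketch of label encoding with scikit-learn's LabelEncoder
from sklearn.preprocessing import LabelEncoder

labels = ["spam", "not spam", "spam", "not spam"]
encoder = LabelEncoder()
encoded = encoder.fit_transform(labels)
print(encoded)           # [1 0 1 0]; integers assigned in sorted label order
print(encoder.classes_)  # ['not spam' 'spam']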
There are many different types of algorithms that may be used for classification predictive modeling problems.
There is no good theory on how to map algorithms onto problem types; instead, it is generally recommended that a practitioner use controlled experiments and discover which algorithm and algorithm configuration results in the best performance for a given classification task.
Classification predictive modeling algorithms are evaluated based on their results. Classification accuracy is a popular metric used to evaluate the performance of a model based on the predicted class labels. Classification accuracy is not perfect but is a good starting point for many classification tasks.
Instead of class labels, some tasks may require the prediction of a probability of class membership for each example. This provides additional uncertainty in the prediction that an application or user can then interpret. A popular diagnostic for evaluating predicted probabilities is the ROC Curve.
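As a rough illustration, the snippet below scores hard class labels with accuracy and predicted probabilities with ROC AUC; the y_true, y_pred, and y_prob values are made-up placeholders for real model outputs:

# minimal sketch of evaluating labels vs. probabilities (placeholder values)
from sklearn.metrics import accuracy_score, roc_auc_score

y_true = [0, 0, 1, 1]           # ground-truth class labels
y_pred = [0, 1, 1, 1]           # predicted class labels
y_prob = [0.1, 0.6, 0.8, 0.9]   # predicted probabilities of class 1

print(accuracy_score(y_true, y_pred))  # fraction of correct labels: 0.75
print(roc_auc_score(y_true, y_prob))   # area under the ROC curve: 1.0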
There are perhaps four main types of classification tasks that you may encounter; they are:
- Binary Classification
- Multi-Class Classification
- Multi-Label Classification
- Imbalanced Classification
Let’s take a closer look at each in turn.
Binary Classification
Binary classification refers to those classification tasks that have two class labels.
Examples include:
- Email spam detection (spam or not).
- Churn prediction (churn or not).
- Conversion prediction (buy or not).
Typically, binary classification tasks involve one class that is the normal state and another class that is the abnormal state.
For example, “not spam” is the normal state and “spam” is the abnormal state. Similarly, for a task that involves a medical test, “cancer not detected” is the normal state and “cancer detected” is the abnormal state.
The class for the normal state is assigned the class label 0 and the class with the abnormal state is assigned the class label 1.
It is common to model a binary classification task with a model that predicts a Bernoulli probability distribution for each example.
The Bernoulli distribution is a discrete probability distribution that covers a case where an event will have a binary outcome as either a 0 or 1. For classification, this means that the model predicts a probability of an example belonging to class 1, or the abnormal state.
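As an illustration, a minimal sketch (assuming a logistic regression model and the same synthetic dataset generated below) of predicting this probability:

# minimal sketch: predicting Bernoulli probabilities with logistic regression
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=1000, centers=2, random_state=1)
model = LogisticRegression()
model.fit(X, y)
# each row is [P(class 0), P(class 1)] for one example
print(model.predict_proba(X[:3]))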
Popular algorithms that can be used for binary classification include:
- Logistic Regression
- k-Nearest Neighbors
- Decision Trees
- Support Vector Machine
- Naive Bayes
Some algorithms are specifically designed for binary classification and do not natively support more than two classes; examples include Logistic Regression and Support Vector Machines.
Next, let’s take a closer look at a dataset to develop an intuition for binary classification problems.
We can use the make_blobs() function to generate a synthetic binary classification dataset.
The example below generates a dataset with 1,000 examples that belong to one of two classes, each with two input features.
# example of binary classification task
from numpy import where
from collections import Counter
from sklearn.datasets import make_blobs
from matplotlib import pyplot
# define dataset
X, y = make_blobs(n_samples=1000, centers=2, random_state=1)
# summarize dataset shape
print(X.shape, y.shape)
# summarize observations by class label
counter = Counter(y)
print(counter)
# summarize first few examples
for i in range(10):
    print(X[i], y[i])
# plot the dataset and color the points by class label
for label, _ in counter.items():
    row_ix = where(y == label)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(label))
pyplot.legend()
pyplot.show()
Running the example first summarizes the created dataset showing the 1,000 examples divided into input (X) and output (y) elements.
The distribution of the class labels is then summarized, showing that instances belong to either class 0 or class 1 and that there are 500 examples in each class.
Next, the first 10 examples in the dataset are summarized, showing the input values are numeric and the target values are integers that represent the class membership.
(1000, 2) (1000,)
Counter({0: 500, 1: 500})
[-3.05837272 4.48825769] 0
[-8.60973869 -3.72714879] 1
[1.37129721 5.23107449] 0
[-9.33917563 -2.9544469 ] 1
[-11.57178593 -3.85275513] 1
[-11.42257341 -4.85679127] 1
[-10.44518578 -3.76476563] 1
[-10.44603561 -3.26065964] 1
[-0.61947075 3.48804983] 0
[-10.91115591 -4.5772537 ] 1
Finally, a scatter plot is created for the input variables in the dataset and the points are colored based on their class value.
We can see two distinct clusters that we might expect would be easy to discriminate.
Scatter Plot of Binary Classification Dataset
Multi-Class Classification
Multi-class classification refers to those classification tasks that have more than two class labels.
Examples include:
- Face classification.
- Plant species classification.
- Optical character recognition.
Unlike binary classification, multi-class classification does not have the notion of normal and abnormal outcomes. Instead, examples are classified as belonging to one among a range of known classes.
The number of class labels may be very large on some problems. For example, a model may predict a photo as belonging to one among thousands or tens of thousands of faces in a face recognition system.
Problems that involve predicting a sequence of words, such as text translation models, may also be considered a special type of multi-class classification. Predicting each word in the sequence is a multi-class classification problem where the vocabulary defines the set of possible classes, which may be tens or hundreds of thousands of words in size.
It is common to model a multi-class classification task with a model that predicts a Multinoulli probability distribution for each example.
The Multinoulli distribution is a discrete probability distribution that covers a case where an event will have a categorical outcome k in {1, 2, 3, …, K}. For classification, this means that the model predicts the probability of an example belonging to each class label.
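For example, a minimal sketch (using logistic regression, which fits a multinomial model for more than two classes in scikit-learn) of predicting a distribution over three classes:

# minimal sketch: predicting a Multinoulli distribution over three classes
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

X, y = make_blobs(n_samples=1000, centers=3, random_state=1)
model = LogisticRegression()
model.fit(X, y)
# one probability per class per example, summing to 1 across the row
print(model.predict_proba(X[:3]))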
Many algorithms used for binary classification can be used for multi-class classification.
Popular algorithms that can be used for multi-class classification include:
- k-Nearest Neighbors.
- Decision Trees.
- Naive Bayes.
- Random Forest.
- Gradient Boosting.
Algorithms that are designed for binary classification can be adapted for use for multi-class problems.
This involves using a strategy of fitting multiple binary classification models for each class vs. all other classes (called one-vs-rest) or one model for each pair of classes (called one-vs-one).
- One-vs-Rest : Fit one binary classification model for each class vs. all other classes.
- One-vs-One : Fit one binary classification model for each pair of classes.
Binary classification algorithms that can use these strategies for multi-class classification include the following, with a brief sketch of both strategies after the list:
- Logistic Regression.
- Support Vector Machine.
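A minimal sketch of both strategies, using scikit-learn's OneVsRestClassifier and OneVsOneClassifier wrappers around an SVM:

# minimal sketch of one-vs-rest and one-vs-one strategies
from sklearn.datasets import make_blobs
from sklearn.multiclass import OneVsRestClassifier, OneVsOneClassifier
from sklearn.svm import SVC

X, y = make_blobs(n_samples=1000, centers=3, random_state=1)
# one binary SVM per class vs. all other classes
ovr = OneVsRestClassifier(SVC()).fit(X, y)
# one binary SVM per pair of classes
ovo = OneVsOneClassifier(SVC()).fit(X, y)
print(ovr.predict(X[:5]))
print(ovo.predict(X[:5]))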
Next, let’s take a closer look at a dataset to develop an intuition for multi-class classification problems.
We can use the make_blobs() function to generate a synthetic multi-class classification dataset.
The example below generates a dataset with 1,000 examples that belong to one of three classes, each with two input features.
# example of multi-class classification task
from numpy import where
from collections import Counter
from sklearn.datasets import make_blobs
from matplotlib import pyplot
# define dataset
X, y = make_blobs(n_samples=1000, centers=3, random_state=1)
# summarize dataset shape
print(X.shape, y.shape)
# summarize observations by class label
counter = Counter(y)
print(counter)
# summarize first few examples
for i in range(10):
    print(X[i], y[i])
# plot the dataset and color the points by class label
for label, _ in counter.items():
    row_ix = where(y == label)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(label))
pyplot.legend()
pyplot.show()
Running the example first summarizes the created dataset showing the 1,000 examples divided into input (X) and output (y) elements.
The distribution of the class labels is then summarized, showing that instances belong to class 0, class 1, or class 2 and that there are approximately 333 examples in each class.
Next, the first 10 examples in the dataset are summarized showing the input values are numeric and the target values are integers that represent the class membership.
(1000, 2) (1000,)
Counter({0: 334, 1: 333, 2: 333})
[-3.05837272 4.48825769] 0
[-8.60973869 -3.72714879] 1
[1.37129721 5.23107449] 0
[-9.33917563 -2.9544469 ] 1
[-8.63895561 -8.05263469] 2
[-8.48974309 -9.05667083] 2
[-7.51235546 -7.96464519] 2
[-7.51320529 -7.46053919] 2
[-0.61947075 3.48804983] 0
[-10.91115591 -4.5772537 ] 1
Finally, a scatter plot is created for the input variables in the dataset and the points are colored based on their class value.
We can see three distinct clusters that we might expect would be easy to discriminate.
Scatter Plot of Multi-Class Classification Dataset
Multi-Label Classification
Multi-label classification refers to those classification tasks that have two or more class labels, where one or more class labels may be predicted for each example.
Consider the example of photo classification, where a given photo may have multiple objects in the scene and a model may predict the presence of multiple known objects in the photo, such as “bicycle,” “apple,” “person,” etc.
This is unlike binary classification and multi-class classification, where a single class label is predicted for each example.
It is common to model multi-label classification tasks with a model that predicts multiple outputs, with each output predicted as a Bernoulli probability distribution. This is essentially a model that makes multiple binary classification predictions for each example.
Classification algorithms used for binary or multi-class classification cannot be used directly for multi-label classification. Specialized versions of standard classification algorithms can be used, so-called multi-label versions of the algorithms, including:
- Multi-label Decision Trees
- Multi-label Random Forests
- Multi-label Gradient Boosting
Another approach is to use a separate classification algorithm to predict the labels for each class.
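A minimal sketch of this per-label approach (often called binary relevance), assuming scikit-learn's MultiOutputClassifier wrapper and the same synthetic dataset generated below:

# minimal sketch: one binary classifier per label via MultiOutputClassifier
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

X, y = make_multilabel_classification(n_samples=1000, n_features=2, n_classes=3, n_labels=2, random_state=1)
# fits an independent LogisticRegression model for each of the three labels
model = MultiOutputClassifier(LogisticRegression())
model.fit(X, y)
print(model.predict(X[:5]))  # one 0/1 prediction per label per example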
Next, let’s take a closer look at a dataset to develop an intuition for multi-label classification problems.
We can use the make_multilabel_classification() function to generate a synthetic multi-label classification dataset.
The example below generates a dataset with 1,000 examples, each with two input features. There are three classes, each of which may take on one of two labels (0 or 1).
# example of a multi-label classification task
from sklearn.datasets import make_multilabel_classification
# define dataset
X, y = make_multilabel_classification(n_samples=1000, n_features=2, n_classes=3, n_labels=2, random_state=1)
# summarize dataset shape
print(X.shape, y.shape)
# summarize first few examples
for i in range(10):
    print(X[i], y[i])
Running the example first summarizes the created dataset showing the 1,000 examples divided into input (X) and output (y) elements.
Next, the first 10 examples in the dataset are summarized showing the input values are numeric and the target values are integers that represent the class label membership.
(1000, 2) (1000, 3)
[18. 35.] [1 1 1]
[22. 33.] [1 1 1]
[26. 36.] [1 1 1]
[24. 28.] [1 1 0]
[23. 27.] [1 1 0]
[15. 31.] [0 1 0]
[20. 37.] [0 1 0]
[18. 31.] [1 1 1]
[29. 27.] [1 0 0]
[29. 28.] [1 1 0]
Imbalanced Classification
Imbalanced classification refers to classification tasks where the number of examples in each class is unequally distributed.
Typically, imbalanced classification tasks are binary classification tasks where the majority of examples in the training dataset belong to the normal class and a minority of examples belong to the abnormal class.
Examples include:
- Fraud detection.
- Outlier detection.
- Medical diagnostic tests.
These problems are modeled as binary classification tasks, although they may require specialized techniques.
Specialized techniques may be used to change the composition of samples in the training dataset by undersampling the majority class or oversampling the minority class.
Examples include:
- Random Undersampling.
- SMOTE Oversampling.
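For instance, a minimal sketch of oversampling with SMOTE, assuming the third-party imbalanced-learn package is installed:

# minimal sketch of oversampling the minority class with SMOTE;
# assumes the third-party imbalanced-learn package is installed
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.over_sampling import SMOTE

X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_clusters_per_class=1, weights=[0.99, 0.01], random_state=1)
print(Counter(y))  # severe imbalance, e.g. Counter({0: 983, 1: 17})
X_res, y_res = SMOTE().fit_resample(X, y)
print(Counter(y_res))  # classes balanced after oversampling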
Specialized modeling algorithms may be used that pay more attention to the minority class when fitting the model on the training dataset, such as cost-sensitive machine learning algorithms.
Examples include:
- Cost-sensitive Logistic Regression .
- Cost-sensitive Decision Trees.
- Cost-sensitive Support Vector Machines.
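For instance, several scikit-learn algorithms accept a class_weight argument; a minimal sketch:

# minimal sketch of cost-sensitive learning via class weighting
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=1000, weights=[0.99, 0.01], random_state=1)
# 'balanced' penalizes errors on each class in inverse proportion
# to its frequency, so minority-class mistakes cost more
model = LogisticRegression(class_weight="balanced")
model.fit(X, y)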
Finally, alternative performance metrics may be required as reporting the classification accuracy may be misleading.
Examples include:
- Precision.
- Recall.
- F-Measure.
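A minimal sketch of computing these metrics with scikit-learn; y_true and y_pred are made-up placeholders for real model outputs:

# minimal sketch of imbalance-aware metrics (placeholder values)
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]  # imbalanced ground truth
y_pred = [0, 0, 0, 0, 0, 0, 0, 1, 1, 0]  # predictions from some model

print(precision_score(y_true, y_pred))  # 0.5: one of two positive predictions is correct
print(recall_score(y_true, y_pred))     # 0.5: one of two actual positives found
print(f1_score(y_true, y_pred))         # harmonic mean of precision and recall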
Next, let’s take a closer look at a dataset to develop an intuition for imbalanced classification problems.
We can use the make_classification() function to generate a synthetic imbalanced binary classification dataset.
The example below generates a dataset with 1,000 examples that belong to one of two classes, each with two input features.
# example of an imbalanced binary classification task
from numpy import where
from collections import Counter
from sklearn.datasets import make_classification
from matplotlib import pyplot
# define dataset
X, y = make_classification(n_samples=1000, n_features=2, n_informative=2, n_redundant=0, n_classes=2, n_clusters_per_class=1, weights=[0.99, 0.01], random_state=1)
# summarize dataset shape
print(X.shape, y.shape)
# summarize observations by class label
counter = Counter(y)
print(counter)
# summarize first few examples
for i in range(10):
    print(X[i], y[i])
# plot the dataset and color the points by class label
for label, _ in counter.items():
    row_ix = where(y == label)[0]
    pyplot.scatter(X[row_ix, 0], X[row_ix, 1], label=str(label))
pyplot.legend()
pyplot.show()
Running the example first summarizes the created dataset showing the 1,000 examples divided into input (X) and output (y) elements.
The distribution of the class labels is then summarized, showing the severe class imbalance with about 980 examples belonging to class 0 and about 20 examples belonging to class 1.
Next, the first 10 examples in the dataset are summarized showing the input values are numeric and the target values are integers that represent the class membership. In this case, we can see that most examples belong to class 0, as we expect.
(1000, 2) (1000,)
Counter({0: 983, 1: 17})
[0.86924745 1.18613612] 0
[1.55110839 1.81032905] 0
[1.29361936 1.01094607] 0
[1.11988947 1.63251786] 0
[1.04235568 1.12152929] 0
[1.18114858 0.92397607] 0
[1.1365562 1.17652556] 0
[0.46291729 0.72924998] 0
[0.18315826 1.07141766] 0
[0.32411648 0.53515376] 0
Finally, a scatter plot is created for the input variables in the dataset and the points are colored based on their class value.
We can see one main cluster for examples that belong to class 0 and a few scattered examples that belong to class 1. The intuition is that datasets with this property of imbalanced class labels are more challenging to model.
Scatter Plot of Imbalanced Binary Classification Dataset
Further Reading
This section provides more resources on the topic if you are looking to go deeper.
- Statistical classification, Wikipedia.
- Binary classification, Wikipedia.
- Multiclass classification, Wikipedia.
- Multi-label classification, Wikipedia.
- Multiclass and multilabel algorithms, scikit-learn API.
Summary
In this tutorial, you discovered different types of classification predictive modeling in machine learning.
Specifically, you learned:
- Classification predictive modeling involves assigning a class label to input examples.
- Binary classification refers to predicting one of two classes and multi-class classification involves predicting one of more than two classes.
- Multi-label classification involves predicting one or more classes for each example and imbalanced classification refers to classification tasks where the distribution of examples across the classes is not equal.
Do you have any questions?
Ask your questions in the comments below and I will do my best to answer.