Anomaly Detection — Product of Data Refinery



"Data is the new Oil." The quote is credited to Clive Humby , who is believed to have used it first in 2006. 

It is widely accepted today that any data "refinery" will emit insights in the form of metrics. At eBay, applications emit thousands of metrics from domains such as Checkouts, Search, Payments, Risk, Trust, and Shipping, to name a few. Maintaining a robust marketplace means systems must be monitored in real time. Essentially, this means tracking the behavior of metrics over time (short and long term) to generate quick, actionable feedback to systems.

However, even with domain focus, the volume of metrics is so high that we need to put systems and processes in place that help call our attention to anomalous events in metrics.

Introduction

In this blog post, we give a basic introduction to predictive analytics for metrics in the context of model-based anomaly detection, and we present a software design for detecting anomalies in metrics.

More specifically, we explore time series forecasting as an effective way to achieve next-step prediction for metrics. Further, this article focuses on metrics that may be represented as a sequence of scalar observations over discrete, regular time periods, a.k.a. a univariate time series.

Typically, anomaly detection involves taking historical metric data into consideration, training a model on that data to describe its pattern as a function of historical data points (applied in the form of hyperparameters for the model), and making a prediction. The prediction usually takes the form of a band between a lower value and an upper value. On observing a new value in the time series, we check whether the actual value falls outside the predicted range and, if so, classify it as anomalous. Below is a pictorial depiction of this process.

[Figure: pictorial depiction of the anomaly detection process]

A time series comprises three components: trend, seasonality, and noise. A time series may be described additively as

y(t) = Trend + Seasonality + Noise

Alternatively, it may be described multiplicatively as

y(t) = Trend * Seasonality * Noise
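To make the decomposition concrete, here is a minimal sketch using statsmodels' seasonal_decompose on a hypothetical daily metric (the series and its weekly period are illustrative assumptions):

```python
import pandas as pd
from statsmodels.tsa.seasonal import seasonal_decompose

# Hypothetical daily metric with weekly (period=7) seasonality.
idx = pd.date_range("2020-01-01", periods=90, freq="D")
series = pd.Series(range(100, 190), index=idx)

# Additive: y(t) = Trend + Seasonality + Noise
additive = seasonal_decompose(series, model="additive", period=7)
# Multiplicative: y(t) = Trend * Seasonality * Noise (requires positive values)
multiplicative = seasonal_decompose(series, model="multiplicative", period=7)

# Each result exposes the three components described above.
print(additive.trend.dropna().head())
print(additive.seasonal.head())
print(additive.resid.dropna().head())
```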

Time series forecasting is a vast subject that is continually undergoing research, with new models and methods being created all the time. In our case, we start with a popular model called ARIMA (Auto Regressive Integrated Moving Average), sometimes also referred to as the Box-Jenkins model. Most of our metrics exhibit seasonal behavior, and in some cases multi-seasonal behavior. As a general rule, we pick a seasonality close to our prediction frequency, since that is the most influential factor in our seasonal prediction. For example, we may pick weekly seasonality; for intra-day predictions, we pick 24-hour seasonality, assuming that the data behaves similarly at the same time on any two consecutive days.

Next, we have a choice to introduce an external influence to the data in the form of another time series. We refer to this external influence as an exogenous variable. Put together, the model is known as SARIMAX (Seasonal Auto-Regressive Integrated Moving Average with eXogenous variable support).
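As a sketch of what this looks like in code, the statsmodels SARIMAX class accepts the exogenous series directly (the data and the orders below are illustrative assumptions, not our production values):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

# Hypothetical daily metric plus an aligned exogenous series.
idx = pd.date_range("2020-01-01", periods=120, freq="D")
rng = np.random.default_rng(0)
exog = pd.Series(rng.normal(size=120), index=idx, name="external_influence")
y = pd.Series(100 + 5 * np.sin(np.arange(120) * 2 * np.pi / 7), index=idx) + 0.5 * exog

model = SARIMAX(
    y,
    exog=exog,                    # the external influence
    order=(1, 0, 1),              # non-seasonal (p, d, q)
    seasonal_order=(1, 1, 0, 7),  # seasonal (P, D, Q, S); weekly here
)
result = model.fit(disp=False)
print(result.summary())
```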

Let's have a look at the mathematical representation for ARIMA.

  • AR is a representation of a data point in terms of time-lagged versions of itself, up to p lags:
    • y_t = φ_1·y_(t−1) + φ_2·y_(t−2) + φ_3·y_(t−3) + … + φ_p·y_(t−p)
  • I represents the order of differencing applied to achieve stationarity:
    • Δy_t = y_t − y_(t−d)
  • MA is a representation of past errors that helps carry forward lessons learnt from the past, up to q lags:
    • y_t = Θ_1·ε_(t−1) + Θ_2·ε_(t−2) + Θ_3·ε_(t−3) + … + Θ_q·ε_(t−q)
  • Together:
    • Δ^d y_t = Σ_(i=1..p) φ_i·Δ^d y_(t−i) + Σ_(j=1..q) Θ_j·ε_(t−j)
  • SARIMAX is simply a product of ARIMA with non-seasonal hyperparameters and ARIMA with seasonal hyperparameters. For simplicity, we will only provide an abstract representation of SARIMAX below:
    • ARIMA(p, d, q) (non-seasonal) × (P, D, Q)_S (seasonal)
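To make the differencing ("I") step concrete, here is a small illustrative helper (the function name and the random-walk test series are our assumptions) that applies successive first differences until the Augmented Dickey-Fuller test suggests stationarity:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

def pick_d(series: pd.Series, max_d: int = 2, alpha: float = 0.05) -> int:
    """Return the smallest differencing order that makes the series look
    stationary per the ADF test (a heuristic sketch, not a full procedure)."""
    diffed = series
    for d in range(max_d + 1):
        p_value = adfuller(diffed.dropna())[1]  # index 1 is the ADF p-value
        if p_value < alpha:  # unit-root null rejected: treat as stationary
            return d
        diffed = diffed.diff()
    return max_d

# A random walk typically needs one difference (d=1).
walk = pd.Series(np.cumsum(np.random.default_rng(1).normal(size=200)))
print(pick_d(walk))
```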

Hyperparameter selection and tuning

The hyperparameters of ARIMA are p, d, and q. The seasonal hyperparameters are denoted by P, D, and Q; S refers to the seasonal period of the time series. In addition to the above, the model needs one more parameter describing the trend of the time series. Usually, the trend is observed to be constant or to vary linearly.

Before we can fit the model to the series, we must choose the right hyperparameters. We can select their values in one of three ways:

  1. Plot the ACF (Auto Correlation Function) and PACF (Partial Auto Correlation Function) and visually determine the values.
  2. Use Auto-ARIMA, a version of ARIMA that automatically determines the best hyperparameters based on AIC (Akaike Information Criterion) and BIC (Bayesian Information Criterion) values.
  3. Grid search the hyperparameters by evaluating model performance against a predetermined error criterion to be minimized. The criterion can be one of RMSE (Root Mean Square Error), MAE (Mean Absolute Error), MAPE (Mean Absolute Percentage Error), MASE (Mean Absolute Scaled Error), ME (Mean Error), or MPE (Mean Percentage Error); their definitions are sketched below.
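For reference, the error criteria named in option 3 have simple closed forms; below is a small sketch of their textbook definitions (the function names are ours, not a library API):

```python
import numpy as np

def forecast_errors(actual: np.ndarray, predicted: np.ndarray) -> dict:
    """Textbook point-forecast error criteria (assumes no zero actuals
    for the percentage-based measures)."""
    err = actual - predicted
    pct = err / actual
    return {
        "ME":   err.mean(),                  # Mean Error
        "RMSE": np.sqrt((err ** 2).mean()),  # Root Mean Square Error
        "MAE":  np.abs(err).mean(),          # Mean Absolute Error
        "MPE":  100 * pct.mean(),            # Mean Percentage Error
        "MAPE": 100 * np.abs(pct).mean(),    # Mean Absolute Percentage Error
    }

def mase(actual, predicted, train):
    """Mean Absolute Scaled Error: MAE scaled by the in-sample MAE of a
    naive one-step (random-walk) forecast on the training data."""
    scale = np.abs(np.diff(train)).mean()
    return np.abs(actual - predicted).mean() / scale
```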

In our implementation, we selected option 3 to avoid erroneous and subjective interpretations of ACF and PACF plots. The statistic whose error we aimed to minimize is the mean, which is also the value we output as part of the predicted values, also referred to as forecasts. (Here is a great article describing grid searching for a SARIMA model: https://machinelearningmastery.com/how-to-grid-search-sarima-model-hyperparameters-for-time-series-forecasting-in-python/ .)
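A condensed sketch of such a grid search is below; it fits each candidate configuration on a training split and scores its forecasts over a hold-out window by RMSE of the predicted mean (the grid bounds and split sizes are illustrative assumptions):

```python
import itertools
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

def grid_search(series, seasonal_period=7, n_test=14, top_n=3):
    """Return the top_n (configuration, RMSE) pairs over a small search space."""
    train, test = series[:-n_test], np.asarray(series[-n_test:], dtype=float)
    results = []
    for order in itertools.product(range(3), range(2), range(3)):
        for P, D, Q in itertools.product(range(3), range(2), range(2)):
            for trend in ("n", "c"):
                try:
                    fitted = SARIMAX(
                        train, order=order,
                        seasonal_order=(P, D, Q, seasonal_period),
                        trend=trend,
                    ).fit(disp=False)
                    pred = np.asarray(fitted.get_forecast(steps=n_test).predicted_mean)
                    rmse = np.sqrt(np.mean((test - pred) ** 2))
                    results.append(((order, (P, D, Q, seasonal_period), trend), rmse))
                except Exception:
                    continue  # some configurations fail to converge; skip them
    return sorted(results, key=lambda r: r[1])[:top_n]
```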

In addition, we needed to decide the confidence interval at which the predicted mean is output. In our case, we chose a 95% CI.

What is an anomaly?

Once the model is deployed, it outputs the forecasted values as a band, bound by the upper CI and lower CI. The forecasted values are compared with the actual values as they arrive in real time. A data point is said to be an anomaly if the actual value is observed to be outside the CI band.
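A minimal sketch of this check, assuming `fitted` is a SARIMAX results object trained on the history so far (with a pandas index) and `actual` is the newly observed value:

```python
# One-step-ahead forecast with a 95% confidence band.
forecast = fitted.get_forecast(steps=1)
predicted_mean = forecast.predicted_mean.iloc[0]
lower, upper = forecast.conf_int(alpha=0.05).iloc[0]  # alpha=0.05 is a 95% CI

is_anomaly = not (lower <= actual <= upper)
print(f"predicted={predicted_mean:.1f} band=({lower:.1f}, {upper:.1f}) "
      f"anomaly={is_anomaly}")
```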

Example

Let's consider a real-world metric and apply what we've learned so far. The metric in our example represents counts at daily intervals. Below is a snapshot of the last 15 days of data; note, however, that the more historical data we have, the more accurate the prediction.

[Figure: snapshot of the last 15 days of the daily count metric]

Decomposing the time series into its constituents of Trend, Seasonality and Noise, we can see that the time series has weekly seasonality, which is useful while grid searching for hyper parameters.

[Figure: decomposition of the series into trend, seasonality, and noise, showing weekly seasonality]

Feeding the time series into the grid-search routine to estimate hyperparameters, we get the following output: the top 3 models out of thousands of combinations in the search space, ranked by lowest RMSE of the mean. The best-performing model has

p, d, q = 1, 0, 1

P, D, Q, S = 2, 1, 0, 7

Trend = 'n' (no explicit trend defined)

("[(1, 0, 1), (2, 1, 0, 7), 'n']", 9221.353200844014)

("[(2, 0, 0), (2, 1, 0, 7), 'n']", 9280.010864427197)

("[(1, 1, 0), (2, 1, 0, 7), 'n']", 9280.970349459463)

Over time, as the series accumulates more history, we will need to re-run the grid search to tune the hyperparameters and accommodate new data behavior.

Anomaly detection process flow

The entire process flow is described as a sequence of steps shown in the following diagram.

[Figure: anomaly detection process flow]

The following line chart compares how actual values of the metric have moved over time relative to the forecasted range, defined by the upper and lower CI. We can see that for most of the duration the actual values stay within the forecasted range, which means our model has been forecasting accurately. Toward the end of the timeline, we do see potential anomalies, which were investigated and classified as true or false anomalies accordingly.

[Figure: actual metric values vs. the forecasted upper/lower CI band over time]
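Such a chart can be produced along these lines (a sketch, assuming a DataFrame `df` with "actual", "lower", and "upper" columns indexed by timestamp):

```python
import matplotlib.pyplot as plt

fig, ax = plt.subplots(figsize=(10, 4))
ax.fill_between(df.index, df["lower"], df["upper"], alpha=0.3,
                label="forecast band (95% CI)")
ax.plot(df.index, df["actual"], label="actual")

# Flag points that fall outside the band as potential anomalies.
outside = (df["actual"] < df["lower"]) | (df["actual"] > df["upper"])
ax.scatter(df.index[outside], df["actual"][outside], color="red",
           label="potential anomaly")
ax.legend()
plt.show()
```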

Design

Now that all the details of the model are in place, let's explore the engineering design for deploying it in production. We'll also look at how the design and deployment scale to more metrics. Below is a simple design showing the logical components at play.

[Figure: logical components of the design]

  1. The four key blocks represent the Data Source, Data Processing, Data Sink and Data Visualization.
  2. Data Sources may be comprised of a variety of databases that ideally support fast aggregation queries. For intra-day anomaly detection, we expect to have access to real-time data.
  3. Data Processing involves the following functional blocks:
    • The Scheduler issues batch triggers to drive the Query Manager. Batch triggers are configured for the frequencies at which we need to predict metric values, e.g., daily, hourly, or three-hourly.
    • The Query Manager manages the selection and execution of predefined aggregation queries specific to metrics.
    • A good software design supports a degree of configurability, so the hyperparameters and alerts are externalized into a configuration file (a sketch of one possible shape follows this list).
    • The Model Executor reads the configuration for metrics and generates the model and the next-step forecast for the metrics.
    • Forecast Comparison and Alerts compares the forecasts with the actual values, and if the actual value of the metric is outside the confidence interval band of the predictions, a notification is sent based on the alert configuration.
  4. The predicted values and the actual values are useful for understanding how the model has been performing; therefore, we store the data back into a Data Sink, in this case Elasticsearch.
  5. Data Visualization is a key component, with numerous dashboards charting raw data against anomaly data. This allows us to understand how actual values are moving in reference to forecasted values.
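As one hypothetical shape for the externalized configuration mentioned above (illustrative only; not eBay's actual format):

```python
# Per-metric configuration: query, schedule, model hyperparameters, alerting.
METRICS_CONFIG = {
    "checkout_counts_daily": {
        "query": "checkout_counts_daily.sql",  # predefined aggregation query
        "schedule": "daily",                   # batch trigger frequency
        "hyperparameters": {
            "order": (1, 0, 1),                # (p, d, q)
            "seasonal_order": (2, 1, 0, 7),    # (P, D, Q, S)
            "trend": "n",
        },
        "alert": {
            "confidence": 0.95,                # CI band used for the check
            "notify": ["oncall-team@example.com"],
        },
    },
}
```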

Physically, the components of the Data Processing block reside inside a pod in a Tess cluster. (Tess.io is eBay's unified cloud infrastructure based on Kubernetes.) Multiple pods are then deployed on the cluster, with each pod containing Docker containers for the various services. We recommend that a pod host metrics pertaining to a specific domain, such as Risk or Payments, thereby containing failures at the pod level. For fault tolerance, we recommend deploying pods into several regions/data centers in an Active-Passive configuration.

Physical architecture

[Figure: physical architecture on the Tess (Kubernetes) cluster]

Handling anomalies

A false anomaly is essentially a false positive: even though the actual value fell outside the forecasted CI range, it can be attributed to a known, valid cause. In the case of a false anomaly, we let the model learn from this behavior so that the same false positive is accounted for in future forecasts. In other words, we continue to use the actual value in subsequent forecasting.

A true anomaly is essentially unexpected, and we need to investigate further to attribute the right cause to it. In this case, we prevent the model from learning this behavior, since we successfully detected an anomaly: we replace the actual value with the predicted value, ensuring that a similar deviation will continue to be flagged as anomalous in future predictions.
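The feedback rule for both cases reduces to a small sketch (the names are illustrative):

```python
def update_history(history: list, actual: float, predicted: float,
                   is_true_anomaly: bool) -> list:
    """Append the next observation to the training history. True anomalies
    are masked with the prediction so the model does not learn them; false
    anomalies keep the observed value so the model does learn them."""
    history.append(predicted if is_true_anomaly else actual)
    return history
```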

Conclusion

Time series forecasting is a complex subject, with new research and methodologies appearing regularly. This article presented ARIMA as one of the more basic models in practice, one that allows for quickly generating returns. It is worth noting that ARIMA is technically a statistical model as opposed to a pure ML model. Additionally, metrics with low dimensionality and low cardinality are best suited to the ARIMA model. For metrics with high dimensionality and high cardinality of dimensions, anomaly detection scales better with a model closer to pure machine learning, deployed as a Recurrent Neural Network.

References

  1. https://otexts.com/fpp2/seasonal-arima.html
  2. https://machinelearningmastery.com/how-to-grid-search-sarima-model-hyperparameters-for-time-series-forecasting-in-python/
  3. https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html
  4. https://people.duke.edu/~rnau/411sdif.html 
  5. https://kubernetes.io/docs/tutorials/#basics
  6. https://docs.docker.com/
