Contextual and Drift Anomalies

UX design best practices for detecting and displaying Contextual Anomalies with AI methods

In the previous installment, we covered Point and Change Point Anomalies. Recall that these "amplitude" anomalies occur when the value of a variable of interest breaches some static or dynamic numerical threshold.

In this installment, we will tackle the more complex and interesting Contextual and Drift Anomalies. These all fall into a family of "shape" anomalies, that is, rather than analyzing the value of a variable, we analyze the shape the line makes and determine anomalous behavior based on that. Sometimes there is a way to describe complex, periodically occurring shapes mathematically, but often shape anomaly detection is done with AI/ML methods, which typically work quite well for the task. (Here's a nice in-depth discussion of different algorithms, should you wish to dig into some juicy examples: https://antonsruberts.github.io/anomaly-detection-web/)
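To make the "shape" idea concrete before we get to the use cases, here is a minimal sketch of one common ML approach: treat each window of the series as a single "shape" sample and let an off-the-shelf model such as scikit-learn's IsolationForest learn what normal windows look like. The window length, contamination rate, and synthetic data are our illustrative assumptions, not a recommendation of any particular product's algorithm:

```python
# Minimal sketch: shape-based anomaly detection by treating each
# one-day window of a time series as a single "shape" sample.
# Window length, contamination rate, and the data are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Synthetic daily-seasonal signal: 60 "days" of 24 hourly readings.
hours = np.arange(60 * 24)
series = 10 + 5 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.5, hours.size)
series[30 * 24 : 31 * 24] = 10  # day 30 loses its usual shape (flatline)

window = 24  # one seasonal cycle per window
shapes = series.reshape(-1, window)  # one non-overlapping window per "day"

model = IsolationForest(contamination=0.05, random_state=0).fit(shapes)
labels = model.predict(shapes)  # -1 marks windows whose shape looks anomalous

for day in np.where(labels == -1)[0]:
    print(f"Day {day}: window shape deviates from the learned norm")
```

Note that nothing here looks at raw amplitude thresholds; the model flags day 30 because its shape no longer matches the learned daily cycle.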

Recall Andrew Maguire’s time series (read: line graphs) Anomalies chart:

What use cases are addressed through detecting shape anomalies?

While there are many thousands of use cases, here are a few the authors are intimately familiar with:

  1. Unseasonable Traffic Spike: If a website gets the expected amount of traffic during working hours Monday through Friday, and suddenly the traffic volume no longer matches the expected behavior for that time of day or day of the week, that is a Contextual Anomaly related to hourly or weekly seasonality. Contextual Anomalies can signal security breaches: traffic at unexpected times, unexpected volumes of traffic from unexpected countries or IP addresses, admin traffic from unexpected locations, and so on.

  2. Unseasonable Traffic Drop: If your "Add to Cart" button breaks and expected sales suddenly stop, or a badly configured discount code reduces the price to $0, revenue drops precipitously. These bugs have a tangible impact on the business and can be quite costly.

  3. Slow Drift: Sometimes the trend is not a sudden change but a slow, gradual drift over time. These types of anomalies are devilishly difficult to detect. For example, your online traffic might keep increasing steadily over time. That is not a bad thing, but it is an anomaly that might warrant an action, like increasing the computing capacity of your web server. (A minimal drift-detection sketch follows this list.)

  4. Machine Vibration Anomalies: When a complex machine (engine, pump, turbine, airplane wing, factory equipment, etc.) is operating, it vibrates in a characteristic way. That vibration can be measured over time using a dynamometer (a device that measures force). A trace of vibration over time has very distinctive peaks and troughs, which can be used to detect deviation from the "healthy" or expected "shape" of vibration. For example, anyone who owns a car and a functional eardrum can tell you that a cold car on a sub-zero morning vibrates quite differently from a warmed-up and well-running vehicle. Likewise, an astute car owner can usually detect weird noises, knocking, and the like, which result from a change in the pattern and frequency of vibration and can be used to detect anomalies and predict whether a piece of equipment is about to break down. In some cases, Contextual Vibration Anomalies can even be used to predict the nature of the breakdown and estimate the time to failure.
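As promised in item 3, here is a minimal drift-detection sketch using a CUSUM-style running sum, one common technique for catching the slow, gradual drift that point thresholds miss. The baseline window, slack, and alarm threshold below are illustrative assumptions that would be tuned (or learned) per metric in practice:

```python
# Minimal CUSUM sketch for slow-drift detection. The baseline window,
# slack value, and alarm threshold are illustrative assumptions.
import numpy as np

def cusum_drift(series, baseline_n=100, slack=0.5, threshold=10.0):
    """Return the first index where an upward drift alarm fires, else None."""
    baseline = series[:baseline_n]
    mean, std = baseline.mean(), baseline.std()
    z = (series - mean) / std            # deviations in baseline units
    s = 0.0
    for i, value in enumerate(z):
        s = max(0.0, s + value - slack)  # accumulate only sustained excess
        if s > threshold:
            return i
    return None

rng = np.random.default_rng(7)
traffic = rng.normal(1000, 50, 500)
traffic[200:] += np.linspace(0, 150, 300)  # slow upward drift from index 200

print(cusum_drift(traffic))  # fires well after 200, once drift accumulates
```

The key design point: no single reading is alarming on its own; only the accumulated, sustained excess over the baseline trips the alarm.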

Keep in mind that while your industry might be different from those described above, learning the UI best practices for these kinds of anomalies will enable you to discover existing (or invent novel) Shape Anomaly use cases that apply to your specific product or industry.

Contextual Anomaly based on Seasonality

Below is an excellent example of seasonal shape anomaly detection from Jepto. It’s a clean and straightforward interface for configuring the anomaly algorithm:

The two main configurable parameters are:

  1. Time Period (Daily) – determines the time frame of periodicity that the algorithm will try to model

  2. Direction (Both) – determines whether an anomaly is generated when the value rises above the expectation, falls below it, or both.
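To make these two parameters concrete, here is a hypothetical configuration sketch (the names are ours, not Jepto's) showing how Time Period and Direction might gate whether a deviation becomes an anomaly:

```python
# Hypothetical config sketch (names are ours, not Jepto's) showing how
# Time Period and Direction might gate anomaly generation.
from dataclasses import dataclass
from typing import Literal

@dataclass
class AnomalyConfig:
    time_period: Literal["hourly", "daily", "weekly", "monthly"] = "daily"
    direction: Literal["above", "below", "both"] = "both"

def is_anomaly(actual: float, expected: float, tolerance: float,
               cfg: AnomalyConfig) -> bool:
    deviation = actual - expected
    if cfg.direction == "above":
        return deviation > tolerance
    if cfg.direction == "below":
        return deviation < -tolerance
    return abs(deviation) > tolerance  # "both"

cfg = AnomalyConfig(direction="both")
print(is_anomaly(actual=120, expected=100, tolerance=15, cfg=cfg))  # True
```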

Both of these should be pretty straightforward to understand if you have been reading our previous installments on this topic (See Forecasting with Line Graphs: a Definitive Guide for Serious UX for AI Practitioners, Part 3).

Note also another feature we mentioned previously: the preview graph of past time periods, which shows how many anomalies would have been generated from historical data if these settings had been in effect. This preview pane is very useful because it helps you avoid configuring the system incorrectly and creating too many false positives (normal readings misinterpreted as anomalies) or false negatives (real anomalies that go unreported).

Most of the interesting settings are hidden under the Advanced pane because, most of the time, most users will just trust the basic algorithm settings to do a good job: 

If you are working with contextual anomalies, you might want to read the entire documentation to understand these settings because they are quite sophisticated: https://www.jepto.com/help/anomaly-detection-settings 

The TL;DR: the most important setting is the Threshold of Positive Anomalies. Essentially, it determines what percentage of the data can be flagged as anomalous, which adjusts the sensitivity of the system. As the saying in design goes, "If everything is bold, nothing is," and the same applies to anomalies. (NOTE: This setting is also related to a concept called Processing Capacity: how many anomalies can a team really look at in a reasonable amount of time? More on this in later editions of the newsletter.)
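To make the percentage idea concrete, here is a minimal sketch (our own illustration, not Jepto's implementation) that caps flagged points at a fixed fraction of the highest anomaly scores; the synthetic scores and the 1% cap are assumptions:

```python
# Sketch: a "threshold of positive anomalies" as a cap on the fraction
# of points that may be flagged. Scores here are synthetic; in a real
# system they would come from the detection model.
import numpy as np

def flag_top_fraction(scores, max_fraction=0.01):
    """Flag only the top `max_fraction` of points by anomaly score."""
    cutoff = np.quantile(scores, 1 - max_fraction)
    return scores > cutoff

rng = np.random.default_rng(0)
scores = rng.exponential(1.0, 10_000)
flags = flag_top_fraction(scores, max_fraction=0.01)
print(flags.sum())  # ~100 of 10,000 points flagged: not everything is "bold"
```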

Opportunities for Improvement

While most folks would agree that this UI is clean, well-executed, straightforward, and provides a good amount of contextual help, the concepts the form manipulates are fairly sophisticated. There are many areas where this UI can be improved using best practices gleaned from UX for AI human factors studies. While your mileage may vary, here's a short list of problems together with suggested solutions to try in your next UX for AI project:

  1. Only a single periodicity can be chosen. In the current UI, the user must pick one period: daily, weekly, monthly, yearly, etc. However, we already discussed that most real-world periodicity is a combination of these: low demand at 2-6 am (daily pattern), the typical decreased demand on weekends (weekly pattern), and traffic spikes during the November/December holidays (yearly pattern). Picking only a single periodicity, such as "weekly," misses the opportunity to model these more complex cycles and identify important anomalies (such as a traffic spike at 3 am that would otherwise go unnoticed) while also generating many false positives during the holidays, when traffic is legitimately high. Rather than having the user pick one periodicity, the ideal UI would automatically suggest an anomaly schedule that best fits the available data: likely a combination of daily, weekly, and yearly trends (see the multi-seasonality sketch after this list).

  2. The algorithm is not self-balancing. This is important because during certain times, like seasonal high shopping demand, the data will be noisy and out of balance, generating many false positives. While there are settings in the UI to help ameliorate this, they need to be understood and then set manually. To remedy the problem, consider a setting that automatically adjusts based on an unusually high number of anomalies (see the self-balancing sketch after this list). If this has to be user-driven, it can be placed closer to the source of the problem, such as a simple question triggered from the alert itself, for example:

    [ ] Getting too many alerts on this metric? Check this box to adjust anomaly levels automatically based on recent data.

  3. The sensitivity drop-down is hidden. As we discussed above, the most important and useful setting is the Threshold of Positive Anomalies. Unfortunately, it is largely hidden from view under Advanced Settings, partially to protect the user from the hard-to-understand values in the drop-down.

  4. The sensitivity drop-down is confusing. The options in the sensitivity drop-down are somewhat confusing. According to the documentation, they are: 99th Percentile, 95th Percentile, Median of the Data Max Values, and Default Setting for Digital Marketing Data. We suspect most folks will have trouble understanding these options. Instead, consider a simple slider like this one:
    Number of Anomalies —----------/\-------------  [X] adjust automatically
    Here the checkbox is the default "auto" adaptive setting that fits the data, and the slider lets the user adjust the number of anomalies; it is much more intuitive than a percentile drop-down: drag right to increase sensitivity and detect more anomalies, drag left to decrease sensitivity and detect fewer (the self-balancing sketch after this list shows one way to map such a slider to a threshold).

  5. Add anomaly settings recommendations to the co-pilot. In addition to direct settings in the UI, co-pilot chat functionality can be used to provide guidance on the settings. This has the advantage of being able to adjust recommendations based on historical data and the number of anomalies coming in, while also being able to answer specific documentation questions and play out "what if" scenarios.
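To illustrate the multi-periodicity fix suggested in item 1, here is a minimal sketch using MSTL from statsmodels, which decomposes a series into several seasonal components at once; anomalies are then flagged on the residual. The synthetic data and the 3-sigma rule are our assumptions:

```python
# Sketch: modeling daily + weekly seasonality together with MSTL
# (statsmodels >= 0.13), then flagging points with extreme residuals.
# The synthetic data and 3-sigma rule are illustrative assumptions.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import MSTL

rng = np.random.default_rng(1)
hours = np.arange(24 * 7 * 8)  # eight weeks of hourly traffic
traffic = pd.Series(
    1000
    + 200 * np.sin(2 * np.pi * hours / 24)        # daily cycle
    + 100 * np.sin(2 * np.pi * hours / (24 * 7))  # weekly cycle
    + rng.normal(0, 20, hours.size)
)
traffic.iloc[500] += 400  # contextual anomaly: a spike at an "off" hour

result = MSTL(traffic, periods=(24, 24 * 7)).fit()
resid = result.resid
anomalies = np.where(np.abs(resid) > 3 * resid.std())[0]
print(anomalies)  # index 500 should appear
```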
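And here is a hypothetical sketch of the self-balancing sensitivity behavior described in items 2 and 4: the slider maps to a target anomaly fraction, and the "auto" mode re-fits the score threshold on recent data so a noisy season does not flood the user with alerts. All names, ranges, and the re-fit rule are our assumptions:

```python
# Hypothetical sketch of a self-balancing sensitivity control. Slider
# position maps to a target anomaly fraction; "auto" mode re-fits the
# threshold on recent data each period so a noisy season (e.g., the
# holidays) does not flood the user with alerts. All names, ranges,
# and the re-fit rule are our assumptions.
import numpy as np

def slider_to_fraction(position: float) -> float:
    """Map slider position (0 = fewest, 1 = most anomalies) to a fraction."""
    return 0.001 + position * (0.05 - 0.001)  # 0.1% .. 5% of points

def rebalance_threshold(recent_scores, target_fraction):
    """Re-fit the score threshold so recent data flags ~target_fraction."""
    return np.quantile(recent_scores, 1 - target_fraction)

rng = np.random.default_rng(3)
quiet_week = rng.exponential(1.0, 5000)
holiday_week = rng.exponential(2.0, 5000)  # noisier season, higher scores

target = slider_to_fraction(0.5)
for label, scores in [("quiet", quiet_week), ("holiday", holiday_week)]:
    threshold = rebalance_threshold(scores, target)
    print(label, (scores > threshold).mean())  # ~target in both seasons
```

The point of the design: the user expresses intent ("about this many alerts"), and the system keeps that promise across seasons instead of asking the user to reason about percentiles.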

The most important takeaway bears repeating: as we discussed in Forecasting with Line Graphs: a Definitive Guide for Serious UX for AI Practitioners, Part 3, typical periodicity for an e-commerce site is a combination of hourly (less traffic at night), weekly (less traffic on Saturday and Sunday), and yearly (more traffic right before the holidays and typical sale periods). AI/ML tools tend to do much better here than hand-tuned rules because they learn these combined seasonal patterns directly from historical data rather than relying on a single fixed period. Identifying the correct combination of periodicities and then using it to accurately identify contextual anomalies can be an excellent application of AI technology.

Make sure your solution scales

Finally, while a manual single-setting UI seems appropriate for configuring one or two anomalies, it doesn't scale readily. Recall that all of these settings will likely need to be applied differently across hundreds, if not thousands, of metrics. Automation and self-balancing will likely be the key. Carefully consider your use cases and make UX design recommendations appropriate to the scale of the task and the education level of your customer audience.
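To make "automation will likely be the key" concrete, here is a hypothetical sketch of batch-applying self-balancing defaults across thousands of metrics, with manual configuration as the exception rather than the rule; all names are our assumptions:

```python
# Hypothetical sketch: self-balancing defaults applied across thousands
# of metrics, with manual overrides as the exception. All names are
# illustrative assumptions.
from dataclasses import dataclass

@dataclass
class MetricConfig:
    auto_balance: bool = True      # self-tuning sensitivity by default
    target_fraction: float = 0.01  # ~1% of points flagged

def configure_all(metric_names, overrides=None):
    """Give every metric the auto defaults; manual configs are the exception."""
    overrides = overrides or {}
    return {name: overrides.get(name, MetricConfig()) for name in metric_names}

metrics = [f"service_{i}.latency" for i in range(2000)]
configs = configure_all(
    metrics, {"service_7.latency": MetricConfig(auto_balance=False)}
)
print(sum(cfg.auto_balance for cfg in configs.values()))  # 1999 stay on auto
```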

Happy Anomaly Detecting!

Greg & Daria
