
# Line graph: a Definitive Guide for Serious UX for AI Practitioners (Part 2 of 3)


# Don’ts

Let’s continue last week’s installment, https://www.uxforai.com/p/line-chart-definitive-guide-part-1, with the ways things often go wrong with the line graph. Even seasoned practitioners sometimes make the following mistakes:

## 1. Using line graphs for aggregate variables.

Line graphs are ideally suited for continuous variables like temperature. The ideal variable is something like body temperature: even when you do not measure it, it’s still there. So by taking measurements every few minutes, you can reliably interpolate the values between the measurements (see point #3 below).

Trouble arises when designers use line graphs for **aggregate** variables such as sales values or stock trade volumes. Line graphs are not suitable for showing aggregate values; bar charts are much better. While temperature exists in the environment whether you measure it or not, a volume (trades, sales, etc.) is an **aggregate variable, measured over a chunk of time**, such as daily or hourly. So while a line chart is a good choice for displaying a stock price, a bar chart is a much better choice for displaying volume, because volume is measured on an aggregate daily/hourly/monthly basis:
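To make the distinction concrete, here is a minimal sketch using hypothetical data (matplotlib and NumPy assumed): the continuous price series is drawn as a line, while the per-day volume aggregate is drawn as bars.

```python
# Sketch with hypothetical data: price is continuous, so a line fits;
# volume is an aggregate measured per day, so bars fit.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(42)
days = np.arange(30)
price = 100 + np.cumsum(rng.normal(0, 1, 30))   # continuous-ish daily closes
volume = rng.integers(1_000, 50_000, 30)        # per-day aggregate counts

fig, (ax_price, ax_vol) = plt.subplots(2, 1, sharex=True)
ax_price.plot(days, price)    # line: values exist between the samples
ax_vol.bar(days, volume)      # bars: each value covers a whole day
ax_price.set_ylabel("Price ($)")
ax_vol.set_ylabel("Volume (shares)")
ax_vol.set_xlabel("Trading day")
fig.savefig("price_volume.png")
```

The shared X-axis keeps the two panels comparable, which is how most trading UIs pair price and volume.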

A common example of an aggregate metric in AI systems is the number of errors per unit of time, often sloppily displayed as a line graph. A line graph is not the right choice here, as a number of errors is a count of the total occurrences of an event during a unit of time: one hour, one minute, one day, etc. You might get zero errors one minute and 100 errors the next. There is very little innate connection from one point on the X-axis to the next. This means it’s best expressed as a bar chart, not a line graph. Note that the weird up-down “wiggle” in the data is an “artifact” – a false pattern introduced solely because the counts are being aggregated at 30-second intervals. That line would be smooth had the counts been aggregated at 1-minute intervals:

Hint: often, the giveaway for an incorrect use of a line chart is the word **count**: did you count whole discrete numbers to arrive at the number you are showing? If you were to split the x-axis into a different set of buckets, would the line change shape? If so, that should be your clue not to use a line chart – reach for a bar chart instead!
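You can test the “would re-bucketing change the shape?” hint directly. This sketch (hypothetical error timestamps, NumPy assumed) buckets the same events two ways; the totals match, but the shape of each series differs, which marks the metric as a count.

```python
# Sketch: the same hypothetical error timestamps, bucketed two ways.
# If re-bucketing changes the shape, the metric is a count -- use a bar chart.
import numpy as np

rng = np.random.default_rng(0)
# hypothetical error timestamps (seconds) over a 10-minute window
timestamps = np.sort(rng.uniform(0, 600, 500))

counts_30s, _ = np.histogram(timestamps, bins=np.arange(0, 601, 30))
counts_60s, _ = np.histogram(timestamps, bins=np.arange(0, 601, 60))

# Same total events either way, but the peaks and troughs land in
# different places -- the "wiggle" is an artifact of the bucket size.
print(counts_30s.sum(), counts_60s.sum())
```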

## 2. Using line graphs to show frequency distributions.

Frequency distributions are another bad choice for line graphs. A frequency distribution is basically a table that displays the number of occurrences or frequency of each unique value in a dataset. It organizes data into classes or intervals and shows how many data points fall into each class.

For example, suppose you have a dataset of test scores for a class, and you create a frequency distribution to show how many students scored within different score ranges (e.g., 0-10, 11-20, etc.). The frequency distribution would display the count of students in each score range. Frequency distributions make a bad choice for line graphs because changing the buckets would change the frequencies. For example, that same class “graded on a curve” could be plotted with a grade frequency distribution like the number of students who got 0-60 (F), 60-70 (D), 70-72 (C), 72-85 (B), and 85-100 (A). Plotting those numbers on a line graph is nonsensical, as the buckets are deliberately chosen around the number of students. The bar chart below is fine, but can you imagine it as a line graph? It just would not work:
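Here is a small sketch of that grading example (the scores are hypothetical; the band edges are the curved cut-offs from the text). The counts depend entirely on where the band edges sit, which is exactly why a line through them tells no story.

```python
# Sketch: bucketing hypothetical test scores into the curved grade bands
# named in the text. The frequencies exist per-band, not along a continuum.
import numpy as np

scores = np.array([55, 62, 68, 71, 72, 74, 78, 80, 83, 85, 88, 91, 95])
bands = [0, 60, 70, 72, 85, 100]     # F, D, C, B, A cut-offs
labels = ["F", "D", "C", "B", "A"]

counts, _ = np.histogram(scores, bins=bands)
freq = dict(zip(labels, counts.tolist()))
print(freq)   # per-band counts, suited to a bar chart
```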

Probability graphs are another kind of frequency distribution commonly used in AI systems, and they likewise make a bad choice for line graphs. A P90 graph like the one shown below essentially says that a certain percentage of the values falls within a particular X-axis **interval**. For example, here is the probability that a transaction like “Add to Cart” would take a certain amount of time. A “P90” plot means that 90% of those transactions will fall somewhere between the Min and Max of the X-axis on the graph. Again, try to imagine this being plotted using a line graph:

Using a line chart to show this would imply a continuous interval, but in real life there are often no data points in a specific bucket. As a result, a line graph is misleading and would create weird, nonsensical peaks and troughs.
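For reference, a P90 figure is a single summary statistic computed over the whole distribution, not a curve to trace. A quick sketch (hypothetical latency data, NumPy assumed):

```python
# Sketch: a P90 latency figure summarizes a distribution of samples.
# It is computed over bucketed data, not read off a continuous line.
import numpy as np

rng = np.random.default_rng(7)
# hypothetical "Add to Cart" latencies in milliseconds
latencies = rng.lognormal(mean=5.0, sigma=0.5, size=10_000)

p90 = np.percentile(latencies, 90)
# roughly 90% of transactions complete at or below this value
print(f"P90 latency: {p90:.0f} ms")
```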

Here’s an example that brings to mind the maxim:

“Two points plot a line, three a curve, and four plot an elephant”

Here’s more on Von Neumann's elephant: https://en.wikipedia.org/wiki/Von_Neumann%27s_elephant

This is why I do not recommend using line charts for probability distributions. Again, just as we did with the aggregate variables, if you hear the word “count,” reach for a bar chart!

## 3. Over-extrapolating when no measurement is detected.

Over-extrapolation is a pitfall that happens often in the IoT or monitoring space, where a certain metric like CPU Busy Percent was not collected due to a bad sensor or a communication disruption. Often in those cases, designers will simply connect the beginning and the end of the line spanning the missing period. Simply connecting across multiple missing data points is not recommended: extrapolating across more than 2-3 missing data points will result in false data reporting. If many data points are missing, simply put a break in the line – do not extrapolate, as this leads to a false story of what has happened to the system you’ve been observing.

To see why this is important, consider this slightly silly example: a doctor measures a patient’s temperature every day while the patient is healthy. The patient then gets COVID and his fever spikes. The doctor is afraid of catching COVID, so she does not enter the patient’s room for one week. After one week, the patient tests negative, so the doctor resumes measuring temperature:

Source: Greg Nudelman

Most people would simply extrapolate, filling in the missing data, which completely misses the period of sickness with high fever – a fever that would have been detected had the doctor been brave enough to measure it. Do not simply extrapolate line graphs without first confirming that the data is correct. There is no rule that says a line must be continuous! You can avoid incorrect reporting by simply leaving the unknown data points blank.
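Putting a break in the line is easy in practice: matplotlib, for one, stops drawing wherever the data is NaN. A minimal sketch of the temperature example above (hypothetical readings):

```python
# Sketch: matplotlib breaks a line wherever the data is NaN -- an easy way
# to show "no measurement taken" instead of extrapolating across the gap.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np

days = np.arange(14)
temp = np.array([98.6, 98.4, 98.7, 98.5,           # healthy, measured daily
                 np.nan, np.nan, np.nan, np.nan,   # fever week: not measured
                 np.nan, np.nan, np.nan,
                 98.9, 98.6, 98.5])                # measured again

fig, ax = plt.subplots()
ax.plot(days, temp, marker="o")   # the line simply stops at the gap
ax.set_xlabel("Day")
ax.set_ylabel("Temperature (°F)")
fig.savefig("temperature_gap.png")
```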

## 4. Forming a line graph by connecting unrelated data points.

The canonical example of this “error in judgment” comes from the classic book *Information Dashboard Design* by Stephen Few. The example includes connecting the sales readings from four company divisions, N, S, E, and W. Such a connection is nonsensical. However, you’d be surprised just how often even seasoned designers make this mistake. This is especially common when multiple readings from similar but separate devices are obtained. These should **never** be connected and formed into a line graph:

## 5. Padding Max and Min on the Y axis.

Often designers fail to provide requirements for a Max and Min on the line graph, so Min and Max are generated automatically by the plotting library. That is a missed opportunity! Min and Max are important as they help the readers of the graph understand what they are reading. Ideally, Max and Min should be determined by the dataset itself. If there are multiple graphs, they should be the global Max and Min of the entire set (e.g., if the highest temperature detected is 104F and the lowest is 55F, use those values as the min and max values for the Y-axis.) **There is no need at all to pad the values!** In fact, best practice is to let the user read the minimum and maximum values right off the chart – a wonderful boon, as described by the incomparable Edward Tufte in his book *The Visual Display of Quantitative Information:*

This is an excellent demonstration of the Min and Max shown directly on both axes. However, for the reasons already mentioned, I would have preferred that this be a bar chart instead.

One exception to this rule is percent. Those values should *almost* always be 0-100% on the Y-axis, as the whole point of the percent Y-axis is to normalize the data. Most often, showing any interval other than 0-100% on the Y-axis would lead to over-zooming, which is discussed in the next section. Meanwhile, here’s a percent graph that tells an excellent story and provides a spectacular **exception** to the 0-100% rule:

I encourage you to dig deeply into the topic, so here’s that link again: https://digitalblog.ons.gov.uk/2016/06/27/does-the-axis-have-to-start-at-zero-part-1-line-charts/
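In code, deriving shared Y-axis limits from the data itself takes one line per bound. A sketch using the 104F/55F example from above (hypothetical sensor series, matplotlib assumed):

```python
# Sketch: shared Y-axis limits taken from the global min/max of the dataset
# (the 104F/55F example) instead of letting each plot auto-pad its range.
import matplotlib
matplotlib.use("Agg")  # headless rendering
import matplotlib.pyplot as plt
import numpy as np

# hypothetical temperature series for two sensors
sensor_a = np.array([55.0, 61.2, 70.5, 88.3, 104.0, 97.1])
sensor_b = np.array([58.4, 64.0, 72.2, 81.9, 95.5, 90.3])

lo = min(sensor_a.min(), sensor_b.min())   # global min of the entire set
hi = max(sensor_a.max(), sensor_b.max())   # global max of the entire set

fig, axes = plt.subplots(1, 2, sharey=True)
for ax, series in zip(axes, [sensor_a, sensor_b]):
    ax.plot(series)
    ax.set_ylim(lo, hi)   # no padding: min/max readable right off the chart
fig.savefig("shared_axes.png")
```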

## 6. Over-zooming.

Whenever you are zooming into a section of the graph, avoid resizing the axis range, as this might give the wrong impression of the magnitude of change. In general, this type of “over-zooming” is one of the most common pitfalls of interactive graphs. It “makes a mountain out of a molehill” by over-emphasizing random noise as giant variations. Ask lots of questions about the user goals, and be mindful of this aspect as you select Min and Max values for your graph. Sometimes it pays to take into account a longer time frame, like seven days or 30 days, to avoid over-zooming and amplifying random noise or seasonality (more on this in Part 3 of 3!), thereby making any patterns in the data harder to see and understand:
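One way to see why the longer time frame helps: widening the window damps the day-to-day noise that an over-zoomed axis would blow up into false “mountains.” A sketch with a hypothetical noisy series and a 7-day rolling mean (NumPy assumed):

```python
# Sketch: a 7-day rolling mean suppresses daily noise, so the slow real
# trend is visible without over-zooming the Y-axis onto the wiggle.
import numpy as np

rng = np.random.default_rng(3)
days = 60
signal = np.linspace(50, 55, days)            # slow real trend
noisy = signal + rng.normal(0, 2.0, days)     # daily noise on top

window = 7
smoothed = np.convolve(noisy, np.ones(window) / window, mode="valid")

# the smoothed series varies far less than the raw one
print(round(float(noisy.std()), 2), round(float(smoothed.std()), 2))
```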

## 7. Too many charts on the same graph.

A typical 2-D graph has exactly two axes and can comfortably accommodate 5-10 charts. Hence it’s quite common, for example, to plot “Top 10” lists, like the “Top 10 Highest Temperature Cities,” as line graphs.

For most displays, anything more than 10 graphs will need a great deal of custom interaction design to become legible, will take a long time to render, and will generally be difficult to create and use… Unfortunately, this type of overloaded display design with overlapping lines in a rainbow of colors is quite common in today’s systems:

One useful alternative, suggested on the Clearly and Simply blog, is to make the line chart interactive, as shown below:

You can read more about it here: https://www.clearlyandsimply.com/clearly_and_simply/2020/06/how-to-handle-line-charts-with-many-data-series.html

However, I would really question whether such a display is needed in the first place. So consider the problem carefully – do you really need to plot all these lines on the same graph? What exactly is the purpose of such a display? Can you stagger the lines, or use area graphs to display them? Maybe summarize the information?

## 8. Choosing the incorrect aggregate value or level of smoothing/compression.

When tracking stocks, what daily value should you track: Min, Max, Average? When finance professionals do this, they track the close (end-of-business) price, as they find it to be the most significant. Whenever you aggregate any variable that changes day to day, watch your resolution: how many points are in the graph? For a one-year stock price graph, you will need about 250-260 data points to show all the closing prices for the stock market’s working days. A 5-year graph would need about 1,250 data points, which is still very doable for a single stock. We can check via mouse-over that the display below really is plotting all 1,250 data points as daily values:

But what if you wanted to show multiple stock charts together to examine trends? Choosing your aggregate value and level of compression is especially important for rendering multiple graphs on the same screen. For example, when rendering sparklines, there are very likely to be multiple graphs on the same screen. (We discussed sparklines in the last installment: https://www.uxforai.com/p/line-chart-definitive-guide-part-1) Depending on your data retrieval algorithm, you may only have 20, 100, or even 250 data points for each sparkline graph, and if you have 100 sparklines displayed, that’s about 100 × 100 = 10,000 data points. This can be expensive, slow, or even altogether impractical to retrieve, meaning your design may not even be feasible.
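When the retrieval budget is tight, the usual compromise is to downsample each series to a fixed point budget before rendering. A sketch (hypothetical data; the `downsample` helper is illustrative, not a library function):

```python
# Sketch: stride-based downsampling to fit a sparkline point budget --
# e.g., 1,250 daily closes reduced to at most 100 points per sparkline.
import numpy as np

daily_closes = np.linspace(10, 50, 1_250)   # hypothetical 5 years of closes

def downsample(series: np.ndarray, budget: int) -> np.ndarray:
    """Keep at most `budget` evenly spaced points, including the endpoints."""
    if len(series) <= budget:
        return series
    idx = np.linspace(0, len(series) - 1, budget).round().astype(int)
    return series[idx]

spark = downsample(daily_closes, 100)
print(len(spark))   # 100 points instead of 1,250
```

Keeping the first and last points matters for sparklines, since readers anchor on where the series starts and ends.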

For example, the chart below implies that it can plot all of the US stocks using not just one but **three** beautiful sparklines. That looks like about 7 + 7 + 20, or 34 data points per row, or 34 × 3,600 = 122,400 data points. Doable? Ehhh… maybe. If only a few people are hitting the system. And you don’t mind waiting for the page to load. And you don’t care about AWS processing and retrieval fees. But would you really need this kind of display? Can you reasonably compare 3,600 lines of graphs? Or would you really choose to track a certain smaller number of rows, like 100 stocks (also not small, at 100 × 34 = 3,400 data points)?

## In Closing…

Keep in mind that whatever algorithm you choose, daily, hourly, or minute-by-minute fluctuations (maximums and minimums) may be routinely suppressed in favor of showing a single aggregate data point per day. What value should you show, and what level of compression should you choose? **All that will depend on your use case.** When choosing your level of compression and what value to display, interview your developers, business stakeholders, and users. Ask pointed, specific questions; document your answers using your mockups, and confirm your understanding with your customers. Then you can be sure your line graphs are going to really shine!

Happy charting!

**Greg Nudelman & Daria Kempka**
