The Definitive Guide to Tableau’s Analytics Pane

Benchmarking, Modeling, Forecasting, and More

In this live webinar on the Analytics pane in Tableau, join Ethan as he does a technical deep dive into every aspect of the Analytics pane. Tune in to learn how to enhance your data visualizations with advanced analytics by simply dragging and dropping them onto the view!

Hi. This is Ethan Lang with Playfair+. And in today’s video, I’ll be covering the definitive guide to the Analytics Pane in Tableau. So let’s go ahead and just jump into Tableau.

I’ll be starting by kicking us off with a brief overview. Typically, when we build something in Tableau, we might land on a data visualization that looks like this, the tried-and-true line chart, which is a great visualization for looking at historical data and trends over time.

But how can we build in more value and more context for our stakeholders that are going to be analyzing this data or using this dashboard, by just adding in some simple context for them? A great way to do this is through the Analytics Pane.

Now, even if you’re unfamiliar with the Analytics Pane, this view probably looks familiar. We have our Data pane here, our data sources across the top. We have our measures and dimensions, parameters. Maybe we have some sets in here, and so on. This is probably very familiar.

But another view that might not be as familiar to some of us is the Analytics Pane. So here at the top left of our authoring interface, we can see the word Analytics here. And we can actually click that and toggle to the Analytics Pane in Tableau.

Now you can see, it’s kind of broken up into three distinct sections here. We have our Summarize section. This is where we can start adding in some reference lines, doing some analysis with box plots, and so on.

Then we have our Modeling section. And this is where we’re going to come into some really powerful advanced analytics and statistical models that we can bring directly to our view, very simply, by just dragging and dropping in some of these options onto our canvas.

Then we have our Custom section. And this is probably my favorite of the three here in the Analytics Pane, because it gives us the ability to really dive in and build out a custom view and a custom analysis, using models and calculated fields that we can bring into Tableau to build some very advanced calculations, which I’ll preview with you at the end of today’s webinar.

So to get started, I’m just going to cover each of these one by one, and show you some best practices, some use cases for how to incorporate these within our view. But really, the ultimate goal here is to add in some more advanced analysis to give your users context, so they can approach their data analysis and make decisions faster and with more information.

So we’ll start here with the constant line. This is a very simple one you can drag into the view. It’s in the Summarize section. And if you drag it into the view, you’ll see we’re presented with this option at the top left of our canvas. We can add it to the table, which will add that reference line to both the y- and x-axes. But you can also see we have distinct pills beneath that, where we can add it to individual axes if we wanted.

For now, I’m just going to drop this onto SUM of Sales. And you can see, we’re presented with this new section up here, where we can type in a constant value. A constant line really is just going to draw a line at whatever value we specify within this text box. So as a demonstration, I’m just going to type in 60,000 here and hit Enter. And you can see, it adds that new constant line to our view.

You can see previously, I had already kind of pre-generated one here at 75,000. And again, this is just a great way to add in some more context. Let’s say we got that ad hoc request from one of our stakeholders, and they wanted us to pull all the months that were over 75,000. Now, not only can you send over the data with that information, but you can also send over a very clean visualization to support that data.

We can also start setting benchmarks this way. So let’s say the business has identified a sales goal of $75,000 per month. Now we can incorporate that benchmark directly within our view, and we can start tracking against it and seeing where we fell above or below that constant value.

We can even apply some conditional formatting within our view to change the color of the marks, to signify whether each one is below or above that value.

Now that’s the constant line, and it’s pretty self-explanatory. This next one is the average line. So you can see, if I drag this into the view, we’re presented with these options across the top left of our canvas. And this is the scope the average is going to be calculated at.

So it’s going to calculate across the entire table and show us a table average. If we had separate panes, it could calculate that average across each pane, or even across individual cells. Calculating the average of every mark in the view would be redundant for this particular example, but we could do it.

So I’ve added the average line to the table. And you can see here, it’s calculated our average at 47,858. Again, this is just another great way to very easily incorporate a summary statistic and see the average of all the data within your view. You can also start doing some conditional formatting if you wanted, for marks above and below that average. I’ll show you an example of that soon. And we can start making some assumptions about our data as well.

Now the next one is a little bit more advanced. It’s our median with quartiles. So we can see here, if I add that to the view, we’re presented with the same scope options, the level of detail we want it calculated at, as we were with the average line. But this is our median with quartiles.

And you can see, it draws our median line here, which was calculated at 39,803. We’re also given this distribution band, with our lower quartile and our upper quartile. This gives us the ability to look at all the data that falls within that band, and we can start making some more assumptions, using this very simple technique to add more context for our end users.

Now the next one is our box plot. And this is where we start getting, again, just a little bit more advanced. We started with a constant line at a constant value, moved on to calculating an average, then to medians with quartiles. And now we’re at the box plot.

A box plot is a great way to start understanding your data a little bit more, diving deeper, and starting to ask yourself questions about the data. It will also reveal very quickly whether we have outliers in our data, or any problems we need to address before we get into some of the advanced modeling we’re going to tackle here in a minute.

So if we look at our box plot view here, you can see I’ve created this using region, with the sum of sales by subcategory on detail. What this does is draw a median line for each of those regions, and show us an upper and lower hinge. Everything within this box, the box of the box-and-whisker plot, is 50% of the data for that particular region. So within this box is 50% of the sum of sales by subcategory.

And then if we look at these outer hinges here, this is our upper and lower whisker. And you can see, anything that falls outside of those would actually be considered an outlier. So very quickly, we can start making some quick assumptions about our data here. We can see that east and west sales are a lot more volatile. Those regions have a wider range across the whiskers.

So you can see that represented in the data, especially in comparison to the south and central regions, which seem to have a little less volatility in their sum of sales. And then you can see, across all the regions, we have this one outlier here. So maybe I need to analyze that data point a little more closely.

So I can simply right-click on that data point and view the data, and try to figure out exactly what’s going on there. Maybe we have an issue with our ETL process that’s causing those outliers. Maybe there was just a really big sale that took place for that subcategory in that particular region. But we can start asking ourselves that question and diving in deeper.
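As an aside for readers who want the mechanics, the hinges and whiskers Tableau draws can be sketched outside of Tableau. Here’s a minimal Python sketch of standard box-plot statistics, assuming the common 1.5 × IQR whisker rule (Tableau’s default) and a simple split-the-halves quartile method; Tableau’s exact quartile interpolation may differ:

```python
# Box-plot statistics: median, quartiles, 1.5 * IQR whiskers, outliers.
# A sketch of the logic a box-and-whisker plot applies, not Tableau's internals.
from statistics import median

def box_plot_stats(values):
    s = sorted(values)
    n = len(s)
    med = median(s)
    lower_q = median(s[: n // 2])          # lower quartile (lower hinge)
    upper_q = median(s[(n + 1) // 2 :])    # upper quartile (upper hinge)
    iqr = upper_q - lower_q
    lo_whisker = lower_q - 1.5 * iqr
    hi_whisker = upper_q + 1.5 * iqr
    outliers = [v for v in s if v < lo_whisker or v > hi_whisker]
    return med, lower_q, upper_q, outliers

# Made-up sales values with one extreme point:
med, lq, uq, out = box_plot_stats([10, 12, 13, 14, 15, 16, 18, 95])
print(med, lq, uq, out)   # 95 falls well beyond the upper whisker
```

With these sample values, 95 lands outside the upper whisker, just like the lone dot beyond the whiskers in the view.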

Another cool trick I like to do with box plots is make it a jittered box plot, and there’s a very simple technique for it. If you double-click on an empty spot on the Columns shelf here, it’s going to open up a blank pill. This is called working in the flow. And there’s a hidden function within Tableau, one you won’t find in the function list when you’re creating a calculated field, called RANDOM. It’s going to assign a random value across the columns.

And if I hit Enter, you’ll see that it jitters all of our dots within the view. I can increase their size so you can see it a little better. But now I can distinctly hover over each dot, whereas before, we might have had some that fell on top of one another.

So this is just a quick way, again, to add some value to your view. So it allows the user to hover over and see those subcategories underneath your box plot. So just a really quick and easy thing that you can do for box plots, to add a little bit more value.

Now, the next part, and probably the easiest within the Summarize section, is Totals. There are many ways we can implement totals in a view, and primarily, we would implement them on a crosstab.

So if you’re working with a finance group or some accountants, they love to see their numbers in a crosstab format. They also love to see subtotals and totals across different levels of detail.

So if you add in this totals view, you can see, I’m presented with this in the top left of our canvas. And it’s saying I can add column grand totals. I can add row grand totals, subtotals, depending on the level of detail of our data, as well as how it’s pivoted.

So you can see here, if I drag this on right now, it’s showing that I can add it to the grand totals section. If I pivot this data, however, and bring it over here, you’ll see that we’re presented with only row grand totals as our option.

And again, Totals is pretty self-explanatory, and there are several different ways you can add it to the view. I find that using the Analytics Pane, versus trying to dive in through your table layout, is just a little bit easier. It also gives us a little more control over what we’re adding and where: subtotals versus grand totals, and so on.

Now that covers the Summarize section. And I know all that kind of seemed a little self-explanatory, probably something that we’re all pretty used to. And now I’m going to start talking about the Modeling section. This is where we can start doing some very advanced calculations and some very advanced assumptions and modeling within our data, right here natively within Tableau.

So Tableau has some very advanced statistical models that you can bring in, by simply dragging it onto the view and dropping it in within our canvas. So let’s cover some of those right now.

So if we look at the next one here, this is our average with 95% confidence interval. So you can see, just like our Summarize section, it’s going to draw that average line. But now it’s giving us a 95% confidence interval and showing us the upper and lower bounds of that average. This is a great way to start understanding your data and the distribution of your data.

So if I’m looking at this, trying to figure out exactly what it’s telling me, I know from this confidence interval that if I were to keep collecting sales data, month over month, all else the same, the average would fall within this band 95% of the time.

If I were to resample this data, if this was not sales data but some other population, again, that average line would fall within that span 95% of the time.
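For readers curious about the arithmetic, a confidence interval around a mean can be sketched in a few lines of Python. This is an illustration of the concept using the normal approximation, not Tableau’s exact internals (which may use a t-distribution, especially for small samples); the sales values are invented:

```python
# 95% confidence interval around a sample mean, the statistic behind an
# "Average with 95% CI" band. Normal approximation; z = 1.96 for ~95%.
from statistics import mean, stdev
from math import sqrt

def mean_ci(values, z=1.96):
    m = mean(values)
    se = stdev(values) / sqrt(len(values))   # standard error of the mean
    return m - z * se, m + z * se

lo, hi = mean_ci([42, 55, 48, 61, 39, 58, 44, 52, 47, 60])
print(lo, hi)   # the band is centered on the sample mean
```

The band gets narrower as the sample grows, which matches the intuition that more months of data pin down the average more tightly.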

The next thing that we have here is a 95% confidence interval around the median. Again, this is a great way to start understanding the distribution of your data. You can see, it has our median line here. We have our upper and lower bounds around that 95% confidence interval for the median.

And just like before, with the average line, if we were to resample this population of data, the median would fall within that band 95% of the time.

Now this one might seem a little bit out there, as far as an analysis. Maybe your end users might not understand this. But I can tell you when I approach brand new data, this is the way that I would start looking at the data and understanding it. And a great way to do that is to combine both the average with 95% confidence interval and the median with 95% confidence interval into a single view. So that’s what I’ve done here in our next section.

You can see I’ve added both of those models onto the view at the same time. And this is where we can really start making some very high-level, powerful assumptions about our data, and the distribution of the data specifically.

So you can see here, I have our median and our average, and you can see that those lines deviate from one another. If our data were perfectly symmetric, the average and median would be equal to one another. You can see here that that’s not the case, which is what we’ll find most of the time in the real world.
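A tiny numeric illustration of that point: with symmetric data the mean and median coincide, and a single extreme value drags the mean away while leaving the median put. The values here are made up for demonstration:

```python
# Symmetric data: mean == median. One outlier pulls the mean, not the median.
from statistics import mean, median

symmetric = [10, 20, 30, 40, 50]
skewed = [10, 20, 30, 40, 500]   # same data, one extreme value

print(mean(symmetric), median(symmetric))   # equal for symmetric data
print(mean(skewed), median(skewed))         # the outlier drags the mean to 120; median stays 30
```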

But what I would really like to draw your attention to is the bands themselves. So you can see, my average and my median 95% confidence interval, they actually intersect here, which is this darker color banding.

This is actually a good sign for me. We might look at this data and consider, maybe we have an outlier out here that we can take a look at and consider normalizing, or maybe we need to focus in and figure out why that’s looking like that.

But just doing the simple analysis, I can see right off the bat that holistically, throughout the data, I probably don’t have that many outliers that I really need to focus in on. This data looks pretty clean, just looking at this high level.

And we can tell that by this intersection. If, for instance, my average with 95% confidence interval was up here, my banding was up here, and my banding for the median did not intersect it, I probably have some very skewed data or some outliers in there that I really need to dive in and look at.

So that might be a little bit of cause for alarm when coming into new data. So again, this is a very simple technique that you can apply to really start understanding your data and making some high-level assumptions about it, before we get into more advanced statistical analysis.

The next thing I’ll cover is our trend line. And note how they’ve worded this here. It’s in our Modeling section, and they call it the trend line. I feel as if that kind of sells it short.

For the sake of an easy example, I’m going to set this trend line to linear. And you can see, when I drag trend line onto the view, we’re presented with these options. And like I said, the label “trend line” is, I feel, selling it short.

These are actually very powerful statistical analyses that you can incorporate very easily. And I’ll show you some of what’s underneath the hood here.

But if I add a linear trend line, you can see it draws this dotted line across the view. And if we just look at this visually, it makes a lot of sense. Before we even added this trend line, I could look at this line chart and tell that over time, we’ve started to trend up in sales.

But this linear trend line that we’ve added to the view, this model, is actually kind of proving that statistically. And we can tell by just hovering over. We’re presented with some very basic summary statistics here.

So in our tooltip, we can see we have a calculation. This is actually a regression model, the regression equation, right here, with our SUM of Sales and Month of Order Date. And it’s essentially saying that as our Month of Order Date increases over time, so do our sales. And that’s why this linear line moves upward from left to right.

We’re also presented with some very basic summary statistics, like our R-squared and our p-value. R-squared is a number from 0 to 1 that expresses how much of the variation in our data is explained by the model. We can see it’s at 0.25, meaning about 25% of the variation in sales is explained by the month of the order date. Everything else, we kind of have to make some assumptions about.

And then we have our p-value. Traditionally, with the p-value, we’re looking at 0.05, and we would call anything under that statistically significant. But really, that’s just a convention that statisticians have put out there. Depending on your analysis and what you’re trying to accomplish with your model, you may want to raise or lower that threshold, depending on the leeway we have within our model.

So for instance, for sales, maybe we don’t need to be that precise. Maybe we can actually allow a little more leeway. But if we’re looking at health care data, for instance, where a decision we make on this model could be a life-or-death situation, maybe we need to be more stringent with our hypothesis here, and take a little more caution around this p-value.

But for the sake of time, let’s just say we’re looking at a p-value of 0.05, and we’re going to call anything less than that statistically significant. We can see here that this model is indeed statistically significant, and that our sales are moving up over time. And that’s the truth, based on the statistics there.
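Under the hood, a linear trend line is ordinary least squares regression. The slope, intercept, and R-squared Tableau reports can be sketched in plain Python; the p-value needs a t-distribution and is omitted from this sketch. The monthly sales numbers are invented for illustration:

```python
# Ordinary least squares: slope, intercept, and R-squared behind a linear
# trend line. A concept sketch, not Tableau's implementation.
def linear_fit(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r_squared = 1 - ss_res / ss_tot          # share of variation explained
    return slope, intercept, r_squared

# Months 1..6 with generally rising sales:
slope, intercept, r2 = linear_fit([1, 2, 3, 4, 5, 6], [40, 42, 41, 45, 47, 50])
print(slope, r2)   # positive slope: sales trend upward over time
```

A positive slope is the numeric version of the upward-moving dotted line, and R-squared quantifies how tightly the points hug it.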

Now, we also saw that we had different regression models in here. Let me explain some of those. The linear line is a very simple approach. But what I love about Tableau is they’ve made it very easy, even for non-statisticians, to look across these options visually and decide which model would best fit or best represent your data.

You can see, if our data moved from bottom left to top right, very linear, we could add the linear model, and it’s probably going to fit the data the best. But looking at these options, this polynomial model is what stood out to me.

We can see I do have some seasonality within my data here for our sales. And this polynomial model will allow the trend line to bend and follow some of those seasonal patterns. So I’m going to drop trend line onto that. And we can see, again, just visually, that this trend line better represents the data than that linear line did.

We have this period where we saw an increase. And then it almost stayed the same across this time period here. Sales didn’t really start picking up until right about this moment here. And we can see that it’s starting to trend up very quickly after that.

Again, if we hover over our trend line, we can see we have that model there, the calculation. We have our R-squared value and our p-value. And we can see our R-squared is slightly higher than what we saw with the linear line, which I think makes a lot of sense, just based on the way this looks.

And then we have our p-value, which is still showing us that it’s statistically significant, which is a great sign.

One last thing to cover here with our trend line models: if you right-click on the line, you can go to Describe Trend Model. And for those of us that are a little more statistically savvy, this is going to give us the entire set of summary statistics.

So we can see we have all of our coefficients, our R-squared, our degrees of freedom, and so on, and we can start diving into the data a little more to make sure everything is good. Or maybe we even want to bring in some of these coefficients and recreate that model using calculated fields, and build this line out ourselves, so that we can start predicting further out.

Now the next thing I’ll cover is forecasting. So I’ll switch over here, and you can see forecasting does just that: it forecasts forward a set number of periods in our data, and we can start looking out into the future.

Now just like the trend line, if I drag this in, you can see I’m presented with this option. Underneath the hood, Tableau currently uses a model called exponential smoothing for its forecasting. It’s a fairly advanced model. Really, it’s very similar to a rolling average, except it takes the most recent periods within our data and weights them more heavily than the periods further back.

So as we move forward into the future, those most recent values, again, are weighted more heavily, so we get a slightly more accurate estimate from our data.
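As a concept sketch, simple exponential smoothing is just a weighted blend where the newest observation counts the most. Tableau’s actual forecasting model is richer (it also estimates trend and seasonality), but the core idea in Python looks like this; the series and the alpha value are made up:

```python
# Simple exponential smoothing: each new value is blended with the running
# level, so recent observations carry more weight than older ones.
# A toy sketch of the idea, not Tableau's forecasting model.
def exp_smooth(series, alpha=0.5):
    level = series[0]
    for value in series[1:]:
        level = alpha * value + (1 - alpha) * level   # blend new value with history
    return level   # serves as the one-step-ahead forecast

forecast = exp_smooth([100, 104, 101, 110, 120], alpha=0.5)
print(forecast)   # 112.875: pulled strongly toward the most recent values
```

Notice the forecast sits much closer to the last few observations than a plain average of the series would, which is exactly the "weight recent periods more heavily" behavior described above.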

So just like our trend line, I can see here, if I right-click on any of this and I hover over Forecast, I’m presented with these options. So I can actually change some of the forecasting options here by selecting this value. And it’s going to bring up this menu. And this allows us to start playing around with our forecast.

So I can change the amount of time I want to forecast. So let’s say I only want to forecast maybe six months into the future. I can simply change that to six rather than 12. And you can see, my forecast adjusts here.

Sometimes, we don’t want to really overwhelm our users. And obviously, as we know, if we’re forecasting further out, it’s going to have a little bit more volatility with the accuracy of that forecast. So again, we don’t want to overpromise, especially with some of these assumptions we’re making.

So you can adjust those forecasting options before you publish this. You can change a little bit of the sourcing and aggregation if you wanted to, and some of the modeling here. But you can also change the prediction intervals.

So again, like I said earlier, depending on your statistical knowledge and what you’re trying to analyze, you may want wider or narrower prediction intervals. You can see here, I can drop this to 90%, or I can increase it to 99%, which is going to widen that prediction interval. It’s saying the value we’re predicting is going to fall somewhere within this band, with 99% confidence.

We can go to 95%, or drop it to 90% to tighten that interval. But you really want to make sure you’re communicating those assumptions to your stakeholders, especially if we’re making sales or financial decisions based on these forecasts.

Another thing I absolutely love that Tableau does here is make it very easy, even for folks who are not very statistically savvy, to communicate how this is being calculated so your end stakeholders can understand it. And we can see here that within this menu, we actually get this nice dialog box, where it describes the model in plain English.

So you can literally copy and paste this out of that window, put it into your analysis, and that can be your foundation that you can use to set up exactly what’s going on here and explain it to your stakeholders so they understand.

The next one I’ll cover is our clustering model. If I add this to the view, we’re presented with the same option we were with the forecast. Now underneath the hood here, Tableau uses a model called k-means for its clustering. It computes an average, or centroid, for each cluster, and groups everything closest to that centroid together.
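The k-means loop itself is short enough to sketch. This is a toy one-dimensional version with made-up values and hand-picked starting centroids; Tableau’s implementation works across multiple variables and chooses starting centroids and the number of clusters for you:

```python
# One-dimensional k-means: assign each point to its nearest centroid,
# recompute each centroid as its cluster's mean, repeat until stable.
# A toy sketch of the algorithm family Tableau's clustering uses.
def kmeans_1d(points, centroids, iterations=10):
    clusters = [[] for _ in centroids]
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

# Two obvious groups of values, seeded with two rough starting centroids:
centroids, clusters = kmeans_1d([10, 12, 11, 50, 52, 49], [0, 100])
print(centroids)   # the centroids settle into the middle of each group
```

With two clearly separated groups, the centroids converge to roughly 11 and 50, one per group, which is the same "everything closest to its centroid" behavior you see when Tableau colors the marks.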

Now just like with our forecasts and our trend line, I can right-click and start describing the data, or change some of the variables within the model. So let’s preview that.

Jumping back into Tableau, I can right-click on Clusters here on our Marks card. And I have two new options here in our menu: Describe Clusters and Edit Clusters. If I edit clusters, we’re presented with this little menu, where we can change the variables that define the clusters, and how many clusters we want within our data.

And then over here, if I go to Describe Clusters, again, for those of us that are a little more statistically savvy, we can go in and start looking at some of those summary statistics, assessing how well our model is performing, making changes to it, and tweaking it as we go.

And you can see, we have a couple of tabs here. Under Summary, we can look at our models, and view our F-statistic, our p-values, and so on from here.

Now let’s go back to that edit cluster for a moment. I do want to talk through this a little bit and share some best practices or techniques. A lot of times, I’m presented with questions like, how many clusters should I have within my analysis? What makes the most sense?

And my answer to that is always, especially within Tableau, it’s so easy to just play around with it and see if it makes sense to you. So what I mean by that– let’s say I had eight clusters in here. So we can see, I have these distinct groupings as we move from the bottom up to the top.

And if I use my legend over here, I can actually highlight those. So I’m looking at each of these clusters. And I can see, they kind of have a lot of different marks within each cluster, which is great.

But once I get up to 5, I can see I only have maybe four marks here within that cluster. If I click on 6, only one mark on the view is being highlighted. 7, again, only one mark on the view. And then 8, again, just one mark on the view.

Now this might be a great way to look and see and analyze if you have any outliers. I do think visually, that might be a good way to do it. There’s probably better ways to look for outliers within your data.

But what this is telling me is, I’ve probably over-segmented this. I have these clusters up here, that we’re trying to define and make decisions on, with only one mark in them. I don’t think that’s a very good use of our time or our analysis. So what that’s telling me is I need to back off the number of clusters I have in the view.

So let’s just take this down to 5 and see what that looks like. So you see, I have 1, 2, 3, 4, 5. This one only has 2 marks in it. Maybe that’s still a little too much. Let me bump it down to 4 and see what that looks like.

Now that’s starting to look a little better. We can see we have a distinct grouping for cluster 3, and then this distinct grouping for cluster 4 up here. And that actually makes sense to me. You can almost draw a line between those two groups and clearly see that something is causing a difference there.

And maybe we can start making some assumptions, taking these groups of customers and coming up with different ways to market to them. If this was products, maybe we’re analyzing them and coming up with different ways we can price our products as a group. Or maybe we want to adjust the price for products as a group, rather than trying to go through and adjust the price of thousands of products one at a time. This is a good way to gauge that.

So this clustering model is a great way to get a sense for some of that stuff. And again, begin implementing and making some assumptions about your data.

Now I’m going to move on to my favorite, which is this custom section. And I say this is my favorite one, because it really gives you the ability to implement any kind of statistical technique you want to and apply that technique visually within your view. So let me show you exactly what I mean by that.

You can see here we have Reference Line, Reference Band, and Distribution Band. And again, we’re presented with a box plot option here. For now, let’s just focus on Reference Line. And you can see, if I drag this onto our canvas, I’m presented with this little box up here.

I can add that reference line across different levels of the view. I can also independently add it to a specific level and/or axis on the view. So you can see, I have a lot of flexibility and freedom to do whatever I want.

For now, I’m just going to add this to sum of sales across my table. And you can see, I’m presented with this dialog box. Here, I can see that it’s a reference line, denoted up here at the top, and that it’s going across the entire table, which is that scope, the level of detail we want it calculated at.

And you can see that it’s giving me some options here to change what I want for the reference line. It’s looking at sum of sales. I can change that from average, to maybe median. And you can see, that’s taken effect in my view live. I can change it to the minimum. It’s going to draw that reference line around the minimum, the maximum, average, and so on.

I also have the ability to start formatting that line. So I can change the color, the thickness, the opacity of it, and all of those options directly from here.

Now one thing that I love about this, again, is that it allows us the ability to go in here, and on the fly, make changes. So you can see, I can automatically switch the band. I can switch back to line. I can switch the distribution. And each of these presents their own unique analysis and their own unique way of implementing something new for your end user.

So if we look at band, I can see I can create a band from and to specific values within my view. So again here, we’re looking at sum of sales. Let’s say I want to draw a band from the average to the minimum. And you can see, I can very easily add that into my view for context.

Let’s say I wanted to change that to median. Again, it’s very simple to add these bands and change these settings. And again, at the bottom here, we’re presented with some formatting.

Now the line and the band, I think, both represent very unique things you can implement within your view that provide, again, more context for your end user. But I find that the distribution presents some of the most powerful analyses we can implement.

So you can see here, if I select distribution, it’s presenting us with these options and showing us how we want that distribution band calculated. Currently, it’s based on these percentages of the average, so 60% to 80% of the average. I can change that to percentiles, quartiles, as well as standard deviation.

Standard deviation is where we can start making some really powerful assumptions. So you can see, I’ll just add that to our view here. And for now, I’m going to close this menu for a second and focus on this. This is drawing a distribution band based on plus or minus one standard deviation from the average.

From this, we can start making some clear identifications as to what would stand out as outliers, or rather as anomalies. Not necessarily outliers; maybe that’s a poor choice of words. But anomalies, I would say, is fair game.

That means the values that fall outside of that range aren't going to happen very often; the band itself actually contains quite a bit of the data within our view. So anything that falls outside of that banding would be considered anomalous.

Now what are some powerful things you can do with something like this? A great use case, and I built this view to preview with you all, is classifying whether a value is expected, a bad anomaly, or a good anomaly in terms of profit. We can see anything that falls down below here I've identified with some conditional formatting as a bad anomaly, and anything that falls above these standard deviation bands I've considered a good anomaly.
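One simple way to drive that kind of conditional formatting is a calculated field that classifies each mark against the standard deviation band. This is just a sketch in Tableau's calculation language; the field name, the use of SUM(Profit), and the one-standard-deviation threshold are my own assumptions, not necessarily what the workbook uses:

```
// Hypothetical "Anomaly Type" field (a table calculation,
// computed across the marks in the view)
IF SUM([Profit]) > WINDOW_AVG(SUM([Profit])) + WINDOW_STDEV(SUM([Profit])) THEN
    "Good anomaly"
ELSEIF SUM([Profit]) < WINDOW_AVG(SUM([Profit])) - WINDOW_STDEV(SUM([Profit])) THEN
    "Bad anomaly"
ELSE
    "Expected"
END
```

Dropping a field like this on the Color shelf is what applies the per-mark formatting.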

And you can see here, I've labeled these clearly for our end users to understand. The band in the center is going to contain roughly 68% of our data, and visually, that makes sense. I'd say about 68% of our marks are falling within this central banding.

Now outside of that, we have a distribution band that contains about 95% of our data. Anything that falls within it, again, makes sense. This is actually a really good example: the view has about 100 marks, and only around four of them fall outside of that banding. So you can see, that's about 95% of the data inside.

And again, I’ve added some conditional formatting here based on the standard deviation, which is a function within Tableau. So you can actually use your custom field or your custom section within your analytics pane, create calculated fields to drag in into the view, and you can create some of these very custom calculations and statistical models visually, using calculated fields in your analytics pane reference bands.

Just to show you a little bit of the art of the possible, I have built out this supplemental workbook. It's available on our Tableau Public profile, so you can download it today and hopefully reverse-engineer it and incorporate these very powerful techniques into your own work.

So one is using standard deviations to identify anomalies within our data. I've also created a view that looks at the median with quartiles; I'll preview that. This is essentially a box plot, but presented a little better visually, I would say.
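A median-with-quartiles band like this can be built from calculated fields similar to the sketch below. Again, the field names and the SUM(Sales) measure are assumptions for illustration:

```
// Hypothetical "Lower Quartile" field: 25th percentile across the marks
WINDOW_PERCENTILE(SUM([Sales]), 0.25)

// Hypothetical "Median" field
WINDOW_MEDIAN(SUM([Sales]))

// Hypothetical "Upper Quartile" field: 75th percentile across the marks
WINDOW_PERCENTILE(SUM([Sales]), 0.75)
```

A reference band from the lower to the upper quartile, plus a reference line at the median, recreates the core of a box plot without the visual clutter of the default box plot marks.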

Just to show you exactly what I mean, if I take the same view and add a box plot to it, you can see we'd be presented with something like that. It's very unappealing to our end users. It actually muddies up the view and, I'd say, presents more questions than it answers.

But if we use custom calculated fields and custom reference bands, we can implement that same analysis very easily and present it in a much more visually appealing way that makes more sense to our end users.

And the last one I'll preview with you is called a z-score test. This, again, uses that custom reference banding and calculated fields to build out a custom view that identifies these anomalies.

So to preview that with you, I'll open and edit this calculated field. Essentially, this is the calculation that computes a z-score for each mark on the view. Then, based on the z-score assigned to each mark, I've built a z-score formatting field that applies custom conditional formatting to each mark based on its value, so we can identify and visually show the different outliers or anomalies within our data.
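The z-score pattern can be sketched in two calculated fields. This is an illustration only: the field names, the SUM(Sales) measure, and the two-standard-deviation cutoff are my assumptions, not necessarily what the workbook on Tableau Public uses:

```
// Hypothetical "Z-Score" field: how many standard deviations
// each mark sits from the average of the marks in the view
(SUM([Sales]) - WINDOW_AVG(SUM([Sales]))) / WINDOW_STDEV(SUM([Sales]))

// Hypothetical "Z-Score Formatting" field driving the colors
IF ABS([Z-Score]) > 2 THEN "Anomaly" ELSE "Expected" END
```

Dropping the formatting field on Color flags any mark more than two standard deviations from the mean, which is the same logic the standard deviation reference bands express visually.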

Thank you for watching today’s video. This has been Ethan Lang with Playfair Plus.

Join Playfair+ Today

All members receive exclusive access to 160+ videos and counting across eight learning paths.