The Forecast management report gives you insight into how different forecast models perform on your historical data and is designed to help you select the best model for your business. This article walks you through setting up the report, adding models, and interpreting the different sections, and answers some frequently asked questions.
Navigating to the forecast management report
You can navigate to the report by clicking on Forecasts > Management. Access to this report is restricted by role (team lead, manager and admin).
Setting up the report
The Forecast management report can be used to select the best forecast model for your needs. The first step in this process is selecting a configuration that makes sense in the context of your support org.
Filters
- Channel and Queue: you must select a Channel at a minimum; you can then optionally filter by Queue.
- Date range: the forecast range you’re interested in looking at. Clicking on the date range opens a date module where you can choose from a list of preset ranges or set a custom range. Once you've selected your range, click Apply.
- Interval: the granularity of your forecast. For example, if you schedule your agents on an hourly basis, you should select a 1-hour interval; if your agents are scheduled for 1-day shifts without smaller events, it makes sense to evaluate forecasts over 1-day intervals.
- Forecast lock preview: this allows you to set a “lock” on your forecast. For example, if you create all of your schedules 1 month in advance, use a forecast lock preview of 30 days to see how accurate each model would have been 30 days out, whereas if you staff the day before, use 0 or 1 days (see the sketch after the example below).
- Forecast on: this dropdown lets you choose the metric type you’d like to forecast on: new contacts, reopened contacts, or productivity.
Example: if you’re interested in finding the best forecast model for your “Email” Channel for “Queue A”, and you build schedules 14 days out with 15-minute intervals, select the following filters:
- Channel: Email
- Queue: Queue A
- Forecast on: New cases
- Forecast lock preview: 14 days
- Date range: {The date range you wish to use to judge accuracy}
- Interval: 15 minutes
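To make the lock preview concrete, here is a minimal Python sketch of the idea, not the product's actual implementation: for each interval, only a forecast snapshot generated at least the lock preview number of days in advance is eligible for evaluation. The `snapshots` structure and `locked_forecast` function are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical sketch: `snapshots` maps the date a forecast was generated
# to that snapshot's forecasted values, keyed by interval date.
def locked_forecast(snapshots, interval_date, lock_preview_days):
    # Only snapshots generated at least `lock_preview_days` before the
    # interval are eligible for evaluation.
    cutoff = interval_date - timedelta(days=lock_preview_days)
    eligible = [d for d in snapshots if d <= cutoff]
    if not eligible:
        return None  # no forecast existed that far in advance
    # Use the most recent snapshot that respects the lock.
    return snapshots[max(eligible)].get(interval_date)

# Example: schedules built 14 days out -> lock preview of 14 days.
snapshots = {
    date(2024, 5, 1): {date(2024, 5, 20): 120},
    date(2024, 5, 10): {date(2024, 5, 20): 135},
}
# The May 10 snapshot is too recent (less than 14 days before May 20),
# so the May 1 snapshot's value of 120 is the one evaluated.
print(locked_forecast(snapshots, date(2024, 5, 20), 14))  # 120
```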
Accuracy comparison table
This table lets you compare existing forecast models based on the Channel and Queue you’ve selected.
The Most accurate tag marks the model with the lowest mean absolute error. In some cases, you might want to err on the safe side and choose the model with the lowest under error; ultimately, you should select the model that is most suitable for your org.
The table also shows whether there are any outliers in this period and links to the Forecasts > Configuration page where you can edit or add them.
Adding models
You can add existing models to the comparison table by clicking Select a model for comparison and selecting your model of choice. This only shows existing models for the Channel/Queue you are currently filtering on. Note: more models are available or configurable for your organization if you reach out to our Support team.
Removing, deleting or promoting models
You can remove a model from the table, or delete or promote it, by clicking the 3 dots to the right of the model’s name and using the dropdown menu:
- Remove comparison: this removes the model from the comparison table; it does not delete the model.
- Delete model: this deletes the model for your organization.
- Set as current model: this option is only available if the model isn’t already the model currently used in production for the selected Channel and Queue. Note that models are configured by Channel and Queue, so if you wish to use the same model for 2 different Queues in the same Channel, you will have to set it as the current model for each Queue separately.
New cases forecasts and actuals
You can inspect more detailed forecast values using the graph at the bottom of the report.
The forecast models and date range in this graph are based on the models selected in the Accuracy comparison table and the Date range filter.
- Click Export forecast values to export the values of the selected models as a CSV.
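If you want to analyze the exported values yourself, a short sketch like the following can get you started. The file name and column names ("model", "forecast", "actual") are assumptions; check the header row of your own export and adjust accordingly.

```python
import csv

# Hypothetical file and column names; adjust to match your export.
with open("forecast_values.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Group (forecast, actual) pairs per model for further analysis.
by_model = {}
for row in rows:
    pair = (float(row["forecast"]), float(row["actual"]))
    by_model.setdefault(row["model"], []).append(pair)

# Example analysis: mean absolute error per model.
for model, pairs in by_model.items():
    mae = sum(abs(f - a) for f, a in pairs) / len(pairs)
    print(model, round(mae, 2))
```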
Metric definitions
- Mean absolute error (MAE): the average absolute difference between the forecast and the actual for each point. Calculated as: MAE = (1/n) × Σ |forecast_t − actual_t|, where n is the number of intervals in the selected range.
- Mean absolute percentage error (MAPE): the average percentage difference between the forecast and the actual for each point. Calculated as: MAPE = (1/n) × Σ (|forecast_t − actual_t| / actual_t) × 100%.
Note: when the actual value is 0, this results in an invalid evaluation due to division by zero. This is especially common at smaller time intervals and in low-volume queues.
- Over error: the mean absolute error computed only over intervals where the forecast was above the actual.
- Under error: the mean absolute error computed only over intervals where the forecast was below the actual.
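As a concrete reference, the following Python sketch shows one plausible reading of these four definitions, computed from paired forecast and actual values. It is an illustration, not the product's exact calculation; in particular, averaging the over and under errors over only their own intervals is an assumption.

```python
def accuracy_metrics(forecasts, actuals):
    """MAE, MAPE, over error, and under error for paired values."""
    abs_errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
    mae = sum(abs_errors) / len(abs_errors)

    # MAPE is undefined when any actual is 0 (division by zero); this is
    # why the report can show an infinity symbol for low-volume intervals.
    if any(a == 0 for a in actuals):
        mape = float("inf")
    else:
        mape = sum(e / a for e, a in zip(abs_errors, actuals)) / len(actuals) * 100

    # Over/under error: MAE restricted to intervals where the forecast was
    # above (over) or below (under) the actual.
    over = [f - a for f, a in zip(forecasts, actuals) if f > a]
    under = [a - f for f, a in zip(forecasts, actuals) if f < a]
    over_error = sum(over) / len(over) if over else 0.0
    under_error = sum(under) / len(under) if under else 0.0
    return mae, mape, over_error, under_error

# Example: a model that under-forecasts two of three intervals.
print(accuracy_metrics([90, 110, 95], [100, 100, 100]))
# (8.33..., 8.33..., 10.0, 7.5)
```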
Frequently Asked Questions (FAQ)
- Q: Why does the historical model only appear when my forecast lock preview is set to 0?
- A: The historical model is special: it is a snapshot of the forecasts that were actually in your staffing timeline at a given point in time. These values could have been generated at any time before their intervals occurred, so we cannot artificially recreate them with different lock previews. This model lets you compare what your forecasts actually were to what they could have been had you been using a different model at the time. The lock period is configurable if you reach out to our Support team. If the lock period is 7 days, for example, the historical accuracy will be available when the Lock preview is set to 7.
- Q: Why do some of the values in the table show the infinity symbol?
- A: If no actuals came in for a certain interval in the selected time range, the mean absolute percentage error calculation involves a division by 0. In this case we recommend either selecting a different date range during which there are actuals for all intervals, or falling back to the average contact error rather than the error rate.
- Q: How do the numbers in this module differ from the Forecasted vs actual report?
- A: The historical forecast values are the same in Forecast management and Forecasted vs actual. The key difference is that Forecasted vs actual only shows what was actually forecasted and what is forecasted using the current model, with more detail on specific time intervals and some additional metrics.