Christmas 2020 fell on a Friday, which meant that Boxing Day was on a Saturday. In the UK, Boxing Day is a public holiday, but because it fell on a weekend the following Monday (28th December) was a “substitute day” public holiday.
The database of country holidays used by Forecast Forge treats these as two different holidays; i.e. in 2020 there was a Boxing Day holiday on the 26th and an extra substitute-day Boxing Day on the 28th.
The last year with a substitute Boxing Day was 2015, so unless the historical data used in the forecast includes Christmas 2015, the Forecast Forge algorithm has no relevant training data from which to predict the holiday effect in 2020.
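You can see the same distinction in the open-source `holidays` Python package. I'm not claiming this is the exact database Forecast Forge uses; it is just a convenient way to show that the substitute day is recorded as a separate calendar entry, and that 2015 was its previous occurrence:

```python
# Sketch using the open-source `holidays` package (pip install holidays).
# This may not be the exact holiday database Forecast Forge uses; it just
# illustrates that the substitute day is a separate entry in the calendar.
import holidays

uk = holidays.country_holidays("GB", years=range(2015, 2021))

for date, name in sorted(uk.items()):
    if "Boxing" in name:
        print(date, name)

# Boxing Day appears every year, but an extra substitute/"observed" entry on
# the 28th only shows up in 2015 and 2020 - exactly the gap in the training
# data described above (the label varies between library versions).
```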
I tweeted this observation last week and Peter O’Neill replied with this:
Wait - are you saying that machine learning models will all just break on unexpected days?
— Peter O'Neill (@peter_oneill) December 28, 2020
Which I think is a very interesting question.
The algorithm is behaving exactly as it is designed to, so in one (important!) respect it is not broken at all. But on the other hand, if it isn’t doing what the user expects then, even if it isn’t fair to describe the procedure as “broken”, it is certainly far from ideal.
And this gets to the heart of why I think it is an interesting question; it is about how users interact with machine learning systems and what expectations are reasonable for such an interaction. And this is exactly the area where the user interface for Forecast Forge sits!
Let’s talk specifically about the substitute day bank holiday problem even though there must be about a million other similar issues.
Off the top of my head I can think of six different ways of modelling this:
It isn’t obvious to me that any one of these is always going to be better than all the others.
For a bricks-and-mortar store I can imagine that, in a non-Covid year, they would get a lot more footfall in a year with a substitute day than in a normal year. For an online brand I can see it not making very much difference, particularly for Boxing Day, which lies in the dead period between Christmas and New Year.
The good news is that Forecast Forge allows you to use any of these options by setting up appropriate helper columns. The bad news is that it doesn’t tell you at all which of these models you are getting when you use the default “country holidays” option.
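As a sketch of what such a helper column might contain (the column names and date range below are purely illustrative, not anything Forecast Forge requires), here is one way to build a 0/1 regressor that keeps the substitute day separate from regular Boxing Day, plus a one-line variant that merges them:

```python
# Sketch of helper-column values you could paste in alongside your dates.
# Column names and the date range are illustrative only.
import pandas as pd

dates = pd.date_range("2015-01-01", "2021-01-31", freq="D")
df = pd.DataFrame({"date": dates})

boxing_day = {f"{year}-12-26" for year in range(2015, 2021)}
substitute_boxing_day = {"2015-12-28", "2020-12-28"}  # years where the 26th fell on a weekend

day = df["date"].dt.strftime("%Y-%m-%d")
df["boxing_day"] = day.isin(boxing_day).astype(int)              # option: keep them separate
df["substitute_boxing_day"] = day.isin(substitute_boxing_day).astype(int)

# Or, to treat the substitute day as the same holiday as Boxing Day:
df["boxing_day_combined"] = (df["boxing_day"] | df["substitute_boxing_day"]).astype(int)
```

The idea is that values like these go into the helper-column range next to your dates; which of the models you get is then an explicit choice rather than something hidden in a default.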
It is a huge challenge for me to build a user interface that knows when a modelling choice has other options that the user might want to explore. People don’t use Forecast Forge because they want to have to think about this for every possible modelling decision; people who do want that level of control should be coding their own models. But, just as importantly, people do use Forecast Forge because they want a bit more control and customisation than a forecasting service where they get a result and can’t do anything about what it says, even if it is obviously wrong.
And not knowing what modelling decisions have been made behind the scenes, and what other options are possible, makes this much harder for users than I would like.
As Peter says:
My argument has always been for a tool that can learn from past behaviour but allow for a human to adjust levers based on known upcoming events using a human estimated weighting
— Peter O'Neill (@peter_oneill) December 28, 2020
And this would be the holy grail for human/machine learning interaction. It already exists for those who are confident with both coding and converting whatever levers a business might want to pull into mathematics. But these people are a rare (and expensive!) breed; I wouldn’t class myself as top tier at either discipline.
I have made design decisions with Forecast Forge that do restrict the levers available for pulling. My hope is that I have kept the most important ones, and that by building it into a spreadsheet my users have more flexibility in what they can do than they would with other tools.
But it is still short of what I wish it could be. Is there a way to hide the unnecessary complexity whilst making all the necessary complexity visible?