Welcome to day 2 of Black Swan at Cannes Lions, where we are predicting the award winners all week from Cabana 4, outside the Palais des Festivals on La Croisette. Today sees more awards announced. Have we got our predictions right?

Our predictions:

Today we have followed the same method as yesterday (more on that below!), but to test whether social data improves accuracy, we have chosen to share both our Level 1 and Level 2 predictions.

Level 1 (using historic winners data) can be seen live in our Cannes Lions app here: http://tenthavenue.blackswan.com/canneslions/#/awards 

And our Level 2 predictions (including today’s social buzz) are presented below:

Based on our analysis, our Predictions for the Cannes Lions Media Award are:

  • HIDDEN MESSAGES by Instituto Maria da Penha
  • #NOTBROKEN by Mondelez International
  • INTERCEPTION by Volvo


Based on our analysis, our Predictions for the Cannes Lions PR Award are:

  • RUNWAY HAIR, YOU CAN WEAR by TRESemmé
  • PEPPER HACKER by Dolmio
  • INTERCEPTION by Volvo


Based on our analysis, our Predictions for the Cannes Lions Outdoor Award are:

  • MOTHS by The City of Buenos Aires
  • SLEDGEHAMMER by Musée de la Grande Guerre du Pays de Meaux
  • +5 by Tok&Stok


Based on our analysis, our Predictions for the Cannes Lions Creative Effectiveness Award are:

  • THE AUTOCOMPLETE TRUTH by UN Women
  • THE WORLD’S TOUGHEST JOB by American Greetings
  • SPEAKING EXCHANGE by CNA


So yesterday, we didn’t get it right – but in the words of our Data Scientist Dick Fear,

“In some ways I’m glad the forecast was wrong, because now I get to talk about probability!”

We used a probabilistic model, but a probability is not the same thing as a prediction. We like to think in certainties, so if a model tells us that x will happen 70% of the time, we are liable to ‘predict’ that x will happen every time. But that is clearly not the case. In the words of Nate Silver:

“If you forecast that a particular incumbent congressman will win his race 90 percent of the time, you’re also forecasting that he should lose it 10 percent of the time. The signature of a good forecast is that each of these probabilities turns out to be about right over the long run.”
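
To make Silver’s point concrete: calibration is something you can actually check. Here is a minimal sketch in Python, using invented forecast data (not our Cannes numbers), that groups forecasts by their stated probability and asks how often each group came true:

```python
from collections import defaultdict

# Hypothetical history of (stated probability, actual outcome) pairs.
# A well-calibrated forecaster's 90% calls should come true ~90% of the time.
history = [(0.9, True), (0.9, True), (0.9, False), (0.9, True), (0.9, True),
           (0.7, True), (0.7, False), (0.7, True), (0.3, False), (0.3, True)]

buckets = defaultdict(list)
for prob, happened in history:
    buckets[prob].append(happened)

for prob, outcomes in sorted(buckets.items()):
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"forecast {prob:.0%}: came true {hit_rate:.0%} of the time (n={len(outcomes)})")
```

With a long enough history, a forecaster whose 90% bucket comes true far more or far less than 90% of the time is the one with a problem, however the individual calls turned out.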

People often forget about that 10%. Human beings, as a rule, aren’t very good at grasping probability – the classic example is the gambler’s fallacy. If you roll a die ten times and get ten sixes in a row, an eleventh six is no more or less likely than the first six you threw, which seems to go against everything you instinctively feel is true.
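
You can check the independence for yourself with a quick simulation (a throwaway sketch, not part of our model – and we use a streak of three sixes rather than ten so it runs quickly):

```python
import random

random.seed(42)

# After seeing three sixes in a row, how often is the *next* roll a six?
hits = []
while len(hits) < 10_000:
    streak = 0
    while streak < 3:
        streak = streak + 1 if random.randint(1, 6) == 6 else 0
    hits.append(random.randint(1, 6) == 6)

print(f"P(six after a streak of sixes) = {sum(hits) / len(hits):.3f}")  # ~0.167, i.e. 1/6
```

The streak tells the die nothing; the next roll lands on six about one time in six, exactly as the first one did.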

There are a lot of pitfalls in forecasting. For one thing, the past does not necessarily tell us the future, even with a large dataset. Say we were trying to forecast global warming and discovered a high correlation between rising temperatures and the dropping number of pirates worldwide since the 1800s (this is actually true!). We could have all the data in the world on pirates and average temperatures, and yet a naïve model of this dataset would still tell us that increasing the population of pirates will stop global warming! This is why ‘black box’ models are rarely effective.

To build an effective model is to understand the relationships within the data: the paths of cause and effect, the reasons behind a given result. This does not, however, mean every forecast will be what you expect, as in Nate Silver’s example. What’s important is that you will have gained insights in the creation of the model, insights which can often be more valuable than the forecasts themselves.
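
The pirate trap is easy to reproduce in spirit: any two series that merely share a trend over time will correlate strongly, whether or not they have anything to do with each other. A toy illustration with invented numbers:

```python
import random

random.seed(0)
years = range(1820, 2015)

# Two series that merely share a trend: pirates decline, temperatures rise.
pirates = [50_000 - 250 * (y - 1820) + random.gauss(0, 1500) for y in years]
temps   = [13.5 + 0.005 * (y - 1820) + random.gauss(0, 0.15) for y in years]

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(pirates, temps):+.2f}")
# Strongly negative -- yet hiring pirates will not cool the planet.
```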

In case you missed it yesterday, here is how we worked it out:

Our team of Data Scientists formulated a Two-Level Methodology:

Level 1:

  • We analysed historic nominee and winner data, noting any past successes which may indicate success in 2015
  • We created a statistically informed character profile for each nominee, then used a ‘discrete choice’ approach to infer the winners (a sketch follows below)
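
We won’t publish the model itself, but in its simplest form a discrete choice model scores each nominee on its profile features and turns the scores into win probabilities via a softmax (a multinomial logit). A minimal sketch – the entries, features and weights below are invented for illustration:

```python
import math

# Hypothetical profile features per nominee: (past Lions won, shortlist count).
nominees = {
    "ENTRY A": (2, 5),
    "ENTRY B": (0, 8),
    "ENTRY C": (1, 3),
}

# Invented weights, standing in for coefficients a fitted model would learn.
weights = (0.8, 0.3)

def score(features):
    return sum(w * f for w, f in zip(weights, features))

# Multinomial logit: P(win) = exp(score) / sum of exp(score) over all nominees.
scores = {name: score(f) for name, f in nominees.items()}
denom = sum(math.exp(s) for s in scores.values())
probs = {name: math.exp(s) / denom for name, s in scores.items()}

for name, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.1%}")
```

The useful property for our purposes is that every nominee gets a probability and the probabilities sum to one, so the output is a ranking with stated uncertainty rather than a single flat pick.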

Level 2:

  • We enriched the profiles with social data – we called this ‘Lion Watch’ – then picked up ‘buzz’ on any nominees we may have considered ‘outliers’ in Level 1
  • Next, we monitored any significant increases in positive mentions, cross-referenced with the Level 1 inference (sketched after this list)
  • We predict!
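
The shape of the Level 2 step, again as a hedged sketch rather than our actual pipeline (the mention counts and the blending weight below are invented), is to nudge the Level 1 probabilities by each nominee’s share of positive buzz:

```python
# Level 1 win probabilities from the discrete choice model (hypothetical).
level1 = {"ENTRY A": 0.50, "ENTRY B": 0.30, "ENTRY C": 0.20}

# Positive social mentions picked up today (invented counts).
mentions = {"ENTRY A": 120, "ENTRY B": 540, "ENTRY C": 90}

ALPHA = 0.3  # weight on the social signal -- a judgment call, not a published value

# Blend the two signals; both components sum to 1, so the blend does too.
total_mentions = sum(mentions.values())
level2 = {
    name: (1 - ALPHA) * p + ALPHA * mentions[name] / total_mentions
    for name, p in level1.items()
}

for name, p in sorted(level2.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.1%}")
```

A burst of positive mentions can promote a Level 1 outlier up the ranking, which is exactly the behaviour we wanted social data to add.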

Come back tomorrow for more – we’ll be predicting Design, Product Design, Radio and Cyber Grand Prix.

Follow Black Swan’s predictions, insights and other activity around Cannes Lions 2015 on our Twitter feed: @BlackSwanData

Lara is Communications Manager at Black Swan.