Learning deeper: a TensorFlow use case
Understanding what a new technology is and how it fits into your life used to be an easy task. The electronic spreadsheet, the iPod, fast printing and low-cost flights are advancements that are easy to grasp, easy to measure, and follow the ‘10x rule’: they are ten times cheaper, or faster, or better than the previous solution, and that’s why their adoption happened at a breakneck pace.
But as our world has grown increasingly complex, it has become quite difficult for all of us to discern between hyped lemons and real game-changers. And when it became harder to understand whether or not a product could be the ‘next big thing’, most companies actually decided not to choose at all.
The problem with this approach, and it is a pretty big one at that, is that standing still does not guarantee the preservation of the status quo. Much like the Red Queen in Lewis Carroll’s Through the Looking-Glass, sometimes you need to run just to keep your place. And what has worked in the past might not be what you need now.
A contagious case of linear regression
Case in point: linear regression. Invented more than a century ago, this tool became a staple of the analyst’s bag of tricks by being easy to implement, intuitive to interpret and (probably the biggest culprit) integrated as a one-click solution in most analytics packages.
The principle behind this method is extremely simple: map an input x to the predicted output y by multiplying it by a coefficient α. Then compare your prediction to the actual outcome, update your coefficient and repeat until the difference between prediction and reality is minimized.
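The predict–compare–update loop described above can be sketched in a few lines of NumPy. This is an illustration rather than the article’s own listing; the learning rate, iteration count and synthetic data are all assumptions made for the example:

```python
import numpy as np

# Synthetic data with a known true coefficient of 3, so we can check
# that the loop recovers it.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=100)
y = 3.0 * x + rng.normal(0, 0.1, size=100)

alpha = 0.0   # the coefficient we want to learn
lr = 0.1      # learning rate (an assumption, not a tuned value)

for _ in range(500):
    y_hat = alpha * x              # 1. predict
    error = y_hat - y              # 2. compare to the actual outcome
    grad = 2 * np.mean(error * x)  # 3. gradient of the mean squared error
    alpha -= lr * grad             # 4. update and repeat

print(alpha)  # close to the true coefficient of 3
```

After 500 iterations the coefficient has converged to roughly the value used to generate the data; that is the whole of linear regression fitted by gradient descent.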
While linear regression can provide valid estimates and be a decent tool for getting intuitive insights on mid-sized data, it fails miserably when confronted with more complex datasets, and it is generally outperformed by modern solutions that, believe it or not, are just as intuitive and easy to implement. In this case, we’ll use a neural network.
[Figure: a simple neural network. Both z and w are nonlinearly transformed (in our case with a rectified linear unit, or ReLU).]

A neural network can simply be explained as a regression of transformed regressions: instead of linearly mapping your input x to the prediction y, you first use it to detect features using so-called hidden layers. The output of each hidden layer (z and w in our figure) is fed forward into the following layer until the final layer, the prediction, is reached.
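The “regression of transformed regressions” idea fits in a few lines of NumPy. Here is a minimal forward-pass sketch with two hidden layers, mirroring the z and w of our figure; the layer sizes and the random weights are illustrative assumptions, not a trained model:

```python
import numpy as np

def relu(v):
    # the nonlinear transformation from the figure: max(0, v)
    return np.maximum(0, v)

rng = np.random.default_rng(1)
x = rng.normal(size=(5, 3))   # 5 samples, 3 input features

# Hypothetical weight matrices for two hidden layers and the output.
W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 4))
W3 = rng.normal(size=(4, 1))

z = relu(x @ W1)   # first hidden layer (z in the figure)
w = relu(z @ W2)   # second hidden layer (w in the figure)
y_hat = w @ W3     # final layer: the prediction

print(y_hat.shape)  # one prediction per sample: (5, 1)
```

Each layer is itself just a linear regression followed by a nonlinearity; stacking them is what lets the network model relationships a single linear map cannot.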
Since an example is worth more than a thousand words, we’ll show you a step-by-step comparison of past and future tools along with some very readable Python code you can easily try and tinker with on your own. So let’s get started!
For this example we’ll be working with the UCI Bike Sharing dataset. This is a very popular real-life dataset recording the number of bikes shared in Washington, D.C., aggregated hourly over the span of two years, along with a number of convenient predictors, such as ‘time of day’, ‘humidity’, ‘temperature’ and more.
With a couple of simple plots we can see that our data distribution follows what one might expect from such a dataset: more bikes are shared during normal working hours, when the climate is milder and when there isn’t much wind.
A more in-depth graphical analysis can be found in the complete code listing; but let’s cut to the chase and see how well our linear regression performs using the scikit-learn package, which executes a standard OLS linear regression in just a few lines of code:
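The embedded listing is not reproduced here, but the shape of it looks roughly like the following sketch. Note that we fabricate stand-in data of a similar shape so the snippet is self-contained; the real listing loads the UCI csv and its actual feature columns instead:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Stand-in for the bike-sharing features (hour, temperature,
# humidity, wind); the target mostly follows the first feature.
rng = np.random.default_rng(2)
X = rng.uniform(size=(1000, 4))
y = 100 * X[:, 0] + rng.normal(0, 10, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit ordinary least squares and score it on held-out data.
model = LinearRegression().fit(X_train, y_train)
score = r2_score(y_test, model.predict(X_test))
print(round(score, 2))
```

On this well-behaved linear stand-in OLS does fine; as the article goes on to show, the real dataset is far less forgiving.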
How did it do? Not very well: if we were to follow what our model prescribes, we’d end up with around 140 bikes too many or too few during the forecast period. And sure enough, our R² score is negative (a statistical quirk that can happen when expressing R² as 1 − ESS/TSS), meaning that our model is not effective at predicting the number of bikes we’d need to fulfill our demand and would never be useful in an actual production scenario.
Faced with a similar problem, a motivated analytics team would probably start adding quadratic terms to the regression function to model polynomial relationships, or simply try different functions in their favorite software package until an acceptable solution is reached. We sure tried, and the best we could do was a Random Forest model with an R² score of 56%.
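For the curious, a Random Forest baseline is just as short to write as the OLS model. Again this is a sketch on fabricated stand-in data (this time with a nonlinear relationship a linear model cannot capture), not the exact configuration behind our 56% figure:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# Stand-in data with a nonlinear interaction between two features.
rng = np.random.default_rng(3)
X = rng.uniform(size=(1000, 4))
y = np.sin(6 * X[:, 0]) * X[:, 1] * 50 + rng.normal(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An ensemble of decision trees, with default depth settings.
forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X_train, y_train)
score = r2_score(y_test, forest.predict(X_test))
print(round(score, 2))
```

The forest handles the nonlinearity that sinks plain OLS, which is exactly why it was the best of our conventional attempts; it just wasn’t good enough.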
But what if there was a better way? A technology that’s an order of magnitude better than our current toolkit?
Ten times better
Enter deep learning. Despite being touted as the latest cutting-edge advancement, deep learning as such has been around for the last 40 years. We simply didn’t have enough data and enough computing power to really let it shine. With modern implementations such as Google’s TensorFlow (which we use at Datatonic for most of our commercial projects), scalable online infrastructure and continuous advances in computing power, replacing our old tricks with new ones has never been this easy.
Case in point: our deep neural network model for the bike dataset. We explicitly decided to use tensorflow.learn (formerly SkFlow) to show how a state-of-the-art deep regressor can be implemented in as many lines of code as our standard linear models. And here it is:
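The original embedded listing is not reproduced here. To give a feel for how compact such a deep regressor is, here is a sketch using TensorFlow’s Keras API (the tf.learn interface mentioned above has since been folded into later TensorFlow releases); the hidden layer sizes, training settings and stand-in data are our assumptions, not the original configuration:

```python
import numpy as np
import tensorflow as tf

# Stand-in data in place of the bike-sharing csv.
rng = np.random.default_rng(4)
X = rng.uniform(size=(1000, 4)).astype("float32")
y = (np.sin(6 * X[:, 0]) * X[:, 1] * 50).astype("float32")

# Two ReLU hidden layers feeding a single linear output:
# the same shape as the network in our figure.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=50, batch_size=32, verbose=0)

preds = model.predict(X[:5], verbose=0)
print(preds.shape)  # one forecast per input row: (5, 1)
```

Swap the stand-in arrays for the real features and target, and this is essentially the whole forecaster.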
Doesn’t look that complicated, does it? And yet this model is able to predict our test set with 92% accuracy, reducing the Mean Squared Error by a factor of 10. Best of all, thanks to the amazing work of a number of very talented developers, this incredible increase in performance does not make the implementation much more complicated: on the contrary, with a model as simple as our standard OLS we are basically able to produce a production-ready forecaster.
If you’re feeling dizzy after all those numbers, here’s a visual representation of the linear model’s predictions vs the actual values for a random subset of our test set:
And here’s its counterpart, this time for the DNN model’s predictions:
There are just as many values on this plot as on the first one — the relative emptiness is simply due to the DNN model’s predictive power, as smaller differences between predicted and actual values lead to much shorter blue segments.
In this particular case we decided to keep the mood light by predicting bike sharing demand. But in today’s world (and especially if we consider the kind of projects we usually take on at the office) it might as well have been something crucially important — cancer occurrences maybe, or streams of products sold, or high-speed financial data, or the number of people crossing the border and in need of assistance. When the stakes are this high, a combination of old and new techniques can make the difference between greatness and failure.
That’s why there’s simply no excuse for any modern team to stop innovating. The technology to be at the cutting edge of the analytics game is out there: it’s demonstrably orders of magnitude better than the incumbent solutions, and extremely easy to deploy in a real-world environment.
If you want to see our code and play with it, we have bundled the entire codebase along with our visualization at this link, and the dataset at this link. Just install the needed dependencies and run it in your editor of choice (we generally use Jupyter Notebook on a Google Compute Engine instance).
And should you have any questions (or if you want to hear firsthand how we use this kind of insight to solve real problems for our clients), don’t hesitate to contact us via our website, or at our European or British headquarters!
the Datatonic team