Machine learning is, as Arthur Samuel put it, the “field of study that gives computers the ability to learn without being explicitly programmed”. Traditionally, computer programs are made up of actions and commands that have been explicitly coded into them to solve a given problem. With machine learning, by contrast, instead of writing a rigid set of instructions based on features that we believe or know to be important, we give a program the data and the targets we want to achieve, and let it work out which signals are important and how to react to them. Instead of writing programs that solve problems, we write programs that learn to solve problems. An example of this in action is below. Here, the only input was the game pixels; no one told the program how to play. The aim was to maximise the score, and the only thing the computer could control was whether to move left or right. Using the machine learning algorithm Deep Q-Learning, the program was able to learn from, and act on, previous experiences until it was (quite quickly) playing at a superhuman level.
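The game example above used Deep Q-Learning, where a neural network estimates, for each possible action, the score the program can expect to earn. The core learning rule is easiest to see in its tabular ancestor, plain Q-learning. The sketch below applies it to a hypothetical toy problem of our own (none of the names or parameters come from the original research): a short corridor where, as in the game, the only actions are left and right, and reward comes only from reaching the goal.

```python
import random

# Toy environment (illustrative, not from the post): the agent starts in
# the middle of a corridor of N cells and earns a reward of +1 only by
# reaching the rightmost cell. Its only actions are 0 (left) and 1 (right).
N = 5           # corridor length
ALPHA = 0.5     # learning rate
GAMMA = 0.9     # discount factor for future reward
EPSILON = 0.2   # exploration rate

# Q-table: estimated future reward for each (state, action) pair.
# Deep Q-Learning replaces this table with a neural network, which is
# what lets the same idea scale up to raw game pixels.
Q = [[0.0, 0.0] for _ in range(N)]

def step(state, action):
    """Move left or right; reward 1 when the goal (rightmost cell) is reached."""
    nxt = max(0, state - 1) if action == 0 else min(N - 1, state + 1)
    reward = 1.0 if nxt == N - 1 else 0.0
    return nxt, reward, nxt == N - 1  # next state, reward, episode done?

random.seed(0)
for episode in range(200):
    state, done = N // 2, False
    while not done:
        # Explore occasionally; otherwise act greedily on current estimates.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] > Q[state][1] else 1
        nxt, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward the reward received
        # plus the discounted value of the best action in the next state.
        target = reward + GAMMA * max(Q[nxt])
        Q[state][action] += ALPHA * (target - Q[state][action])
        state = nxt

# After training, the greedy policy in every non-goal state is "move right".
policy = ["left" if Q[s][0] > Q[s][1] else "right" for s in range(N - 1)]
print(policy)
```

No one tells the program that "right" is the correct answer; the preference emerges purely from trial, error and reward, which is exactly the point of the paragraph above.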
Another example is ‘AlphaGo’, the first computer program ever to defeat a human professional player at the game of Go. Due to the complexity of the game (the number of possible configurations of the board is greater than the number of atoms in the universe) and its intuitive nature, Go has long been viewed as one of the greatest challenges for artificial intelligence. A victory like this was not expected for at least another decade; nevertheless, the paper announcing it was published in January this year.
Many machine learning systems work using artificial neural networks, which are loosely modelled on the biological neural networks in our brains. Each individual neuron performs only a simple computation, but layers of neurons can be stacked (making the network ‘deeper’) to work out far more complex problems. It’s worth mentioning that this is not new technology; it has been around since the mid-1900s. Considerable progress has been made in recent years, however, thanks to advances in computer processing power and in the capability and availability of data. Google were previously limited to 5-6 of these layers, but they are now able to work with 25-30, dramatically increasing the complexity of the problems and processes that they are able to compute.
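The idea that stacked layers can solve problems a single neuron cannot is easy to demonstrate with XOR ("output 1 when exactly one input is 1"), the textbook example: no single neuron can compute it, but two layers working together can. The sketch below is purely illustrative, with hand-picked weights rather than learned ones.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs through a sigmoid."""
    s = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-s))  # squashes the sum into (0, 1)

def layer(inputs, weight_rows, biases):
    """One layer = several neurons all reading the same inputs."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]

def xor_net(x1, x2):
    # Hidden layer: one neuron approximates OR, the other approximates AND.
    # (Weights are hand-picked for illustration, not learned.)
    hidden = layer([x1, x2], [[10, 10], [10, 10]], [-5, -15])
    # Output layer combines them: "OR but not AND", which is XOR.
    out = layer(hidden, [[10, -10]], [-5])
    return round(out[0])

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", xor_net(a, b))
```

Each neuron here does something trivially simple; the second layer gets its power by building on the first, which is the same principle that makes the 25-30 layer networks mentioned above so much more capable than shallow ones.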
Well, this all sounds great, but why is it important for me?
Machine learning systems are already being used in a lot of the software we use on a regular basis, including speech recognition, Google Translate, face recognition, predictive text, Amazon product recommendations, LinkedIn’s “People you may know” and Facebook’s newsfeed. It is also driving advances in science, medicine and other technology, including self-driving cars. Furthermore, since October, Google have been using a machine learning system called ‘RankBrain’ to help rank organic results for “a very large fraction” of searches. They are still using other signals, but RankBrain is now the third most important of the hundreds that they use. Crucially, not only is machine learning being used, but it has outperformed the existing process in almost every situation it has been applied to.
Unsurprisingly, machine learning systems are also being used in paid search. Baidu have apparently been using one to rank ads and have “seen a notable increase in revenue as a result”, and this technology is also available in AdWords automated bidding and in DoubleClick Search Bid Strategies. At Periscopix we are particularly excited about the latter, as they allow us to combine the detail-orientated approach that we pride ourselves on with the ability to optimise to levels that we, as humans, were previously unable to reach. I’ll be back shortly with another blog post with more information on these, and some results that we have seen so far (unless a robot writes a better one first).