It’s the 21st of October 2015, and today marks the day that intrepid adventurer Marty McFly travelled to the future to pretend to be his own son and save Hill Valley from the unconvincingly combed-over Biff Tannen. With that tenuously in mind, now seems like the perfect time to have a look at what our future might hold, in terms of SEO specifically, and digital technology more generally.
So let’s begin what will be a two-part series on machine learning and artificial intelligence, starting with the current developments in this very exciting field:
……………………………………………
“In the future, computers will see, hear, speak and even understand. Intelligent machines will form the backbone of what we call the invisible revolution: technologies interacting so seamlessly they become invisible.”
Patrice Simard [Microsoft]
Machine learning and artificial intelligence represent the next big step in technological development and in our interactions with our various mobile and desktop devices.
Recently, Google introduced Now on Tap to the latest Android devices with the aim of raising the bar in the field of predictive and intuitive search so that, ultimately, your device will know what you want it to do before you tell it.
Google have long been big players in the field, but they are soon to be joined by Apple who, despite arriving a little late to the party, have recently made moves suggesting they are now throwing their hat into the ring as a serious contender.
As the battle for AI supremacy starts to heat up, let’s go over a few of the current developments in the field, and take a look at what the future might hold.

Not quite yet…
……………………
The basic aim here is to develop technology that essentially allows computers to act like humans; to interact with us seamlessly (or “invisibly”), all the while retaining the capabilities that make them superior to us – the ability to make high-speed calculations, or to connect to the internet, for example.
Microsoft have spoken of their “vision where machines augment human capabilities” on a day-to-day basis.
(Thankfully, current AI projects are less likely to create a robot that will take over the world, and more likely to make one that will make you a sandwich as soon as you’re hungry.)
Indeed, Microsoft’s machine learning ventures have all been geared towards this basic aim of helping out with day-to-day tasks, whether it’s helping you book a table at a restaurant, recognising your friend’s face in a picture, or talking to your pals in Portugal and your buddies in Botswana without needing a dictionary – handy.
Google shares these aims (among others), explaining that “much of our work on language, speech, translation, and visual processing relies on Machine Learning and AI.”

“What are those?” – Someone in the future, probably
The Race is On
Perhaps the most recognisable foray into artificial intelligence that we’ve seen recently has been the release of personal digital assistants like Siri and Cortana from today’s tech giants. This is all in keeping with the aims we’ve mentioned already, namely making our lives that little bit easier.
Microsoft have evolved Cortana into a very personal personal digital assistant that really “learn[s] about your habits and quirks”, while their Skype Translator software that translates as you talk “provides endless possibilities for people around the world to communicate.”
And as of the latest iOS 9 update, Apple’s Siri has been markedly improved, with better answering capabilities and responses that are more nuanced and more accurate.
Google’s Now on Tap works as a pretty comprehensive digital assistant, supposedly ‘seeing’ your phone screen just as you do and offering a range of suggested commands based on the text in whatever app you have open. Google are, however, yet to release a fully-fledged talking assistant to rival Cortana or Siri.
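To give a flavour of the general idea (and nothing more than that), here’s a deliberately simplified Python sketch of “read what’s on the screen, then suggest something useful”. The patterns and suggestions below are entirely hypothetical and bear no relation to how Now on Tap is actually built, which works across whole apps at the operating-system level.

```python
import re

# A toy illustration of the "read the screen, suggest an action" idea.
# The patterns and suggested commands are made up for the sake of example.
SUGGESTIONS = [
    (re.compile(r"\b\d{5}\s?\d{6}\b"), "Call this number"),
    (re.compile(r"\b\d{1,2}[:.]\d{2}\s?(?:am|pm)\b", re.I), "Add a reminder for this time"),
    (re.compile(r"\bwww\.\S+|\bhttps?://\S+", re.I), "Open this link"),
]

def suggest_actions(on_screen_text):
    """Return a list of (matched text, suggested command) pairs."""
    actions = []
    for pattern, suggestion in SUGGESTIONS:
        for match in pattern.finditer(on_screen_text):
            actions.append((match.group(), suggestion))
    return actions

print(suggest_actions("Fancy dinner at 7:30pm? Menu at www.example.com"))
# -> [('7:30pm', 'Add a reminder for this time'), ('www.example.com', 'Open this link')]
```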
Google’s reasons for holding back on a talking assistant basically boil down to perfectionism. Amit Singhal, Google’s senior vice president, had this to say on the topic:
“One of the challenges with computer-based personalities is that human-based interactions are far more nuanced than can be encoded in an algorithm today.
“I deeply believe that we have to be very sensitive to human interaction, so we won’t do this lightly… And I think that technology will take a long time to develop.”
Despite this, in terms of machine learning and AI more generally, Google have long been at the forefront, with their Google Brain deep learning project, and their semi-secret, ominously-named Google X facility housing some incredibly out-there projects.
Google have spoken about using machine learning technology in some of their more outlandish ventures, such as the self-driving car, as well as in their more obviously practical enterprises like speech recognition.
Interestingly, it is Apple who have been oddly slow on the uptake in terms of fully developing Siri beyond a basic program that does little more than translate voice into text and search Google.
Until now, that is…
Enter Apple
It was recently revealed that Apple are currently in the process of hiring 86 full-time artificial intelligence experts, signalling a pretty solid drive in the direction of advanced machine learning. They’ve also purchased two AI-focused start-up companies.
The first of these is Perceptio, a company whose goal, according to reporters at Bloomberg, is “to develop techniques to run AI image-classification systems on smartphones, without having to draw from large external repositories of data”.
Developing basic machine learning capabilities often starts with feeding in huge amounts of data, which is used to develop algorithms that the computers can then build on and make more complex themselves.
A basic example of this is the development of Google’s Panda update back in 2011. Panda was designed to make Google’s ranking algorithms increasingly accurate at rating the quality of a site. To develop it, Google started by sitting down loads of human evaluators and getting them to rate thousands of websites themselves. That data was then fed into their computers, and from it they developed algorithms that, in theory, allowed the computers to evaluate websites in a similar way in the future. (This is a vast simplification of how Panda was developed, but you get the idea.)
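For the sake of illustration, here’s a tiny Python sketch of that general pattern: humans label some examples, a model is fitted to their judgements, and the model then rates examples it has never seen. The quality signals, numbers and model below are entirely made up; this is the textbook supervised-learning recipe, not anything resembling Panda itself.

```python
# A toy illustration of the "humans label, machine learns" pattern described above.
# Everything here is hypothetical and bears no resemblance to how Panda actually works.
from sklearn.ensemble import RandomForestClassifier

# Each row describes a website with a few invented quality signals:
# [spelling errors per page, ad-to-content ratio, proportion of original content]
sites = [
    [0.1, 0.05, 0.9],   # well written, few ads, mostly original
    [2.5, 0.60, 0.2],   # sloppy, ad-heavy, thin content
    [0.3, 0.10, 0.8],
    [3.1, 0.70, 0.1],
]

# The human evaluators' verdicts for those same sites: 1 = high quality, 0 = low quality.
human_ratings = [1, 0, 1, 0]

# Fit a model to the human judgements...
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(sites, human_ratings)

# ...and let it rate a site the humans never looked at.
new_site = [[1.8, 0.45, 0.3]]
print(model.predict(new_site))  # e.g. [0], i.e. predicted low quality
```

The real process involves far more signals and vastly more pages, but the labelling-then-learning shape of it is the same.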

An artist’s impression of one of Google’s employees after a long day evaluating websites
Now, Apple’s acquisition of Perceptio, and their use of its technology, is particularly relevant in the context of the privacy debate between them and Google.
Google’s Now on Tap feature requires users to agree to submit vast amounts of their personal data in order to enjoy full functionality. “If Google doesn’t know where my office is…it can’t tell me ‘there is traffic – please leave now’” explains Singhal.
Now, this kind of data sharing is at odds with Apple’s strict privacy policy: they make a big deal about not storing this kind of data (like users’ location history) systematically, instead anonymising everything and analysing it on an individual basis. The problem is, or rather was, that this made developing AI capabilities rather difficult, since Apple did not have a huge bank of data to aggregate and draw conclusions from.
“They want to make a phone that responds to you very quickly without knowledge of the rest of the world. It’s harder to do that” said Joseph Gonzalez of machine learning start-up Dato.
Now, with Perceptio on board, that looks set to change…
Join us in part 2, when we look at digital brain simulations, the future of SEO, and an attempt to answer Philip K. Dick’s question: ‘Do androids dream of electric sheep?’ (No, it turns out they dream of squiggly lines and lots of dogs.)