- Artificial intelligence (AI) is already a big part of our everyday lives through search and other web technologies.
- Companies like Microsoft and Google are pouring lots of time, effort, and funds into furthering AI research.
- There will always be people who try to manipulate AI in unexpected ways.
- Nonetheless, the promise of search-based AI as a useful tool will inspire developers to continue improving it.
If there is any redeeming value to be gleaned from the meteoric crash of Tay.ai – Microsoft’s “teenage” chatbot that started spewing racist, sexist, and homophobic messages on Twitter after mere hours of operation – it’s that artificial intelligence (AI) is not yet mature enough to take over the world. We can sigh with relief that no HAL, Skynet, or Cylons have been invented…yet.
Looking past the spectacle of it all, however, there is a more practical takeaway as well. Experiments like Tay are just the tip of the iceberg when it comes to the role that AI is already playing in our everyday lives, especially when it comes to the results presented to us by search engines. Given how often people search, and the things they search for, it’s sobering to think that the technology behind it can, perhaps, be as easily manipulated – and corrupted – as Tay.
Tay and Xiaoice
Initial media announcements about Tay compared the experiment to another chatbot that Microsoft released to a Chinese audience in 2014. Introduced on the Twitter-like microblogging platform Weibo, Xiaoice (which translates to “Little Bing”) has largely been considered a success, with tens of millions of registered users interacting with the chatbot multiple times per day. One New York Times article from last July highlighted the fondness of many users toward Xiaoice – a number of whom have even declared their love for the bot.
Which raises the question: Where did Tay go wrong?
Two days after setting Tay loose on the world, Microsoft posted its “learnings” on the company’s official blog. According to the post, the biggest problem was that Tay was overwhelmed by nefarious evildoers.
Unfortunately, in the first 24 hours of coming online, a coordinated attack by a subset of people exploited a vulnerability in Tay. Although we had prepared for many types of abuses of the system, we had made a critical oversight for this specific attack.
Without knowing the technical details behind the alleged vulnerability, it’s difficult to know how much of a mistake the company actually made here. Given that the chatbot was accessible through Twitter, the “coordinated attack” has more of a social engineering feel – less hacking into a bank account and more convincing a customer service rep over the phone that you really are the account owner.
Given that humans are fooled by such techniques all the time, the possibility that an AI was socially engineered leads to perhaps one of the scariest (and most interesting) conclusions of all:
Tay might have performed exactly as intended.
Project Oxford and Bing Predicts
But what does all this have to do with search?
For one thing, the Bing team was involved in the development of Tay. Undoubtedly, one of the many things the team hoped to learn from user interactions with Tay was how their own search algorithms could provide more relevant search results using natural language.
Microsoft’s AI research isn’t limited to chatbot experiments. The company has at least two other AI projects that are paving the way for the future of search and, more generally, human interaction with technology.
The first of these is Project Oxford, a suite of APIs created by Microsoft to help developers access AI. The project features AI-driven apps that include everything from the functional Bing image search and facial recognition software to more whimsical programs like “MyMoustache,” a game that lets users rate their upper-lip facial hair (or add some, if they don’t have any…).
The second project is Bing Predicts, a service that uses Bing search results, social media, and other data to predict various events – including everything from the presidential race to the winners of TV contest shows like Dancing with the Stars.
On March 24, in the midst of some of Tay’s most heinous tweets, Bing Predicts was busy calling the results of the NCAA Basketball Tournament games from the Sweet Sixteen round onward. As it turns out, for the Men’s Division, Bing broke even on its predictions for the Sweet Sixteen and called a measly one out of four games correctly for the Elite Eight (although it did better in earlier rounds). For the Women’s Division, Bing’s predictions were slightly worse for the corresponding rounds.
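To put those Men’s Division figures in concrete terms, here is a minimal sketch of the accuracy math. The per-round game counts are standard bracket sizes; the correct-call tallies (“broke even” on eight Sweet Sixteen games, one of four Elite Eight games) are taken from the results above.

```python
# Bing Predicts, Men's Division: (round, games in round, correct calls).
# Correct-call counts are the figures reported above, not official data.
rounds = [
    ("Sweet Sixteen", 8, 4),
    ("Elite Eight", 4, 1),
]

for name, games, correct in rounds:
    print(f"{name}: {correct}/{games} = {correct / games:.0%}")

# Combined accuracy across both rounds.
total_correct = sum(c for _, _, c in rounds)
total_games = sum(g for _, g, _ in rounds)
print(f"Combined: {total_correct}/{total_games} = {total_correct / total_games:.1%}")
# → Combined: 5/12 = 41.7%
```

In other words, over those two rounds a coin flip would have been expected to do about as well.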
That’s not to say I did any better in my own brackets. (In fairness, who expected Syracuse to get to the Final Four? Go ‘Cuse!) But it does demonstrate that there is still a long way to go if the promise of AI is to provide better results than good, old-fashioned natural intelligence does.
RankBrain: Google Search’s AI
Ironically enough, on the same day that Microsoft released Tay, search strategist Andrey Lipattsev reaffirmed in a Google Q&A that RankBrain is one of the top three factors in determining search results. This wasn’t a new revelation, but it did validate the idea that a greater part of Google’s search algorithm is being moderated by something well beyond most of our understanding.
As Lipattsev explains about a half-hour into the Q&A:
We are trying to get better at understanding natural language and at applying machine learning, and saying, “So what are the meanings behind the inputs?”
RankBrain is the AI engine driving those attempts to better understand natural language. Lipattsev indicates that this effort is especially important as more and more people start using voice search, which naturally includes more stop words and phrases that may supply semantic meaning.
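A toy illustration of why those stop words matter (this is not Google’s algorithm, just a sketch of naive keyword matching): stripping common stop words from a conversational voice query can throw away the very words that carry the meaning.

```python
# A hypothetical, minimal stop-word filter of the kind an old-style
# keyword matcher might use. The word list is illustrative only.
STOP_WORDS = {"a", "an", "the", "to", "of", "in", "is", "how", "do", "i", "without"}

def strip_stop_words(query: str) -> str:
    """Return the query with stop words removed."""
    return " ".join(w for w in query.lower().split() if w not in STOP_WORDS)

print(strip_stop_words("how do I change a tire without a jack"))
# → "change tire jack"
```

The filtered query reads like a request for instructions that *use* a jack; the original asks how to manage *without* one. Understanding natural language means recognizing that “without” flips the intent, which is exactly the kind of semantic signal a system like RankBrain is meant to capture.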
Unlike Bing’s public AI failures, however, Google’s RankBrain works largely in the background, where it can be tweaked regularly and where potential snafus can be detected early. What we don’t know, however, is to what extent Google actually is achieving a better understanding of natural language. Nor do we know to what extent it may be subject to manipulation or coordinated attacks, like Tay.
AI Search in the Future
Success is often driven by failure. I expect that we’ll continue to see experiments like Tay from Microsoft and others, and those experiments will likely become more targeted at specific applications. Google is almost certainly using its machine learning technologies beyond search to better map out routes in Google Maps, identify spam in Gmail, and so forth.
Longer-term possibilities are that the AI being developed now for search could be used in some of the “moonshot” projects run by Alphabet, Google’s parent company, and other programs like them. Imagine, for example, Calico using search AI to absorb every medical paper ever written, giving it the ability to find previously missed connections and discover new ways to fight age-related diseases.
It’s hard to say when that might happen, but it’s probably not far away – and it’s not limited to technology and science, either. A Japanese AI recently wrote a novel that passed the first round of a literary competition, and there’s already a program running around that can write its own Shakespearean sonnets (and other forms of verse). As a writer, I can’t help but wonder when we’ll witness AIs winning Pulitzer Prizes and Nobel Prizes for Literature – as well as other categories.
The one thing I can say for sure is that, no matter how good AI gets, there always will be those who will prod the machines and find the weak spots. A lot of it will be relatively innocent fun, but there will likely be plenty of mistakes along the way as well, from Tay-level embarrassments to more subtle manipulations designed to rank website content better.
I just hope nobody gets the bright idea to start a game of Global Thermonuclear War.