The Telegraph – Facebook shut down a pair of its artificial intelligence robots after they invented their own, creepy language.
Researchers at Facebook Artificial Intelligence Research built a chatbot earlier this year that was meant to learn how to negotiate by mimicking human trading and bartering.
But when the social network paired two of the programs, nicknamed Alice and Bob, to trade against each other, they started to learn their own bizarre form of communication.
The chatbot conversation “led to divergence from human language as the agents developed their own language for negotiating,” the researchers said.
“Agents will drift off understandable language and invent codewords for themselves,” FAIR visiting researcher Dhruv Batra said. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands.”
So this popped up on my timeline today and dammit if this isn’t some wild information to start your day off. Before you go armoring up at your nearest gun shop for the apocalypse, here was another source breaking down the reality of it.
Gizmodo – The reality is somewhat more prosaic. A few weeks ago, FastCo Design did report on a Facebook effort to develop a “generative adversarial network” for the purpose of developing negotiation software.
The two bots quoted in the above passage were designed, as explained in a Facebook Artificial Intelligence Research unit blog post in June, for the purpose of showing it is “possible for dialog agents with differing goals (implemented as end-to-end-trained neural networks) to engage in start-to-finish negotiations with other bots or people while arriving at common decisions or outcomes.”
The bots were never doing anything more nefarious than discussing with each other how to divide an array of given items (represented in the user interface as innocuous objects like books, hats, and balls) into a mutually agreeable split.
The intent was to develop a chatbot which could learn from human interaction to negotiate deals with an end user so fluently said user would not realize they are talking with a robot, which FAIR said was a success.
When Facebook directed two of these semi-intelligent bots to talk to each other, FastCo reported, the programmers realized they had made an error by not incentivizing the chatbots to communicate according to human-comprehensible rules of the English language. In their attempts to learn from each other, the bots thus began chatting back and forth in a derived shorthand—but while it might look creepy, that’s all it was.
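To make that "missing incentive" idea concrete, here's a toy sketch (not FAIR's actual code; the items, utterances, and scoring are all invented for illustration). The point: if an agent's reward is only task points, a repeated-word shorthand like "the the the the the" scores exactly as well as a readable sentence, so nothing pushes the bots to stay in English. Add even a small "sounds like English" bonus and readable messages win.

```python
ITEMS = {"books": 3, "hats": 2, "balls": 1}

def task_reward(my_share, my_values):
    """Points an agent earns from the items it ends up with."""
    return sum(my_values[item] * count for item, count in my_share.items())

def english_likelihood(utterance):
    """Stand-in for a language-model score: repeated tokens score poorly."""
    tokens = utterance.split()
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def total_reward(my_share, my_values, utterance, language_weight):
    """Task reward plus an optional bonus for human-readable messages."""
    return task_reward(my_share, my_values) + language_weight * english_likelihood(utterance)

values = {"books": 2, "hats": 1, "balls": 0}
share = {"books": 2, "hats": 1, "balls": 0}
drifted = "the the the the the"            # shorthand: "the" x5 = five copies
readable = "i want two books and a hat"

# With language_weight = 0, shorthand and English earn identical reward,
# so training is free to drift toward whatever codewords work.
assert total_reward(share, values, drifted, 0) == total_reward(share, values, readable, 0)
# With a positive weight, the readable message strictly wins.
assert total_reward(share, values, readable, 10) > total_reward(share, values, drifted, 10)
```

Obviously the real system learned all this end-to-end with neural networks, but the imbalance is the same: optimize only for the deal, and the language is fair game.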
So in a nutshell, because these bots had no incentive to keep communicating in English with one another, they went all Westworld, learned from their own interactions, and created their own form of language. We're fine, just like I thought…
The author did stress that this was human error, and that the only way something like this could happen again is if people were negligent enough not to catch it (and that hopefully these types of AI aren't plugged into more dangerous situations with weapons and shit).
Definitely a crazy concept to wrap your head around, though. Think about how advanced a society we've become, able to create technology that can negotiate and barter with real people, and at the same time able to jeopardize humanity with simple errors on the front-end.
On second thought, why don’t we just drop this whole AI concept – you know? Humans still need jobs. I don’t really need robots replacing real people.