How Seinfeld is helping AI understand your sarcasm
Have you ever asked Siri its opinion on the film Inception? Or whether it knows a good place to hide a dead body?
‘Inception is about dreaming about dreaming about dreaming about dreaming about something or other. I fell asleep’ is one of its stock answers.
‘What, again?’ is the other.
The answers are pre-programmed quips rather than genuine reactions capable of passing the Turing test, but they hint at one of the ultimate aims of artificial intelligence – creating machines we can chat with, and that understand irony.
Lotem Peled, based in Israel, is a data scientist at Gong, which coaches sales teams by recording, transcribing, and analysing calls using AI-powered conversation intelligence.
She has designed neural networks that aim to generate human-like speech patterns using varied data sources such as Seinfeld monologues.
Ultimately, the programme, which started as academic research at the Technion-Israel Institute of Technology, should be able to detect, and react to, the lowest form of wit – human sarcasm.
Peled explained: “When we need to build a language model we need our system to learn from conversation-like data, where people interrupt each other, mumble and say ‘um’ and ‘mmm’.
“Movie transcripts and Seinfeld monologues include these elements, and they are probably as human as you can go. They massively improved our results.
“I hope that my research is a stepping stone for making machines a bit more human, and helping them understand our language in a more refined way.
“Subjects like non-literal communication – analogies, humour, irony, sarcasm – are an open-ended question in the world of AI, and I hope that my sarcasm interpretation will encourage researchers to address some of these questions.”
Her SIGN (Sarcasm Sentimental Interpretation GeNerator) system generates a non-sarcastic interpretation of a sarcastic comment.
She added: “Currently, as far as I know, while [personal assistant bots like Siri] sometimes answer sarcastically (or humorously/ironically), they aren’t able to understand sarcastic commands from their users.
“While in this setting I don’t feel there is a great need for this technology with these programs, seeing that consumers will most likely speak to their personal bot in a demanding manner rather than in a sarcastic tone.
“On the other hand, AI technology, and our relationship with this technology, is ever-changing, so if the time comes when this is necessary, this technology could be applicable across the board.”
Lotem explained the system, which Gong is considering using, could be useful for any field with spoken sarcasm. During a sales conversation, if a customer expresses an opinion sarcastically, an interpretation system may help reveal their true intention.
On the other hand, it can help us figure out if sarcastic remarks on the sales rep’s side encourage or discourage closing deals.
But aside from the sales industry, companies that “truly rely on a customer review system, such as Yelp and Amazon, can generate accurate ratings and feedback for the analysts these companies have in place”.
Gong in action
Lotem said: “On a consumer level, the Sarcasm SIGN research will help people who have trouble understanding non-literal communication such as someone with autism or Asperger’s syndrome.
“Also, similar to the benefits on a business level (Yelp and Amazon), it will also help the consumer decipher a more accurate sense of a product/service review.”
Describing the SIGN technology, she added: “Given the sarcastic expression: ‘The new fast and furious movie is the best, not’, our system would generate the honest interpretation: ‘The new fast and furious movie is not good’.
“This is achieved by training our system on a database of sarcastic tweets (marked with #sarcasm). We target sentiment words (‘best’, ‘awesome’, etc.) because that’s usually where the sarcasm lies.”
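Peled’s example can be caricatured in a few lines of Python. The sketch below is not her system – SIGN is a neural model trained on #sarcasm tweets, as she describes – but a hand-written stand-in for the core intuition: sarcasm usually hinges on sentiment words, so substituting those words produces an “honest” reading. The `FLIPS` table and `interpret` function are invented for illustration.

```python
# Toy illustration of the sentiment-word intuition behind Sarcasm SIGN.
# The real system learns these substitutions from data; the FLIPS table
# here is a made-up stand-in.

FLIPS = {
    "the best": "not good",
    "awesome": "not awesome",
    "great": "not great",
}

def interpret(text: str) -> str:
    """Return a literal ('honest') reading of a sarcastic sentence."""
    s = text
    # Strip a trailing ", not" sarcasm marker; the flip below already
    # encodes the negation.
    if s.lower().endswith(", not"):
        s = s[:-5]
    low = s.lower()
    for positive, negative in FLIPS.items():
        idx = low.find(positive)
        if idx != -1:
            s = s[:idx] + negative + s[idx + len(positive):]
            break
    return s

print(interpret("The new fast and furious movie is the best, not"))
# -> The new fast and furious movie is not good
```

A dictionary lookup obviously cannot cope with context or novel phrasing, which is exactly why the real system is trained as a machine-translation model from sarcastic to non-sarcastic text rather than built from fixed rules.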
The Sarcasm SIGN paper was accepted to ACL 2017, the annual conference of the Association for Computational Linguistics and one of the largest natural language processing conferences, which will take place in August 2017 in Vancouver, Canada. Data and code will become publicly available in the following months.