A video capture and analytics platform has added three new emotion recognition features powered by artificial intelligence.

Liverpool-based LivingLens can now recognise and catalogue facial emotions, the tone of speech, and inanimate objects.

The newly introduced features are made possible by recent advances in AI. The first identifies key landmarks and expressions on the human face; a collection of deep learning algorithms then analyses this information to interpret the emotion of the person on screen.
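The landmarks-to-emotion pipeline described above could be sketched as follows. This is a hypothetical, rule-based stand-in: LivingLens uses deep learning models, and every function name, landmark label, and threshold here is invented purely to illustrate the shape of such a pipeline.

```python
# Hypothetical sketch: classify an expression from facial landmark
# coordinates. A production system (such as the one described) would
# use trained deep learning models; this toy rule-based version only
# illustrates the landmarks-to-emotion pipeline shape. All names and
# thresholds are invented.

def classify_expression(landmarks):
    """landmarks: dict of named (x, y) points, normalised to 0..1."""
    left, right = landmarks["mouth_left"], landmarks["mouth_right"]
    top, bottom = landmarks["mouth_top"], landmarks["mouth_bottom"]
    corner_y = (left[1] + right[1]) / 2          # avg height of mouth corners
    centre_y = (top[1] + bottom[1]) / 2          # height of mouth centre
    openness = bottom[1] - top[1]                # how far the mouth is open
    if openness > 0.08:
        return "surprised"
    if corner_y < centre_y - 0.02:               # corners raised above centre
        return "happy"
    if corner_y > centre_y + 0.02:               # corners drooping below centre
        return "sad"
    return "neutral"

smile = {
    "mouth_left": (0.35, 0.68), "mouth_right": (0.65, 0.68),
    "mouth_top": (0.50, 0.70), "mouth_bottom": (0.50, 0.74),
}
print(classify_expression(smile))  # corners sit above the mouth centre
```

In practice the hand-written rules would be replaced by a classifier trained on labelled faces, but the input (landmark coordinates) and output (an emotion label) stay the same.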

The company has also introduced a new 'tonal recognition' feature, which attempts to understand and classify the tone of a speaker's voice.
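Tonal recognition of this kind typically maps acoustic features of an utterance to a tone label. The sketch below is hypothetical: the feature names (mean pitch, pitch variance, energy), the labels, and the thresholds are all invented for illustration, where a real system would learn them from labelled speech.

```python
# Hypothetical sketch of 'tonal recognition': map coarse prosodic
# features of an utterance to a tone label. The thresholds and labels
# are invented; a production system would learn these from labelled
# speech with deep learning rather than hand-written rules.

def classify_tone(mean_pitch_hz, pitch_variance, energy):
    """All inputs normalised except pitch, given in Hz."""
    if energy > 0.8 and pitch_variance > 0.5:
        return "excited"           # loud and highly varied pitch
    if mean_pitch_hz < 120 and energy < 0.3:
        return "subdued"           # low, quiet delivery
    if pitch_variance < 0.2:
        return "flat"              # monotone delivery
    return "neutral"

print(classify_tone(mean_pitch_hz=220, pitch_variance=0.6, energy=0.9))
```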

The final addition recognises inanimate objects to add context. The objects in a scene help to determine where consumers are, whether in a shop, at an airport or in a kitchen, for example, and therefore what they are doing.
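The object-to-context step could be sketched as a simple scoring of detected objects against scene cues, mirroring the article's shop, airport, and kitchen examples. The cue lists and function names below are invented for illustration; they are not LivingLens' actual method.

```python
# Hypothetical sketch: infer a consumer's likely location from the
# inanimate objects detected in a video frame. The cue lists are
# invented examples, mirroring the shop/airport/kitchen scenarios.

SCENE_CUES = {
    "shop": {"shelf", "till", "price tag", "shopping basket"},
    "airport": {"suitcase", "boarding pass", "departure board"},
    "kitchen": {"kettle", "saucepan", "fridge", "chopping board"},
}

def infer_scene(detected_objects):
    detected = set(detected_objects)
    # Score each scene by how many of its cue objects were detected.
    scores = {scene: len(cues & detected)
              for scene, cues in SCENE_CUES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(infer_scene(["kettle", "fridge", "mug"]))  # two kitchen cues match
```

A real pipeline would feed the output of an object detector into a step like this, so that the inferred location enriches the emotional and tonal analysis with context.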

The company hopes that these new AI-powered features will allow large-scale studies of human emotion and communication without the need for human interpretation.

“We are delighted with the latest additions to our existing suite of capabilities, which provide a lens into the all-important emotions of consumers and give additional context to consumers’ content through their surroundings,” said Carl Wong, LivingLens CEO.

“Historically, video has been challenging to work with, but we are seeing the use of video expand as technology continues to develop and improve, providing high levels of accuracy which previously would have required human intervention.

“Where once video was limited to small scale studies, it’s exciting to see projects with large volumes which simply weren’t practical before.”

The company has provided video data mining technology for leading brands such as Unilever, Vine and Carphone Warehouse.