Emotion detection from user engagement has been a widely debated topic in recent years.
With the advent of new technologies, many tech companies and academic researchers have refined their thinking and approach to this hard engineering problem.
Apple’s research team is not far behind. The Cupertino giant has been working on this problem for years, as evidenced by a recent patent filing approved this week.
Recent successes in automated object and face recognition, partly fueled by deep learning artificial neural network (ANN) architectures, have advanced biometric research platforms and, to some extent, driven a resurgence of interest in Artificial Intelligence (AI).
Related reading:
- Apple exploring strain gauge sensors for Health applications
- The best Apple Watch models and features for seniors in 2023
- How to add your vision prescription to your iPhone’s Apple Health app
- Apple’s speech recognition algorithms could help people who stutter
Emotion detection is a multi-faceted process
Within this context, it turns out that automating emotion recognition is far from straightforward, with several challenges arising for both science (e.g., methodology underpinned by psychology) and technology (e.g., the iMotions biometric research platform).
Research has shown that skin conductance is also a sensitive autonomic measure of emotional arousal (Boucsein et al., 2012).
The higher the arousal, the higher the skin conductance for both positive (“happy” or “joyful”) and negative (“threatening” or “saddening”) stimuli. Accessory and smartwatch makers have taken note, adding galvanic skin response (GSR) sensors to their wearables.
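To make that relationship concrete, here is a minimal, illustrative Swift sketch of how raw skin-conductance readings could be mapped to an arousal score. The `ArousalEstimator` type, the baseline approach, and the numbers are assumptions for illustration only, not how any shipping wearable works.

```swift
import Foundation

// Illustrative only: a toy mapping from galvanic skin response (GSR) readings,
// in microsiemens, to a 0...1 arousal score. Real wearables use per-user
// baselining, artifact rejection, and tonic/phasic decomposition; none of
// that is modeled here.
struct ArousalEstimator {
    /// Resting skin conductance measured while the wearer is calm.
    let restingConductance: Double

    /// Higher conductance relative to the resting baseline maps to
    /// higher estimated arousal, capped at 1.0.
    func arousal(forConductance microsiemens: Double) -> Double {
        let delta = max(0, microsiemens - restingConductance)
        return min(1.0, delta / restingConductance)
    }
}

let estimator = ArousalEstimator(restingConductance: 2.0)
print(estimator.arousal(forConductance: 3.5)) // 0.75 (elevated arousal)
```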
Apple’s latest research patent, ‘Emotion detection,’ originally submitted in 2019, explores the feature using convolutional neural networks (CNNs). The main idea is to run various 2D images through CNNs to detect the emotions expressed in each image.
The patent highlights machine learning-based techniques that can detect emotions and help developers build appropriate visionOS apps for use in AR/VR environments.
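To illustrate the general approach (not Apple’s actual implementation), here is a minimal Swift sketch that runs a 2D face image through a Core ML image classifier via the Vision framework. The `EmotionClassifier` model and its emotion labels are hypothetical placeholders, not anything described in the patent.

```swift
import UIKit
import CoreML
import Vision

// Hypothetical example: "EmotionClassifier" stands in for a Core ML CNN
// trained to label face images with emotions such as "happy" or "sad".
// Neither the model nor its labels come from Apple's patent.
func detectEmotion(in image: UIImage, completion: @escaping (String?) -> Void) {
    guard let cgImage = image.cgImage,
          let coreMLModel = try? EmotionClassifier(configuration: MLModelConfiguration()).model,
          let visionModel = try? VNCoreMLModel(for: coreMLModel) else {
        completion(nil)
        return
    }

    // Wrap the CNN in a Vision request; Vision handles resizing and cropping
    // the input image to whatever the model expects.
    let request = VNCoreMLRequest(model: visionModel) { request, _ in
        // Report the top-ranked emotion label, if any.
        let top = (request.results as? [VNClassificationObservation])?.first
        completion(top?.identifier)
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    DispatchQueue.global(qos: .userInitiated).async {
        try? handler.perform([request])
    }
}
```

A production pipeline would typically add face detection and temporal smoothing on top of a single-frame classifier like this one.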
Apple also addresses concerns that may arise around user privacy and government regulation in this area.
For instance, in the US, collection of or access to certain health data may be governed by federal and/or state laws, such as the Health Insurance Portability and Accountability Act (HIPAA).
In contrast, health data in other countries may be subject to other regulations and policies and should be handled accordingly.
Hence, different privacy practices must be maintained in each country for different types of personal data, which complicates many facets of this feature.
Apple has acquired a few companies to help with some of the engineering challenges in this area. The earliest acquisition in this space was Polar Rose, a Swedish company purchased in 2010.
Apple’s computer vision-based application build-out began to take more serious shape in 2016 and 2017, as evidenced by numerous acquisitions.
The company acquired Emotient Inc., a startup that used AI to read people’s emotions, in 2016. It then bought RealFace, an Israeli startup, in 2017 to help build out components for Face ID, which has become a very useful feature in the Apple ecosystem.
When it comes to health, this technology can be powerful. In 2017, researchers showed how a system that detects facial expressions could help in caring for dementia patients experiencing pain.
With the advent of more powerful neural networks and associated chipsets, we are not far from seeing emotion recognition use cases on Apple’s products.