This innovation, centred around the personalized skin-integrated facial interface (PSiFI) system, is poised to transform industries by enabling next-generation wearable systems to offer services based on emotional cues.
Researchers in the Department of Material Science and Engineering at UNIST have developed a technology capable of recognizing human emotions in real time. This innovation is set to revolutionize several industries, particularly the development of next-generation wearable systems that offer services based on emotional cues. Accurately understanding and extracting emotional information has been a longstanding challenge because human emotions, moods, and feelings are abstract and ambiguous. To tackle this challenge, the research team devised a multi-modal human emotion recognition system that integrates verbal and non-verbal expression data to make effective use of comprehensive emotional information.
Central to this system is the personalized skin-integrated facial interface (PSiFI), which is self-powered, flexible, stretchable, and transparent. It features a unique bidirectional triboelectric strain and vibration sensor, enabling simultaneous sensing and integration of verbal and non-verbal expression data. The system also incorporates a data processing circuit for wireless data transfer, facilitating real-time emotion recognition.
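As an illustration only, the sketch below shows one way the two sensor channels, facial-muscle strain and vocal-cord vibration, could be packaged into timestamped frames for wireless transfer. The frame layout, field names, and serialization format are assumptions made for this sketch, not details of the published circuit.

```python
# Hypothetical framing of the two PSiFI channels for wireless transfer.
# Field names, packet layout, and units are illustrative assumptions.
import struct
import time
from dataclasses import dataclass

@dataclass
class SensorFrame:
    timestamp: float   # seconds since epoch
    strain: float      # facial-muscle strain channel (arbitrary units)
    vibration: float   # vocal-cord vibration channel (arbitrary units)

def pack_frame(frame: SensorFrame) -> bytes:
    """Serialize one frame as three little-endian doubles for radio transfer."""
    return struct.pack("<3d", frame.timestamp, frame.strain, frame.vibration)

def unpack_frame(payload: bytes) -> SensorFrame:
    """Inverse of pack_frame, run on the receiving side."""
    ts, strain, vib = struct.unpack("<3d", payload)
    return SensorFrame(ts, strain, vib)

# Example: round-trip one synthetic reading.
frame = SensorFrame(time.time(), strain=0.12, vibration=0.85)
assert unpack_frame(pack_frame(frame)) == frame
```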
The technology uses machine learning algorithms to perform accurate, real-time human emotion recognition, even when individuals wear masks. The system has already been successfully deployed in a digital concierge application within a virtual reality (VR) environment. The technology operates on the principle of “friction charging,” in which two materials acquire opposite positive and negative charges when they rub together and separate. The system is self-powered, requiring no external power source or complex measuring devices for data acquisition.
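For context, the textbook contact-separation model of a triboelectric generator (a simplified general relation, not the PSiFI-specific design) relates the open-circuit voltage to the charge built up by friction:

```latex
% Standard contact-separation triboelectric model (simplified):
% V_oc   : open-circuit voltage
% \sigma : triboelectric surface charge density produced by friction
% x(t)   : gap distance between the two charged surfaces
% \varepsilon_0 : vacuum permittivity
V_{\mathrm{oc}}(t) = \frac{\sigma \, x(t)}{\varepsilon_0}
```

In this model, larger deformations produce larger voltages, which is why such a sensor can transduce facial muscle deformation and vocal-cord vibration directly into electrical signals without an external power source.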
Professor Kim remarked, “Based on these technologies, we have developed a skin-integrated face interface (PSiFI) system that can be customized for individuals.” The team utilized a semi-curing technique to manufacture a transparent conductor for the friction-charging electrodes. A personalized mask was also created using a multi-angle shooting technique, combining flexibility, elasticity, and transparency. The research team successfully integrated the detection of facial muscle deformation and vocal cord vibrations, enabling real-time emotion recognition. The system’s capabilities were demonstrated in a virtual reality “digital concierge” application, where customized services based on users’ emotions were provided.
The team stated that, with the developed system, real-time emotion recognition can be implemented with only a few learning steps and without complex measurement equipment, opening up possibilities for portable emotion recognition devices and next-generation emotion-based digital platform services. They conducted real-time emotion recognition experiments, collecting multimodal data such as facial muscle deformation and voice. The system exhibited high emotion recognition accuracy with minimal training, and its wireless, customizable nature ensures wearability and convenience.
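To make the “few learning steps” idea concrete, here is a minimal sketch of a lightweight classifier trained on a small set of windowed features from the two channels. The feature set, emotion labels, model choice, and data are illustrative assumptions, not the team’s actual pipeline.

```python
# Minimal sketch: map windowed sensor features to emotion labels with a
# small model and little training data. All names and data are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
EMOTIONS = ["neutral", "happy", "sad", "angry"]  # hypothetical label set

# Synthetic training data: each row is (mean strain, strain variance,
# mean vibration, vibration variance) over a short time window.
X_train = rng.normal(size=(80, 4))
y_train = rng.integers(0, len(EMOTIONS), size=80)

clf = SVC(kernel="rbf")  # small models suit low-data, on-device training
clf.fit(X_train, y_train)

window = rng.normal(size=(1, 4))              # one new feature window
print(EMOTIONS[int(clf.predict(window)[0])])  # e.g. "happy"
```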
The system’s application in VR environments allows for its use as a “digital concierge” in various settings, including smart homes, private movie theaters, and smart offices. The system’s ability to identify individual emotions in different situations enables the provision of personalized recommendations for music, movies, and books. The researchers emphasized, “For effective interaction between humans and machines, human-machine interface (HMI) devices must be capable of collecting diverse data types and handling complex integrated information. This study exemplifies the potential of using emotions, which are complex forms of human information, in next-generation wearable systems.”