Silent Speech Interface Using Graphene-Based Textile Strain Sensors and AI

In an article published in npj Flexible Electronics (a Nature Portfolio journal), researchers introduced a novel silent speech interface (SSI) that combines few-layer graphene (FLG) strain-sensing technology with artificial intelligence (AI)-based self-adaptation.

a Confusion matrix showing the classification results for the 10 most frequently used verbs and 10 most frequently used nouns, indicating the model’s capability in everyday use. b Confusion matrix for the classification of 10 words that are easily confused in terms of vowels, consonants, or stress patterns, demonstrating the model’s ability to discern subtle differences. c Relevance-Class Activation Mapping (R-CAM) is utilized to highlight the signal areas the model focuses on during word classification. d Confusion matrix for the classification of 5 long words read at varying speeds, showcasing the model’s robustness to different reading speeds. e Visualization of the long word “Cambridge” read at three different speeds. Image Credit: https://www.nature.com/articles/s41528-024-00315-1

Demonstrated with a biocompatible textile strain sensor embedded into a smart choker, the system achieved high accuracy, computational efficiency, and decoding speed while remaining comfortable to wear. Ordered through cracks in the graphene coating gave the strain gauge superior sensitivity, enabling efficient neural network-based speech decoding.

Background

SSIs have emerged as crucial tools for facilitating communication in environments where traditional verbal communication is impractical or impossible, such as noisy surroundings or scenarios involving speech impairment due to medical conditions. Previous SSI research has primarily focused on decoding speech from brain activity using techniques such as electroencephalography (EEG) and invasive electrocorticography (ECoG), or from lip movements through computer vision-based methods.

However, these approaches face practical challenges in terms of wearability, invasiveness, and computational complexity. Noninvasive alternatives, such as electromyography (EMG) and strain sensors, show promise for wearable applications but often compromise user comfort, signal accuracy, and computational efficiency.

This paper introduced a novel approach to wearable SSI by combining an ultrasensitive textile-based strain sensor with a specially designed neural network. The sensor's unique design, featuring ordered cracks in a graphene-coated textile, enhanced sensitivity, allowing for the capture of detailed speech signals.

This was complemented by a lightweight neural network architecture optimized for efficient speech recognition, bridging the gap between user comfort and technical effectiveness. By addressing the limitations of existing wearable SSI systems, this research aimed to pave the way for seamless and natural silent communication in diverse settings.

Fabrication and Characterization of Graphene-Based Textile Strain Sensor

The research utilized synthetic graphite with a particle size of 25 μm (TIMREX KS 25), sodium deoxycholate (SDC), sodium carboxymethyl cellulose (CMC-Na), and a textile substrate to fabricate a graphene-based strain sensor. The graphene ink was prepared by liquid-phase exfoliation, involving mixing, exfoliation, and stabilization steps.

Fabrication of the strain sensor on the textile substrate was achieved via screen printing, with ultraviolet (UV) ozone treatment to enhance adhesion. The printing process was repeated to create ordered cracks in the graphene layer, followed by annealing and pre-stretching to optimize sensitivity. Characterization involved atomic force microscopy (AFM) to assess the graphene flakes and scanning electron microscopy (SEM) to examine sensor morphology.

Electromechanical properties were evaluated using a tensile testing machine, while resistance responses were measured under strain. Data acquisition involved affixing the sensor to a choker and using a potentiostat for signal recording. The authors ensured ethical compliance by obtaining approval from the University of Cambridge's Research Ethics Committee and obtaining informed consent from participants.
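
To make the electromechanical characterization concrete, strain-gauge sensitivity is conventionally reported as the gauge factor, GF = (ΔR/R₀)/ε. The sketch below computes it from a single resistance-strain reading; the function name and the numbers are illustrative and do not come from the paper:

```python
def gauge_factor(r0, r, strain):
    """Gauge factor GF = (dR / R0) / strain for one resistance reading."""
    if strain == 0:
        raise ValueError("strain must be non-zero")
    return ((r - r0) / r0) / strain

# Hypothetical reading: baseline 1.0 kOhm, rising to 1.4 kOhm at 0.5% strain
print(gauge_factor(1000.0, 1400.0, 0.005))  # 80.0
```

A linear resistance response, as reported for this sensor, corresponds to a gauge factor that stays roughly constant across the working strain range.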

The software environment for data processing and network training utilized Python, Miniconda, PyTorch, and Apple's Metal Performance Shaders for training acceleration. Noise injection and hyperparameter optimization were integral parts of the training process, ensuring robust model performance.
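
The article does not reproduce the training code, but noise injection typically means perturbing each training signal with random noise so the model tolerates sensor and placement variability. A minimal stdlib-only sketch (the function name and noise level are assumptions, not details from the paper):

```python
import random

def inject_noise(signal, noise_std=0.05, seed=None):
    """Return a copy of a 1D signal with additive Gaussian noise."""
    rng = random.Random(seed)
    return [x + rng.gauss(0.0, noise_std) for x in signal]

# Augment a short strain waveform before feeding it to the network
clean = [0.0, 0.2, 0.5, 0.3, 0.0]
noisy = inject_noise(clean, noise_std=0.05, seed=42)
print(len(noisy) == len(clean))  # True
```

In a real PyTorch pipeline the same idea is applied per batch on tensors, so every epoch sees a slightly different version of each recording.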

Performance and Validation

In the study, a highly sensitive textile strain sensor with ordered cracks was developed to capture nuanced throat micromovements associated with speech. The sensor's ultrahigh sensitivity allowed for the detection of subtle deformations, crucial for distinguishing between words with similar pronunciations. Unlike traditional sensors, the graphene-based design exhibited superior performance within small strain ranges, paving the way for accurate word recognition.

Fabricated through a simple, biocompatible process, the sensor's performance was characterized, demonstrating linear resistance response and excellent stability over prolonged use. With its conformability and durability, the sensor showcased promise for real-world applications.

Furthermore, a lightweight end-to-end neural network was devised for speech recognition, leveraging the sensor's high-density data without compromising computational efficiency. Unlike previous methods, the network's one-dimensional (1D) approach maintained accuracy while minimizing computational demand, ideal for wearable devices.
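
The efficiency of a 1D approach comes from convolving along the time axis only, rather than over a 2D time-frequency image. A minimal valid-mode 1D convolution illustrates the core operation (a generic sketch under that assumption; the paper's actual architecture is not reproduced here):

```python
def conv1d(signal, kernel):
    """Valid-mode 1D convolution (cross-correlation) over a signal."""
    k = len(kernel)
    return [
        sum(signal[i + j] * kernel[j] for j in range(k))
        for i in range(len(signal) - k + 1)
    ]

# A simple moving-average kernel smooths a strain waveform
print(conv1d([1.0, 2.0, 3.0, 4.0], [0.5, 0.5]))  # [1.5, 2.5, 3.5]
```

Stacking such 1D filters with nonlinearities yields a compact network whose cost grows linearly with signal length, which suits on-device inference.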

In real-world scenarios, the system exhibited remarkable accuracy in classifying common words, confusable word pairs, and words spoken at varying speeds. Visualizations confirmed the model's robustness to noise and variations in wear positioning. Moreover, the system demonstrated impressive generalization capabilities, accurately recognizing new users and words with minimal fine-tuning, showcasing its potential for diverse applications.
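
The confusion matrices in the figure summarize per-word classification counts, and overall accuracy is simply the diagonal sum over the total. A small helper shows the calculation, using made-up counts rather than the paper's results:

```python
def accuracy_from_confusion(matrix):
    """Overall accuracy = sum of diagonal entries / sum of all entries."""
    correct = sum(matrix[i][i] for i in range(len(matrix)))
    total = sum(sum(row) for row in matrix)
    return correct / total

# Hypothetical 3-word confusion matrix (rows: true word, cols: predicted)
cm = [[18, 1, 1],
      [2, 17, 1],
      [0, 1, 19]]
print(round(accuracy_from_confusion(cm), 3))  # 0.9
```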

Conclusion

In conclusion, the fusion of ultrasensitive textile strain sensors with a lightweight neural network marked a significant leap forward in SSI technology. This innovative integration offered not only high accuracy and computational efficiency but also ensured user comfort, thereby revolutionizing communication in challenging environments.

By effectively addressing the limitations of previous systems, this technology held immense potential for facilitating seamless and natural silent communication across diverse settings. The synergy between advanced sensor design and optimized neural network architecture promised extensive applications, representing a milestone in wearable technology and silent speech recognition, poised to reshape the landscape of human-computer interaction.

Journal reference:

npj Flexible Electronics, https://www.nature.com/articles/s41528-024-00315-1

Written by

Soham Nandi

Soham Nandi is a technical writer based in Memari, India. His academic background is in Computer Science Engineering, specializing in Artificial Intelligence and Machine Learning. He has extensive experience in Data Analytics, Machine Learning, and Python. He has worked on group projects that required the implementation of Computer Vision, Image Classification, and App Development.

Citations

Please use the following format to cite this article in your essay, paper, or report:

Nandi, Soham. (2024, May 15). Silent Speech Interface Using Graphene-Based Textile Strain Sensors and AI. AZoAi. https://www.azoai.com/news/20240515/Silent-Speech-Interface-Using-Graphene-Based-Textile-Strain-Sensors-and-AI.aspx.
