A convolutional neural network (CNN) is a type of deep learning model used primarily for image processing, video analysis, and natural language processing. It applies convolutional layers, small filters slid across the input, to detect local features, and it is particularly effective at learning spatial hierarchies of patterns, making it well suited to tasks such as image and speech recognition.
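The sliding-window idea behind a convolutional layer can be sketched in a few lines of plain Python. This is a minimal, illustrative 2D convolution (valid padding, stride 1), not any particular library's implementation; the edge-detector kernel and toy image are invented for the example.

```python
def conv2d(image, kernel):
    """Slide the kernel over the image (valid padding, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    ih, iw = len(image), len(image[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            # Weighted sum of the window under the kernel
            acc = 0.0
            for di in range(kh):
                for dj in range(kw):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# Toy image: dark left half, bright right half
image = [
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 1, 1],
]
# A simple vertical-edge detector kernel
kernel = [
    [-1, 1],
    [-1, 1],
]
response = conv2d(image, kernel)
# Each row of the response is [0.0, 2.0, 0.0]:
# the filter responds strongly only where the vertical edge lies.
```

Stacking many such learned filters, interleaved with nonlinearities and pooling, is what lets a CNN build up from edges to textures to whole objects.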
Researchers present a novel myoelectric control (MEC) framework employing Bayesian optimization to enhance convolutional neural network (CNN)-based gesture recognition systems using surface electromyogram (sEMG) signals. The study demonstrates improved accuracy and generalization, crucial for advancing prosthetic devices and human-computer interfaces, and highlights the potential for broader applications in diverse sEMG signal types and neural network architectures.
This article introduces LC-Net, a novel convolutional neural network (CNN) model designed for precise leaf counting in rosette plants, addressing challenges in plant phenotyping. Leveraging SegNet for superior leaf segmentation, LC-Net incorporates both original and segmented leaf images, showcasing robustness and outperforming existing models in accurate leaf counting, offering a promising advancement for agricultural research and high-throughput plant breeding efforts.
Scientists present a groundbreaking study published in Scientific Reports, introducing an intelligent transfer learning technique utilizing deep learning, particularly a convolutional neural network (CNN), to predict diseases in black pepper leaves. The research showcases the potential of advanced technologies in plant health monitoring, offering a comprehensive approach from dataset acquisition to the development of deep neural network models for early-stage leaf disease identification in agriculture.
Researchers unveil a groundbreaking approach to the escalating challenge of construction solid waste through a machine vision (MV) algorithm. By automating the generation and annotation of synthetic datasets, the study significantly enhances efficiency and accuracy, demonstrating superior performance in construction waste sorting over manually labeled datasets, paving the way for sustainable urban waste management.
This paper unveils FaceNet-MMAR, an advanced facial recognition model tailored for intelligent university libraries. By optimizing the traditional FaceNet algorithm with innovative features, including a MobileNet backbone, the Mish activation function, an attention module, and a receptive field module, the model showcases superior accuracy and efficiency, garnering high satisfaction rates from both teachers and students in real-world applications.
Researchers harness Convolutional Neural Networks (CNNs) to enhance the predictability of the Madden-Julian Oscillation (MJO), a critical tropical weather pattern. Leveraging a 1200-year simulation and explainable AI methods, the study identifies moisture dynamics, particularly precipitable water anomalies, as key predictors, pushing the forecasting skill to approximately 25 days and offering insights into improving weather and climate predictions.
Researchers introduce the multi-feature fusion transformer (MFT) for named entity recognition (NER) in aerospace text. MFT, utilizing a unique structure and integrating radical features, outshines existing models, demonstrating exceptional performance and paving the way for enhanced AI applications in aerospace research.
In this article, researchers unveil a cutting-edge gearbox fault diagnosis method. Leveraging transfer learning and a lightweight channel attention mechanism, the proposed EfficientNetV2-LECA model showcases superior accuracy, achieving over 99% classification accuracy on both gear and bearing samples. The study signifies a pivotal leap in intelligent fault diagnosis for mechanical equipment, addressing challenges posed by limited samples and varying working conditions.
Korean researchers introduce a groundbreaking framework marrying Explainable AI (XAI) and Zero-Trust Architecture (ZTA) for robust cyberdefense in marine communication networks. Their deep neural network, Zero-Trust Network Intrusion Detection System (NIDS), not only exhibits remarkable accuracy in classifying cyber threats but also integrates XAI methodologies, SHAP and LIME, to provide interpretable insights. This innovative approach fosters transparency and collaboration between AI systems and human experts, promising enhanced cybersecurity in marine, and potentially other, critical infrastructures.
Researchers present YOLO_Bolt, a lightweight variant of YOLOv5 tailored for industrial workpiece identification. With optimizations like ghost bottleneck convolutions and an asymptotic feature pyramid network, YOLO_Bolt outshines YOLOv5, achieving a 2.4% increase in mean average precision (mAP) on the MSCOCO dataset. Specialized for efficient bolt detection in factories, YOLO_Bolt offers improved detection accuracy while reducing model size, paving the way for enhanced quality assurance in industrial settings.
Researchers present a groundbreaking integrated agricultural system utilizing IoT-equipped sensors and AI models for precise rainfall prediction and fruit health monitoring. The innovative approach combines CNN, LSTM, and attention mechanisms, demonstrating high accuracy and user-friendly interfaces through web applications, heralding a transformative era in data-driven agriculture.
Researchers present a meta-imager using metasurfaces for optical convolution, offloading computationally intensive operations into high-speed, low-power optics. The system employs angular and polarization multiplexing, achieving both positive and negative valued convolution operations simultaneously, showcasing potential in compact, lightweight, and power-efficient machine vision systems.
Researchers introduce a groundbreaking deep learning method, published in Medical Physics, to detect and measure motion artifacts in undersampled brain MRI scans. The approach, utilizing synthetic motion-corrupted data and a convolutional neural network, offers a potential safety measure for AI-based approaches, providing real-time alerts and insights for improved MRI reconstruction methods.
Researchers have unveiled innovative methods, utilizing lidar data and AI techniques, to precisely delineate river channels' bankfull extents. This groundbreaking approach streamlines large-scale topographic analyses, offering efficiency in flood risk mapping, stream rehabilitation, and tracking channel evolution, marking a significant leap in environmental mapping workflows.
This study introduces an AI-based system predicting gait quality progression. Leveraging kinematic data from 734 patients with gait disorders, the researchers explore signal and image-based approaches, achieving promising results with neural networks. The study marks a pioneering application of AI in predicting gait variations, offering insights into future advancements in this critical domain of healthcare.
This study proposes an innovative approach to enhance road safety by introducing a CNN-LSTM model for driver sleepiness detection. Combining facial movement analysis and deep learning, the model outperforms existing methods, achieving over 98% accuracy in real-world scenarios, paving the way for effective implementation in smart vehicles to proactively prevent accidents caused by driver fatigue.
Researchers unveil a pioneering approach using a convolutional neural network (CNN) to analyze how extreme precipitation patterns respond to climate shifts. This CNN-based method, trained on data from 10,000 precipitation stations, overcomes limitations of traditional analyses, providing high-resolution maps and nuanced insights into the sensitivity of extreme precipitation to climate change for North America, Europe, Australia, and New Zealand.
Researchers unveil RVTALL, a groundbreaking multimodal dataset for contactless speech recognition. Integrating data from UWB and mmWave radars, depth cameras, lasers, and audio-visual sources, the dataset aids in exploring non-invasive speech analysis. The study demonstrates applications in silent speech recognition, speech enhancement, analysis, and synthesis, though it acknowledges limitations in sample size and diversity. The dataset stands as a robust tool for advancing research in speech-related technologies.
The RefCap model pioneers visual-linguistic multi-modality in image captioning, incorporating user-specified object keywords. Comprising Visual Grounding, Referent Object Selection, and Image Captioning modules, the model demonstrates efficacy in producing tailored captions aligned with users' specific interests, validated across datasets like RefCOCO and COCO captioning.
Researchers from Nanjing University of Science and Technology present a novel scheme, Spatial Variation-Dependent Verification (SVV), utilizing convolutional neural networks and textural features for handwriting identification and verification. The scheme outperforms existing methods, achieving 95.587% accuracy, providing a robust solution for secure handwriting recognition and authentication in diverse applications, including security, forensics, banking, education, and healthcare.