A Convolutional Neural Network (CNN) is a type of deep learning algorithm primarily used for image processing, video analysis, and natural language processing. It processes data with convolutional layers that slide small learned filters across the input, making it particularly effective at identifying spatial hierarchies and local patterns, which is why it excels at tasks like image and speech recognition.
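As a rough, framework-level sketch of this idea, and not code from any of the studies summarized on this page, a minimal image-classification CNN in PyTorch might look like the following; the layer widths, 32x32 input size, and 10-class output are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Minimal CNN: two convolutional blocks followed by a classifier head."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Each Conv2d slides a small window (kernel) over the image,
            # learning local spatial patterns such as edges and textures.
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # downsample: 32x32 -> 16x16
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)
        return self.classifier(x.flatten(1))

# Forward pass on a dummy batch of four 3-channel 32x32 images.
logits = SmallCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```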
Researchers harness pseudo-labeling, a semi-supervised learning technique, to advance animal identification using computer vision systems. They explore how this technique leverages unlabeled data to significantly enhance the predictive performance of deep neural networks, offering an accurate and efficient solution for animal identification in resource-intensive agricultural environments.
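As a general illustration of pseudo-labeling, and not the authors' exact pipeline, the sketch below has a model trained on labeled data assign provisional labels to unlabeled examples, keeping only high-confidence predictions for retraining; the `model`, `unlabeled_loader`, and `confidence_threshold` names and the 0.95 threshold are assumptions for the example.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def generate_pseudo_labels(model, unlabeled_loader, confidence_threshold=0.95):
    """Assign provisional labels to unlabeled images, keeping only
    predictions whose softmax confidence exceeds the threshold."""
    model.eval()
    kept_inputs, kept_labels = [], []
    for images in unlabeled_loader:   # assumed to yield batches of images only
        probs = F.softmax(model(images), dim=1)
        confidence, labels = probs.max(dim=1)
        mask = confidence >= confidence_threshold
        if mask.any():
            kept_inputs.append(images[mask])
            kept_labels.append(labels[mask])
    if not kept_inputs:
        return None
    # The resulting (input, pseudo-label) pairs are mixed with the labeled
    # set and the model is retrained, iterating as confidence improves.
    return torch.cat(kept_inputs), torch.cat(kept_labels)
```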
Researchers have introduced an innovative asymmetric hybrid encoder-decoder (AHED) deep learning (DL) algorithm designed for accurate multivariate time series forecasting of building energy consumption. The article, pending publication in Applied Energy, addresses the pressing need for effective energy management in buildings by harnessing advanced DL techniques to predict complex energy usage patterns.
Researchers have unveiled an innovative solution to the energy efficiency challenges posed by high-parameter AI models. Through analog in-memory computing (analog-AI), they developed a chip boasting 35 million memory devices, showcasing exceptional performance of up to 12.4 tera-operations per second per watt (TOPS/W). This breakthrough combines parallel matrix computations with memory arrays, presenting a transformative approach for efficient AI processing with promising implications for diverse applications.
This article presents an innovative approach that utilizes learned dynamic phase coding for reconstructing video from a single motion-blurred image. By integrating a convolutional neural network (CNN) with a learnable imaging layer, the proposed method overcomes challenges associated with motion blur in dynamic scene photography.
Researchers present LightSpaN, a streamlined Convolutional Neural Network (CNN)-based solution for swift and accurate vehicle identification in intelligent traffic monitoring systems powered by the Internet of Things (IoT). This innovative approach outperforms existing methods with an average accuracy of 99.9% for emergency vehicles, contributing to reduced waiting and travel times.
Researchers propose a hybrid model that integrates sentiment analysis using Word2vec and Long Short-Term Memory (LSTM) for accurate exchange rate trend prediction. By incorporating emotional weights derived from Weibo data alongside historical exchange rate information within a CNN-LSTM architecture, the model demonstrates enhanced prediction accuracy compared to traditional methods.
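For illustration only, and without reproducing the paper's exact architecture or features, a generic CNN-LSTM regressor over a windowed multivariate series (for example, a daily exchange rate paired with a sentiment weight) could be sketched as follows; the feature count, layer sizes, and 30-day window are assumptions.

```python
import torch
import torch.nn as nn

class CNNLSTMForecaster(nn.Module):
    """Generic CNN-LSTM: 1D convolutions extract local temporal features,
    an LSTM models longer-range dependencies, and a linear head predicts
    the next value of the series."""
    def __init__(self, num_features: int = 2, hidden_size: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(num_features, 32, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, features) -> Conv1d expects (batch, features, time)
        feats = self.conv(x.transpose(1, 2)).transpose(1, 2)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])     # prediction from the last time step

# Dummy batch: 8 sequences of 30 days, each day = (exchange rate, sentiment weight).
pred = CNNLSTMForecaster()(torch.randn(8, 30, 2))
print(pred.shape)  # torch.Size([8, 1])
```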
Researchers introduce a revolutionary method combining Low-Level Feature Attention, Feature Fusion Neck, and Context-Spatial Decoupling Head to enhance object detection in dim environments. With improvements in accuracy and real-world performance, this approach holds promise for applications like nighttime surveillance and autonomous driving.
Researchers delve into the realm of Citizen-Centric Digital Twins (CCDT), exploring cutting-edge technologies that enable predictive, simulation, and visualization capabilities to address city-level issues through citizen engagement. The study highlights data acquisition methods, machine learning algorithms, and APIs, offering insights into enhancing urban management while fostering public participation.
This study presents an innovative pipeline for continuous real-time assessment of driver drowsiness using photoplethysmography (PPG) signals. The approach pairs customized PPG sensors embedded in the steering wheel with a tailored deep neural network for accurate drowsiness classification, avoiding the motion artifacts and complex preprocessing that hampered previous ECG-based methods.
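As a loose sketch of the kind of classifier such a pipeline might use, and not the authors' tailored architecture, a 1D CNN can map a fixed-length window of raw PPG samples to a small set of drowsiness levels; the three-level output, sampling rate, and layer sizes below are assumptions.

```python
import torch
import torch.nn as nn

class PPGDrowsinessNet(nn.Module):
    """Illustrative 1D CNN mapping a fixed-length window of raw PPG
    samples to one of three assumed drowsiness levels."""
    def __init__(self, num_levels: int = 3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(), nn.MaxPool1d(4),
            nn.AdaptiveAvgPool1d(1),   # collapse the time axis to one feature vector
        )
        self.classifier = nn.Linear(32, num_levels)

    def forward(self, ppg_window: torch.Tensor) -> torch.Tensor:
        # ppg_window: (batch, 1, samples), e.g. a few seconds of PPG at a fixed rate
        return self.classifier(self.encoder(ppg_window).squeeze(-1))

# Dummy 10-second window at an assumed 128 Hz for a batch of 4 drivers.
scores = PPGDrowsinessNet()(torch.randn(4, 1, 1280))
print(scores.shape)  # torch.Size([4, 3])
```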
The DCTN model, combining deep convolutional neural networks and Transformers, demonstrates superior accuracy in hydrologic forecasting and climate change impact evaluation, outperforming traditional models by approximately 30.9%. The model accurately predicts runoff patterns, aiding in water resource management and climate change response.
Researchers propose the Fine-Tuned Channel-Spatial Attention Transformer (FT-CSAT) model to address challenges in facial expression recognition (FER), such as facial occlusion and head pose changes. The model combines the CSWin Transformer with a channel-spatial attention module and fine-tuning techniques to achieve state-of-the-art accuracy on benchmark datasets, showcasing its robustness in handling FER challenges.
Researchers from the CAS Institute of Atmospheric Physics developed an AI-powered model using deep learning algorithms that surpasses traditional methods in predicting central Pacific El Niño events, offering potential advancements in seasonal climate forecasting. The study highlights the role of artificial intelligence in enhancing predictions of major climate events, providing valuable insights for disaster preparedness and risk reduction worldwide.
Researchers propose an intelligent Digital Twin framework enhanced with deep learning to detect and classify human operators and robots in human-robot collaborative manufacturing. The framework improves reliability and safety by enabling autonomous decision-making and maintaining a safe distance between humans and robots, offering a promising solution for advanced manufacturing systems.
Incidents1M, a large-scale multi-label dataset, is introduced for incident detection and image-filtering experiments on social media. It enables timely, automated understanding of how natural disasters unfold and of their aftermath.