AI is employed in image processing to enhance and manipulate images through various techniques like denoising, super-resolution, and image restoration. Deep learning models and algorithms enable improved image quality, object recognition, and advanced image editing capabilities for a wide range of applications including photography, medical imaging, and computer vision.
Researchers developed low-cost AI-enabled camera traps with on-site continual learning, significantly improving real-time wildlife monitoring accuracy in diverse environments.
Researchers developed an automated system that animates children's drawings by addressing unique artistic styles, supported by a large, annotated dataset of over 178,000 images.
This paper explores advanced drowning prevention technologies that integrate embedded systems, artificial intelligence (AI), and the Internet of Things (IoT) to enhance real-time monitoring and response in swimming pools. By using computer vision and deep learning to accurately identify drowning situations and IoT for real-time alerts, these systems significantly improve rescue efficiency and reduce drowning incidents.
Researchers developed TeaPoseNet, a deep neural network for estimating tea leaf poses, focusing on the Yinghong No.9 variety. Trained on a custom dataset, TeaPoseNet improved pose recognition accuracy by 16.33% using a novel algorithm, enhancing tea leaf analysis.
A recent study introduced an AI-based approach using transformer + UNet and ResNet-18 models for rock strength assessment and lithology identification in tunnel construction. The method showed high accuracy, reducing errors and enhancing safety and efficiency in geological engineering.
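As a rough illustration of the lithology-identification side of such an approach, the sketch below fine-tunes an off-the-shelf torchvision ResNet-18 on labeled rock-face images. The dataset folder, class names, and hyperparameters are assumptions for illustration, not the study's actual pipeline.

```python
# Minimal lithology-classification sketch with a torchvision ResNet-18.
# Folder layout, classes, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Hypothetical layout: lithology_dataset/train/<class_name>/*.jpg
train_set = datasets.ImageFolder("lithology_dataset/train", transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))  # one output per rock type

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

for epoch in range(5):  # short fine-tuning run for illustration
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```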
A systematic tertiary study analyzed 57 secondary studies from 2018 to 2023 on using drone imagery for infrastructure management. The research identified key application areas, assessed trends, and highlighted challenges, providing a valuable reference for researchers and practitioners in the field.
Researchers developed a three-step computer vision framework using YOLOv8 and image processing techniques for efficient concrete crack detection and measurement. The method demonstrated high accuracy but faced challenges with small cracks, complex backgrounds, and pre-marked reference frames.
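A minimal sketch of that detect-then-measure idea is shown below, assuming the Ultralytics YOLO package and a hypothetical fine-tuned weights file `crack_yolov8.pt`. The pixel-width estimate from a thresholded crop is a simple stand-in for the paper's measurement step, which relies on calibrated reference frames.

```python
# Sketch: detect cracks with a (hypothetical) fine-tuned YOLOv8 model, then
# estimate a rough crack width in pixels from a thresholded crop with OpenCV.
import cv2
import numpy as np
from ultralytics import YOLO

model = YOLO("crack_yolov8.pt")          # assumed custom weights
image = cv2.imread("concrete_wall.jpg")  # assumed input image

results = model.predict(image, conf=0.25)
for box in results[0].boxes.xyxy.cpu().numpy().astype(int):
    x1, y1, x2, y2 = box
    crop = cv2.cvtColor(image[y1:y2, x1:x2], cv2.COLOR_BGR2GRAY)
    # Cracks are darker than concrete: inverse binary threshold via Otsu.
    _, mask = cv2.threshold(crop, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Crude width proxy: crack pixel area divided by the longer box side.
    # A real pipeline would convert pixels to millimetres using a reference frame.
    crack_pixels = int(np.count_nonzero(mask))
    length_px = max(x2 - x1, y2 - y1)
    width_px = crack_pixels / max(length_px, 1)
    print(f"crack at ({x1},{y1})-({x2},{y2}): ~{width_px:.1f} px mean width")
```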
The study compared various machine-learning models for predicting wind-solar tower power output. While linear regression was inadequate, polynomial regression and deep neural networks (DNN) showed improved accuracy. The DNN model outperformed others, demonstrating high prediction accuracy and efficiency for renewable energy forecasting.
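The sketch below reproduces that model comparison on synthetic data with scikit-learn; the feature names (wind speed, irradiance), target function, and network size are illustrative assumptions rather than the study's setup.

```python
# Sketch comparing linear, polynomial, and neural-network regressors on
# synthetic wind/solar features; data and features are illustrative only.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, size=(2000, 2))  # [wind speed, solar irradiance], rescaled
y = 3 * X[:, 0] ** 3 + 2 * X[:, 1] ** 2 + 0.05 * rng.normal(size=2000)  # nonlinear target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "linear": LinearRegression(),
    "poly-3": make_pipeline(PolynomialFeatures(degree=3), LinearRegression()),
    "dnn": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, "R^2 =", round(r2_score(y_test, model.predict(X_test)), 3))
```

On such nonlinear data the linear fit lags behind the polynomial and neural-network models, mirroring the study's qualitative finding.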
A study in Scientific Reports introduced a deep learning-based, non-contact system for coal gangue sorting, significantly improving accuracy and efficiency. Utilizing a ResNet-50 network, the system achieves over 97% recognition accuracy and a sorting rate exceeding 91%, with separation times under 3 seconds.
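A minimal inference-side sketch of a ResNet-50 coal/gangue classifier is given below, including a per-frame timing measurement in the spirit of the reported sorting speed. The checkpoint path, class order, and input image are assumptions, not the study's artefacts.

```python
# Sketch: binary coal-vs-gangue inference with a ResNet-50, timing each frame.
# Checkpoint path and class order are assumptions for illustration.
import time
import torch
from torchvision import models, transforms
from PIL import Image

classes = ["coal", "gangue"]  # assumed label order
model = models.resnet50(weights=None)
model.fc = torch.nn.Linear(model.fc.in_features, len(classes))
model.load_state_dict(torch.load("coal_gangue_resnet50.pt", map_location="cpu"))  # assumed checkpoint
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("conveyor_frame.jpg").convert("RGB")  # assumed camera frame
start = time.perf_counter()
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))
elapsed = time.perf_counter() - start
print(classes[logits.argmax(1).item()], f"({elapsed * 1000:.0f} ms)")
```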
Researchers developed a deep learning (DL) approach for non-destructive crop moisture assessment using thermal imagery, evaluating five DL models. Among them, MobileNetV3 excelled in accuracy and speed, demonstrating the potential for real-time water stress monitoring in cotton agriculture and enhancing precision irrigation strategies.
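A compact transfer-learning sketch along these lines appears below: the MobileNetV3-Small backbone is frozen and only the classifier head is trained. The thermal dataset folder and the two stress classes are assumptions for illustration.

```python
# Sketch: transfer learning with MobileNetV3-Small on thermal crop images.
# Dataset folder and two-class stress labels are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
data = datasets.ImageFolder("thermal_cotton/train", transform=transform)  # e.g. stressed / well_watered
loader = DataLoader(data, batch_size=32, shuffle=True)

model = models.mobilenet_v3_small(weights=models.MobileNet_V3_Small_Weights.DEFAULT)
for p in model.features.parameters():   # freeze the backbone, train only the classifier head
    p.requires_grad = False
model.classifier[3] = nn.Linear(model.classifier[3].in_features, len(data.classes))

optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
model.train()
for images, labels in loader:           # one illustrative epoch
    optimizer.zero_grad()
    criterion(model(images), labels).backward()
    optimizer.step()
```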
Researchers developed a neural network (NN) architecture based on You Only Look Once (YOLO) to automate the detection, classification, and quantification of mussel larvae from microscopic water samples.
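The counting step of such a system can be sketched in a few lines with the Ultralytics YOLO API; the weights file, image name, and larval classes below are hypothetical placeholders.

```python
# Sketch: count mussel-larvae detections per class with a (hypothetical)
# fine-tuned Ultralytics YOLO model on a microscope image.
from collections import Counter
from ultralytics import YOLO

model = YOLO("larvae_yolo.pt")                  # assumed custom weights
results = model.predict("water_sample.png", conf=0.3)

counts = Counter(model.names[int(c)] for c in results[0].boxes.cls.tolist())
for larval_class, n in counts.items():
    print(f"{larval_class}: {n}")
```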
Researchers introduced "DeepRFreg," a hybrid model combining deep neural networks and random forests, significantly enhancing particle identification (PID) in high-energy physics experiments. This innovation improves precision and reduces misidentification in particle detection.
Researchers have developed a bridge inspection method using computer vision and augmented reality (AR) to enhance fatigue crack detection. This innovative approach utilizes AR headset videos and computer vision algorithms to detect cracks, displaying results as holograms for improved visualization and decision-making.
Researchers applied deep learning (DL) models, including ResNet-34, to segment canola plants from other species in the field, treating non-canola plants as weeds. Using datasets containing 3799 canola images, the study demonstrated that ResNet-34 achieved superior performance, highlighting its potential for precision agriculture and innovative weed control strategies.
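One common way to use a ResNet-34 for this kind of segmentation is as the encoder of a U-Net, sketched below with the segmentation_models_pytorch library. The U-Net pairing, the binary canola-vs-other mask, and the dummy batch are assumptions for illustration, not the study's exact setup.

```python
# Sketch: canola-vs-weed segmentation with a ResNet-34 encoder inside a U-Net,
# via segmentation_models_pytorch. Decoder choice and mask format are assumed.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="resnet34",       # ResNet-34 backbone, as named in the study
    encoder_weights="imagenet",    # ImageNet-pretrained encoder
    in_channels=3,
    classes=1,                     # 1 channel: canola vs. everything else (weeds/soil)
)
loss_fn = smp.losses.DiceLoss(mode="binary")
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch (replace with field images/masks).
images = torch.rand(4, 3, 256, 256)
masks = torch.randint(0, 2, (4, 1, 256, 256)).float()
optimizer.zero_grad()
loss = loss_fn(model(images), masks)
loss.backward()
optimizer.step()
print("dice loss:", round(loss.item(), 3))
```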
Researchers developed an automated system utilizing UAVs and deep learning to monitor and maintain remote gravel runways in Northern Canada. This system accurately detects defects and evaluates runway smoothness, proving more effective and reliable than traditional manual methods in harsh and isolated environments.
A systematic review in the journal Sensors analyzed 77 studies on facial and pose emotion recognition using deep learning, highlighting methods like CNNs and Vision Transformers. The review examined trends, datasets, and applications, providing insights into state-of-the-art techniques and their effectiveness in psychology, healthcare, and entertainment.
A comprehensive review highlights the evolution of object-tracking methods, sensors, and datasets in computer vision, guiding developers in selecting optimal tools for diverse applications.
Researchers developed an advanced automated system for early plant disease detection using an ensemble of deep-learning models, achieving superior accuracy on the PlantVillage dataset. The study introduced novel image processing and data balancing techniques, significantly enhancing model performance and demonstrating the system's potential for real-world agricultural applications.
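A soft-voting ensemble of this kind can be sketched by averaging the softmax outputs of several fine-tuned CNNs, as below. The member architectures, checkpoint paths, and class count are placeholders; the study's exact models, preprocessing, and data-balancing steps are not reproduced here.

```python
# Sketch: soft-voting ensemble of fine-tuned CNNs for plant-disease classification.
# Member models, checkpoints, and class count are illustrative assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

NUM_CLASSES = 38  # PlantVillage is commonly split into 38 leaf/disease classes

def load_member(builder, checkpoint):
    net = builder(weights=None)
    # Replace the final layer, handling both fc- and classifier-style heads.
    if hasattr(net, "fc"):
        net.fc = torch.nn.Linear(net.fc.in_features, NUM_CLASSES)
    else:
        net.classifier[-1] = torch.nn.Linear(net.classifier[-1].in_features, NUM_CLASSES)
    net.load_state_dict(torch.load(checkpoint, map_location="cpu"))  # assumed checkpoints
    return net.eval()

ensemble = [
    load_member(models.resnet50, "resnet50_plant.pt"),
    load_member(models.mobilenet_v2, "mobilenet_v2_plant.pt"),
    load_member(models.efficientnet_b0, "efficientnet_b0_plant.pt"),
]

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("leaf.jpg").convert("RGB")).unsqueeze(0)

with torch.no_grad():
    probs = torch.stack([torch.softmax(m(x), dim=1) for m in ensemble]).mean(dim=0)
print("ensemble prediction: class", probs.argmax(1).item())
```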
A novel framework combining deep learning and preprocessing algorithms significantly improved particle detection in manufacturing, addressing challenges posed by heterogeneous backgrounds. The framework, validated through extensive experimentation, enhanced in-situ process monitoring, offering robust, real-time solutions for diverse industrial applications.
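The preprocessing side of such a pipeline, normalizing an uneven background before counting candidate particles, can be sketched with generic OpenCV operations as below. This classical stand-in illustrates only the background-handling idea and deliberately omits the framework's deep-learning component.

```python
# Sketch: simple preprocessing chain for particle detection on an uneven
# background (contrast equalisation, background estimation, subtraction,
# blob counting). A generic OpenCV stand-in, not the paper's framework.
import cv2
import numpy as np

frame = cv2.imread("process_frame.png", cv2.IMREAD_GRAYSCALE)  # assumed monitoring image

clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
equalised = clahe.apply(frame)

background = cv2.medianBlur(equalised, 51)             # large-kernel estimate of the slow-varying background
foreground = cv2.absdiff(equalised, background)        # particles stand out after subtraction

_, mask = cv2.threshold(foreground, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))

num_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
particles = [s for s in stats[1:] if s[cv2.CC_STAT_AREA] > 5]  # drop background label and tiny noise
print(f"detected {len(particles)} candidate particles")
```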
Researchers harnessed convolutional neural networks (CNNs) to recognize Shen embroidery, achieving 98.45% accuracy. By employing transfer learning and enhancing MobileNet V1 with spatial pyramid pooling, they provided crucial technical support for safeguarding this cultural art form.
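The sketch below shows how a spatial-pyramid-pooling (SPP) head can sit on top of a MobileNet backbone for this kind of classification. Since torchvision does not ship MobileNet V1, MobileNetV2 features stand in here, and the class count and inputs are assumptions; it illustrates the SPP mechanism rather than the paper's exact model.

```python
# Sketch: a spatial-pyramid-pooling (SPP) head on a MobileNet backbone for
# embroidery-style classification. MobileNetV2 is a stand-in for MobileNet V1.
import torch
import torch.nn as nn
from torchvision import models

class SPP(nn.Module):
    """Pool the feature map at several grid sizes and concatenate the results,
    giving a fixed-length descriptor regardless of input resolution."""
    def __init__(self, levels=(1, 2, 4)):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        pooled = [nn.functional.adaptive_max_pool2d(x, size).flatten(1) for size in self.levels]
        return torch.cat(pooled, dim=1)

backbone = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.DEFAULT).features  # transfer learning
num_channels = 1280                                   # MobileNetV2 final feature channels
spp = SPP()
head = nn.Linear(num_channels * sum(s * s for s in spp.levels), 5)  # e.g. 5 embroidery categories (assumed)

x = torch.rand(2, 3, 224, 224)                        # dummy batch of embroidery photos
features = backbone(x)
logits = head(spp(features))
print(logits.shape)                                   # torch.Size([2, 5])
```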