Computer vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and video together with deep learning models, machines can accurately identify and classify objects and then react to what they "see."
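For illustration, the sketch below shows the most common form of this pipeline: passing an image through a pretrained classification network. The model choice (ResNet-18) and the file name "photo.jpg" are illustrative assumptions, not tied to any study covered here.

```python
# Minimal image-classification sketch with a pretrained torchvision model.
# Model choice and image path are placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    logits = model(batch)
    predicted_class = logits.argmax(dim=1).item()  # index into ImageNet classes
print(predicted_class)
```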
Researchers developed low-cost AI-enabled camera traps with on-site continual learning, significantly improving real-time wildlife monitoring accuracy in diverse environments.
Researchers developed the "Deepdive" dataset and benchmarked deep learning models to automate the classification of deep-sea biota in the Great Barrier Reef, achieving significant accuracy with the Inception-ResNet model.
This study compares four computer vision algorithms on a Raspberry Pi 4 platform for depalletizing applications. The analysis highlights pattern matching, SIFT, ORB, and Haar cascade methods, emphasizing low-cost, efficient object detection suitable for industrial and small-scale automation environments.
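As a rough sense of what one of these lightweight detectors looks like in practice, here is a hedged sketch of ORB feature matching with OpenCV, the kind of method that runs comfortably on a Raspberry Pi class device. The image file names and the match-count threshold of 20 are illustrative assumptions, not values from the study.

```python
# ORB keypoint matching between a part template and a pallet scene.
import cv2

template = cv2.imread("box_template.png", cv2.IMREAD_GRAYSCALE)
scene = cv2.imread("pallet_scene.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(template, None)
kp2, des2 = orb.detectAndCompute(scene, None)

# Hamming distance is the standard metric for ORB's binary descriptors.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

if len(matches) > 20:  # arbitrary threshold for this sketch
    print(f"Object likely present ({len(matches)} keypoint matches)")
```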
The novel SBDet model introduces a relaxed rotation-equivariant network (R2Net) that improves object detection in scenarios with symmetry-breaking or non-rigid transformations. This innovation offers greater accuracy and robustness in real-world visual tasks like autonomous driving and geosciences.
A multiplatform computer vision system was developed to assess schoolchildren's physical fitness using smartphones. This system demonstrated high accuracy in field and lab tests, providing a reliable and user-friendly tool for fitness evaluation in educational environments.
Researchers introduced innovative computer vision techniques to the maritime industry, incorporating ensemble learning and domain knowledge. These methods significantly improve detection accuracy and optimize video viewing on vessels, offering advancements for marine operations and communication.
This research introduces a framework for verifying Lyapunov-stable neural network controllers, advancing robot safety in dynamic, sensor-driven environments.
Researchers utilized computer vision and machine learning to develop an objective method for evaluating the color quality of needle-shaped green tea. The study showed that the DT-Adaboost model accurately assessed tea quality, offering a reliable and efficient alternative to traditional sensory analysis.
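A decision-tree-based AdaBoost classifier of the kind the study describes can be set up in a few lines with scikit-learn. The sketch below is only illustrative of that model family: the colour features and quality labels are random placeholders, not the study's tea data.

```python
# AdaBoost with decision-tree base learners on image-derived colour features.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# X: one row per tea sample, columns = colour features extracted from images.
X = np.random.rand(200, 6)             # placeholder feature matrix
y = np.random.randint(0, 3, size=200)  # placeholder quality grades 0-2

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = AdaBoostClassifier(
    estimator=DecisionTreeClassifier(max_depth=3),  # "estimator" in scikit-learn >= 1.2
    n_estimators=100,
    learning_rate=0.5,
    random_state=0,
)
clf.fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```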
This study presents a computer vision model that non-invasively tracks mouse body mass from video data, achieving a mean error of just 5%. The approach enhances research quality by eliminating manual weighing, reducing stress, and improving animal welfare.
This paper explores advanced drowning prevention technologies that integrate embedded systems, artificial intelligence (AI), and the Internet of Things (IoT) to enhance real-time monitoring and response in swimming pools. By using computer vision and deep learning to identify drowning situations accurately and IoT for real-time alerts, these systems significantly improve rescue efficiency and reduce drowning incidents.
Researchers developed TeaPoseNet, a deep neural network for estimating tea leaf poses, focusing on the Yinghong No.9 variety. Trained on a custom dataset, TeaPoseNet improved pose recognition accuracy by 16.33% using a novel algorithm, enhancing tea leaf analysis.
A systematic tertiary study analyzed 57 secondary studies from 2018 to 2023 on using drone imagery for infrastructure management. The research identified key application areas, assessed trends, and highlighted challenges, providing a valuable reference for researchers and practitioners in the field.
Researchers developed a three-step computer vision framework using YOLOv8 and image processing techniques for efficient concrete crack detection and measurement. The method demonstrated high accuracy but faced challenges with small cracks, complex backgrounds, and pre-marked reference frames.
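Running a YOLOv8 detector of this kind is short with the Ultralytics API; the sketch below is not the authors' released model. The weights file "crack_yolov8.pt" is a hypothetical fine-tuned checkpoint and the confidence threshold is an assumption.

```python
# YOLOv8 inference on a concrete image using hypothetical crack-detection weights.
from ultralytics import YOLO

model = YOLO("crack_yolov8.pt")               # custom weights trained on crack images
results = model("concrete_wall.jpg", conf=0.25)

for r in results:
    for box in r.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()  # bounding box in pixel coordinates
        print(f"crack candidate at ({x1:.0f}, {y1:.0f})-({x2:.0f}, {y2:.0f}), "
              f"confidence {float(box.conf):.2f}")
```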
An innovative AI-driven platform, HeinSight3.0, integrates computer vision to monitor and analyze liquid-liquid extraction processes in real-time. Utilizing machine learning for visual cues like liquid levels and turbidity, this system significantly optimizes LLE, paving the way for autonomous lab operations.
A scaleless monocular vision method accurately measures plant heights by converting color images to binary data. Achieving high precision within 2–3 meters and minimal error, this non-contact technique demonstrates potential for reliable plant height measurement under varied lighting conditions.
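The binarisation step at the heart of such methods can be sketched with OpenCV as below. The HSV green range and the pixel-to-metre factor are illustrative assumptions standing in for the study's calibration-free geometry.

```python
# Segment the plant from a colour image and measure its vertical extent in pixels.
import cv2
import numpy as np

img = cv2.imread("plant.jpg")
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# Crude green mask to isolate foliage from the background.
mask = cv2.inRange(hsv, (35, 40, 40), (85, 255, 255))

ys = np.where(mask > 0)[0]          # row indices of plant pixels
if ys.size:
    height_px = ys.max() - ys.min()
    pixels_per_metre = 950.0        # would come from the camera geometry, assumed here
    print(f"plant height ~ {height_px / pixels_per_metre:.2f} m")
```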
A study in Computers & Graphics examined model compression methods for computer vision tasks, enabling AI techniques on resource-limited embedded systems. Researchers compared various techniques, including knowledge distillation and network pruning, highlighting their effectiveness in reducing model size and complexity while maintaining performance, crucial for applications like robotics and medical imaging.
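One of the surveyed techniques, unstructured magnitude pruning, can be demonstrated with PyTorch's pruning utilities. The toy model and the 30% sparsity target below are illustrative choices, not settings from the study.

```python
# L1 (magnitude) pruning of convolutional layers with torch.nn.utils.prune.
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),
)

for module in model.modules():
    if isinstance(module, nn.Conv2d):
        prune.l1_unstructured(module, name="weight", amount=0.3)  # zero out 30% of weights
        prune.remove(module, "weight")  # make the pruning permanent

total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity: {zeros / total:.1%}")
```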
Researchers leverage AI to optimize the design, fabrication, and performance forecasting of diffractive optical elements (DOEs). This integration accelerates innovation in optical technology, enhancing applications in imaging, sensing, and telecommunications.
Researchers have developed an automated system using computer vision (CV) and a collaborative robot (cobot) to objectively assess the rehydration quality of infant formula by measuring foam height, sediment height, and white particles. The system's accuracy in estimating these attributes closely matched human ratings, offering a reliable alternative for quality control in powdered formula rehydration.
Researchers developed an automated system using computer vision and machine learning to detect early-stage lameness in sows. The system, trained on video data and evaluated by experts, accurately tracked key points on sows' bodies, providing a precise livestock farming tool to assess locomotion and enhance animal welfare.
Generative adversarial networks (GANs) have transformed generative modeling since 2014, with significant applications across various fields. Researchers reviewed GAN variants, architectures, validation metrics, and future directions, emphasizing their ongoing challenges and integration with emerging deep learning frameworks.
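The two networks at the core of any GAN, a generator that maps noise to samples and a discriminator that scores them, can be sketched as below. Layer sizes are arbitrary assumptions, and the alternating training loop is omitted for brevity.

```python
# Minimal generator/discriminator pair for a GAN, in PyTorch.
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784  # e.g. flattened 28x28 images

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, data_dim), nn.Tanh(),
)

discriminator = nn.Sequential(
    nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

z = torch.randn(16, latent_dim)  # batch of noise vectors
fake = generator(z)              # generated samples
score = discriminator(fake)      # probability each sample is "real"
print(fake.shape, score.shape)
```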