Computer Vision is a field of artificial intelligence that trains computers to interpret and understand the visual world. Using digital images from cameras and videos together with deep learning models, machines can accurately identify and classify objects and then react to what they "see."
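As a concrete illustration of "identify and classify objects," the sketch below runs a pretrained image classifier from torchvision on a photo; the file name is a hypothetical placeholder, and the model choice is ours rather than anything from the articles summarized here.

```python
# Minimal sketch: classifying the contents of a photo with a pretrained CNN.
# "street_scene.jpg" is a hypothetical placeholder image path.
import torch
from PIL import Image
from torchvision.models import resnet50, ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights).eval()
preprocess = weights.transforms()          # resizing + normalization used during training

image = Image.open("street_scene.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)     # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)[0]

top5 = probs.topk(5)
for p, idx in zip(top5.values, top5.indices):
    print(f"{weights.meta['categories'][int(idx)]}: {p:.2%}")
```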
Researchers present ReAInet, a novel vision model aligning with human brain activity based on non-invasive EEG recordings. The model, derived from the CORnet-S architecture, demonstrates higher similarity to human brain representations, improving adversarial robustness and capturing individual variability, thereby paving the way for more brain-like artificial intelligence systems in computer vision.
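The article itself does not include code, but one common way to quantify model-brain similarity is representational similarity analysis (RSA). The sketch below compares a model's representational dissimilarity matrix (RDM) with one built from EEG responses; the arrays are random placeholders, not data or results from the ReAInet study.

```python
# Minimal RSA sketch: compare stimulus-by-stimulus dissimilarity structure
# between model activations and EEG responses. All data here are synthetic.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 50
model_acts = rng.normal(size=(n_stimuli, 512))   # hypothetical model features per stimulus
eeg_resps = rng.normal(size=(n_stimuli, 64))     # hypothetical EEG features per stimulus

# Representational dissimilarity matrices: pairwise distances between stimuli.
model_rdm = pdist(model_acts, metric="correlation")
eeg_rdm = pdist(eeg_resps, metric="correlation")

# Spearman correlation of the two RDMs is a standard model-brain similarity score.
rho, _ = spearmanr(model_rdm, eeg_rdm)
print(f"model-EEG representational similarity (Spearman rho): {rho:.3f}")
```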
Researchers introduce the Event-Based Segmentation Dataset (ESD), offering a groundbreaking solution to challenges in object segmentation. Leveraging event-based cameras and a meticulously designed experimental setup, ESD provides a high-quality 3D spatial-temporal dataset, addressing limitations in conventional cameras and paving the way for advancements in neuromorphic vision-based segmentation algorithms.
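Event cameras report asynchronous (x, y, timestamp, polarity) events rather than frames. The snippet below shows one simple way such events might be accumulated into a 2D frame for downstream segmentation; the event array and resolution are synthetic assumptions, and the ESD dataset's actual file format should be taken from its documentation.

```python
# Sketch: accumulating event-camera output (x, y, timestamp, polarity) into a frame.
import numpy as np

height, width = 260, 346  # a common event-camera resolution (assumption)
rng = np.random.default_rng(1)
n_events = 10_000
events = np.stack([
    rng.integers(0, width, n_events),          # x coordinate
    rng.integers(0, height, n_events),         # y coordinate
    np.sort(rng.uniform(0, 0.03, n_events)),   # timestamp in seconds
    rng.choice([-1, 1], n_events),             # polarity (brightness up/down)
], axis=1)

def events_to_frame(ev, h, w):
    """Sum signed polarities per pixel to form a simple event frame."""
    frame = np.zeros((h, w), dtype=np.float32)
    x = ev[:, 0].astype(int)
    y = ev[:, 1].astype(int)
    np.add.at(frame, (y, x), ev[:, 3])
    return frame

frame = events_to_frame(events, height, width)
print(frame.shape, frame.min(), frame.max())
```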
Chinese researchers introduce an innovative model utilizing computer vision and deep learning to recognize nine distinct behaviors of beef cattle in real-time. Enhancing the YOLOv8 algorithm with dynamic snake convolution and BiFormer attention mechanisms, the model achieves remarkable accuracy, demonstrating adaptability in various scenarios, including diverse lighting conditions and cattle densities.
This research explores the factors influencing the adoption of ChatGPT, a large language model, among Arabic-speaking university students. The study introduces the TAME-ChatGPT instrument, validating its effectiveness in assessing student attitudes, and identifies socio-demographic and cognitive factors that impact the integration of ChatGPT in higher education, emphasizing the need for tailored approaches and ethical considerations in its implementation.
Researchers present a novel myoelectric control (MEC) framework employing Bayesian optimization to enhance convolutional neural network (CNN)-based gesture recognition systems using surface electromyogram (sEMG) signals. The study demonstrates improved accuracy and generalization, crucial for advancing prosthetic devices and human-computer interfaces, and highlights the potential for broader applications in diverse sEMG signal types and neural network architectures.
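To make the idea of hyperparameter search for a CNN-based gesture recognizer concrete, the sketch below uses Optuna (whose default sampler is a Bayesian-style TPE optimizer) to tune a tiny 1D CNN on placeholder sEMG windows. The channel counts, search space, and training budget are illustrative assumptions, not the configuration reported in the study.

```python
# Sketch: Bayesian-style hyperparameter search for a small 1D CNN gesture classifier.
import optuna
import torch
import torch.nn as nn

n_channels, n_samples, n_classes = 8, 200, 6          # assumed sEMG window shape
X = torch.randn(512, n_channels, n_samples)           # placeholder sEMG windows
y = torch.randint(0, n_classes, (512,))               # placeholder gesture labels

def build_cnn(n_filters, kernel_size, dropout):
    return nn.Sequential(
        nn.Conv1d(n_channels, n_filters, kernel_size, padding="same"),
        nn.ReLU(),
        nn.AdaptiveAvgPool1d(1),
        nn.Flatten(),
        nn.Dropout(dropout),
        nn.Linear(n_filters, n_classes),
    )

def objective(trial):
    model = build_cnn(
        n_filters=trial.suggest_int("n_filters", 16, 64),
        kernel_size=trial.suggest_int("kernel_size", 3, 11, step=2),
        dropout=trial.suggest_float("dropout", 0.0, 0.5),
    )
    lr = trial.suggest_float("lr", 1e-4, 1e-2, log=True)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(5):                                 # tiny training budget for illustration
        opt.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        opt.step()
    with torch.no_grad():
        acc = (model(X).argmax(dim=1) == y).float().mean().item()
    return acc                                         # toy objective: training accuracy

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=10)
print("best hyperparameters:", study.best_params)
```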
Researchers introduce MFWD, a meticulously curated dataset capturing the growth of 28 weed species in maize and sorghum fields. This dataset, essential for computer vision in weed management, features high-resolution images, semantic and instance segmentation masks, and demonstrates promising results in multi-species classification, showcasing its potential for advancing automated weed detection and sustainable agriculture practices.
This research explores the performance of three computer vision approaches—CONTRACTIONWAVE, MUSCLEMOTION, and ViKiE—for evaluating contraction kinematics in cardioids and ventricular isolated single cells. The study leverages machine learning algorithms to assess the prediction performance of training datasets generated from each approach, demonstrating ViKiE's higher sensitivity and the overall effectiveness of machine learning in refining cardiac motion analysis.
Researchers from the UK, Germany, USA, and Canada unveiled a groundbreaking quantum-enhanced cybersecurity analytics framework using hybrid quantum machine learning algorithms. The novel approach leverages quantum computing to efficiently detect malicious domain names generated by domain generation algorithms (DGAs), showcasing superior speed, accuracy, and stability compared to traditional methods, marking a significant advancement in proactive cybersecurity analytics.
Researchers present a groundbreaking T-Max-Avg pooling layer for convolutional neural networks (CNNs), introducing adaptability in pooling operations. This innovative approach, demonstrated on benchmark datasets and transfer learning models, outperforms traditional pooling methods, showcasing its potential to enhance feature extraction and classification accuracy in diverse applications within the field of computer vision.
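The exact T-Max-Avg formulation is given in the paper; as a rough illustration of threshold- and top-K-based adaptive pooling, the sketch below outputs each window's maximum when it exceeds a threshold T and the average of the top-K activations otherwise. Treat it as a simplified stand-in, not the published layer.

```python
# Sketch: a pooling layer that mixes max- and average-style behaviour per window.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ThresholdTopKPool2d(nn.Module):
    def __init__(self, kernel_size=2, stride=2, k=3, threshold=0.5):
        super().__init__()
        self.kernel_size, self.stride = kernel_size, stride
        self.k, self.threshold = k, threshold

    def forward(self, x):
        n, c, h, w = x.shape
        # Gather each pooling window as a column: (N, C, ks*ks, L).
        cols = F.unfold(x, self.kernel_size, stride=self.stride)
        cols = cols.view(n, c, self.kernel_size * self.kernel_size, -1)
        topk = cols.topk(min(self.k, cols.shape[2]), dim=2).values
        max_val = topk[:, :, 0, :]            # per-window maximum
        topk_avg = topk.mean(dim=2)           # average of the top-K activations
        # Use the max where it exceeds the threshold, otherwise the top-K average.
        out = torch.where(max_val > self.threshold, max_val, topk_avg)
        out_h = (h - self.kernel_size) // self.stride + 1
        out_w = (w - self.kernel_size) // self.stride + 1
        return out.view(n, c, out_h, out_w)

pool = ThresholdTopKPool2d(kernel_size=2, stride=2, k=2, threshold=0.5)
print(pool(torch.randn(1, 3, 8, 8)).shape)    # torch.Size([1, 3, 4, 4])
```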
Researchers from Beijing University introduce Oracle-MNIST, a challenging dataset of 30,222 ancient Chinese characters, providing a realistic benchmark for machine learning (ML) algorithms. The Oracle-MNIST dataset, derived from oracle-bone inscriptions of the Shang Dynasty, surpasses traditional MNIST datasets in complexity, serving as a valuable tool not only for advancing ML research but also for enhancing the study of ancient literature, archaeology, and cultural heritage preservation.
This article introduces LC-Net, a novel convolutional neural network (CNN) model designed for precise leaf counting in rosette plants, addressing challenges in plant phenotyping. Leveraging SegNet for superior leaf segmentation, LC-Net incorporates both original and segmented leaf images, showcasing robustness and outperforming existing models in accurate leaf counting, offering a promising advancement for agricultural research and high-throughput plant breeding efforts.
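The general recipe of feeding both the raw image and its segmentation mask into a counting network can be sketched as two CNN branches whose features are concatenated before a regression head. Layer sizes below are illustrative assumptions, not the LC-Net configuration from the paper.

```python
# Sketch: a two-branch counting network over an RGB image and its leaf mask.
import torch
import torch.nn as nn

def small_encoder(in_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
    )

class LeafCounter(nn.Module):
    def __init__(self):
        super().__init__()
        self.image_branch = small_encoder(3)   # original RGB image
        self.mask_branch = small_encoder(1)    # segmentation mask (e.g. from SegNet)
        self.head = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, image, mask):
        feats = torch.cat([self.image_branch(image), self.mask_branch(mask)], dim=1)
        return self.head(feats).squeeze(1)     # predicted leaf count per image

model = LeafCounter()
counts = model(torch.randn(4, 3, 128, 128), torch.randn(4, 1, 128, 128))
print(counts.shape)  # torch.Size([4])
```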
This research pioneers the use of acoustic emission and artificial neural networks (ANN) to detect partial discharge (PD) in ceramic insulators, crucial for electrical system reliability. With a focus on defects caused by environmental factors, the study achieved a 96.03% recognition rate using ANNs, further validated by support vector machine (SVM) and K-nearest neighbor (KNN) algorithms, showcasing a significant advancement in real-time monitoring for electrical power network safety.
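The comparison of an ANN against SVM and KNN baselines can be mirrored with a short scikit-learn cross-validation loop. The synthetic features below stand in for real acoustic-emission measurements, and the model settings are generic defaults rather than those used in the study.

```python
# Sketch: comparing ANN (MLP), SVM, and KNN classifiers on placeholder features.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Placeholder features, e.g. spectral statistics of acoustic-emission signals.
X, y = make_classification(n_samples=400, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)

models = {
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    pipe = make_pipeline(StandardScaler(), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```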
Researchers proposed a cost-effective solution to the escalating issue of wildlife roadkill, focusing on endangered Brazilian species. Leveraging machine learning-based object detection, particularly You Only Look Once (YOLO)-based models, the study evaluated various architectures and introduced data augmentation and transfer learning to improve model training with limited data.
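Transfer learning with a YOLO-family detector typically means starting from pretrained weights and fine-tuning on a custom dataset, which the ultralytics package makes compact. The dataset YAML, image file, and settings below are hypothetical placeholders, not the roadkill study's actual configuration.

```python
# Sketch: transfer learning with an off-the-shelf YOLO model via the ultralytics package.
from ultralytics import YOLO

# Start from COCO-pretrained weights (transfer learning).
model = YOLO("yolov8n.pt")

# Fine-tune on a custom dataset described by a YOLO-format YAML file;
# the trainer applies standard augmentations (flips, HSV jitter, mosaic) by default.
model.train(
    data="wildlife_roadkill.yaml",   # hypothetical dataset definition
    epochs=50,
    imgsz=640,
)

# Run inference on a new image.
results = model("road_scene.jpg")    # hypothetical test image
results[0].show()
```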
Canadian researchers at Western University and the Vector Institute unveil a groundbreaking method employing deep neural networks to predict the memorability of face photographs. Outperforming previous models, this innovation demonstrates near-human consistency and versatility in handling different face shapes, with potential applications spanning social media, advertising, education, security, and entertainment.
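One simple way to frame memorability prediction is as regression from deep image features to a scalar score; the sketch below extracts features with a pretrained CNN and fits a linear readout. The images, scores, and backbone choice are placeholders, not the trained model described in the study.

```python
# Sketch: regressing a memorability score from pretrained CNN features.
import torch
from torch import nn
from torchvision.models import resnet18, ResNet18_Weights

backbone = resnet18(weights=ResNet18_Weights.DEFAULT)
backbone.fc = nn.Identity()                    # keep the 512-d penultimate features
backbone.eval()

images = torch.randn(8, 3, 224, 224)           # placeholder face crops
with torch.no_grad():
    features = backbone(images)                # shape (8, 512)

# A linear readout trained on (features, memorability score) pairs.
readout = nn.Linear(512, 1)
target_scores = torch.rand(8, 1)               # placeholder memorability labels
optimizer = torch.optim.Adam(readout.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(readout(features), target_scores)
    loss.backward()
    optimizer.step()

print("predicted memorability:", readout(features[:2]).squeeze(1).tolist())
```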
Duke University researchers present a groundbreaking dataset of Above-Ground Storage Tanks (ASTs) using high-resolution aerial imagery from the USDA's National Agriculture Imagery Program. The dataset, with meticulous annotations and validation procedures, offers a valuable resource for diverse applications, including risk assessments, capacity estimations, and training object detection algorithms in the realm of remotely sensed imagery and ASTs.
This paper unveils FaceNet-MMAR, an advanced facial recognition model tailored for intelligent university libraries. By optimizing the traditional FaceNet algorithm with a MobileNet backbone, the Mish activation, an attention module, and a receptive field module, the model showcases superior accuracy and efficiency, garnering high satisfaction rates from both teachers and students in real-world applications.
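FaceNet-style recognition rests on learning face embeddings with a metric loss; the sketch below shows a tiny embedding network with Mish activations trained with a triplet loss. The backbone here is a minimal stand-in, not the MobileNet-based FaceNet-MMAR architecture described in the paper.

```python
# Sketch: a FaceNet-style embedding network trained with a triplet loss.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFaceEmbedder(nn.Module):
    def __init__(self, embed_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.Mish(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.Mish(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, embed_dim),
        )

    def forward(self, x):
        # L2-normalize embeddings so distances are comparable across faces.
        return F.normalize(self.net(x), dim=1)

model = TinyFaceEmbedder()
anchor, positive, negative = (torch.randn(4, 3, 112, 112) for _ in range(3))
loss = nn.TripletMarginLoss(margin=0.2)(model(anchor), model(positive), model(negative))
print(f"triplet loss: {loss.item():.3f}")
```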
Researchers introduce machine learning-powered stretchable smart textile gloves, featuring embedded helical sensor yarns and IMUs. Overcoming the limitations of camera-based systems, these gloves provide accurate and washable tracking of complex hand movements, offering potential applications in robotics, sports training, healthcare, and human-computer interaction.
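Mapping a multichannel glove-sensor stream to hand pose is naturally a sequence-to-sequence regression problem; the sketch below uses a small recurrent network for that mapping. Channel and joint counts are illustrative assumptions, not the glove's actual specification.

```python
# Sketch: regressing per-timestep hand joint angles from glove sensor streams.
import torch
import torch.nn as nn

n_sensor_channels = 20   # e.g. strain-yarn + IMU channels (assumption)
n_joint_angles = 21      # e.g. finger joint angles (assumption)

class GlovePoseModel(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.rnn = nn.LSTM(n_sensor_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joint_angles)

    def forward(self, x):            # x: (batch, time, channels)
        out, _ = self.rnn(x)
        return self.head(out)        # per-timestep joint-angle estimates

model = GlovePoseModel()
stream = torch.randn(2, 100, n_sensor_channels)   # 100 time steps of sensor data
print(model(stream).shape)  # torch.Size([2, 100, 21])
```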
This paper delves into the transformative role of attention-based models, including transformers, graph attention networks, and generative pre-trained transformers, in revolutionizing drug development. From molecular screening to property prediction and molecular generation, these models offer precision and interpretability, promising accelerated advancements in pharmaceutical research. Despite challenges in data quality and interpretability, attention-based models are poised to reshape drug discovery, fostering breakthroughs in human health and pharmaceutical science.
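At the core of the transformer-style models discussed above is scaled dot-product attention, softmax(QK^T / sqrt(d_k))V, shown below with plain tensor operations and toy inputs standing in for molecular tokens.

```python
# Sketch: scaled dot-product attention, the building block of transformer models.
import math
import torch

def scaled_dot_product_attention(q, k, v):
    """attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V"""
    d_k = q.shape[-1]
    scores = q @ k.transpose(-2, -1) / math.sqrt(d_k)
    weights = scores.softmax(dim=-1)     # how strongly each query attends to each key
    return weights @ v

# Toy example: 4 "molecular tokens" with 8-dimensional features.
q = k = v = torch.randn(1, 4, 8)
print(scaled_dot_product_attention(q, k, v).shape)  # torch.Size([1, 4, 8])
```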
Researchers from the University of Birmingham unveil a novel 3D edge detection technique using unsupervised learning and clustering. This method, offering automatic parameter selection, competitive performance, and robustness, proves invaluable across diverse applications, including robotics, augmented reality, medical imaging, automotive safety, architecture, and manufacturing, marking a significant leap in computer vision capabilities.
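One simple unsupervised recipe for 3D edge detection, sketched below, scores each point by its local "surface variation" (the smallest eigenvalue of a neighborhood PCA relative to the total) and then clusters the scores into edge and non-edge groups. This illustrates the general idea only; the Birmingham method's actual algorithm and automatic parameter selection differ.

```python
# Sketch: point-cloud edge scoring via local PCA, followed by clustering the scores.
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.cluster import KMeans

# Synthetic point cloud: two planes meeting at a right-angle edge.
rng = np.random.default_rng(0)
plane_a = np.column_stack([rng.uniform(0, 1, 500), rng.uniform(0, 1, 500), np.zeros(500)])
plane_b = np.column_stack([rng.uniform(0, 1, 500), np.zeros(500), rng.uniform(0, 1, 500)])
points = np.vstack([plane_a, plane_b])

# Local PCA: surface variation = smallest eigenvalue / sum of eigenvalues.
nbrs = NearestNeighbors(n_neighbors=20).fit(points)
_, idx = nbrs.kneighbors(points)
scores = np.empty(len(points))
for i, neighborhood in enumerate(idx):
    local = points[neighborhood] - points[neighborhood].mean(axis=0)
    eigvals = np.linalg.eigvalsh(local.T @ local)      # ascending order
    scores[i] = eigvals[0] / eigvals.sum()

# Cluster the scores; the cluster with the higher mean score holds the edge points.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(scores.reshape(-1, 1))
edge_cluster = labels == np.argmax([scores[labels == c].mean() for c in (0, 1)])
print("detected edge points:", int(edge_cluster.sum()))
```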
Researchers delve into the challenges of protein crystallography, discussing the hurdles in crystal production and structure refinement. In their article, they explore the transformative potential of deep learning and artificial neural networks, showcasing how these technologies can revolutionize various aspects of the protein crystallography workflow, from predicting crystallization propensity to refining protein structures. The study highlights the significant improvements in efficiency, accuracy, and automation brought about by deep learning, paving the way for enhanced drug development, biochemistry, and biotechnological applications.