A neural network is a computational model inspired by the structure and function of the human brain. It consists of interconnected artificial neurons that process and transmit information, enabling machine learning tasks such as pattern recognition, classification, and regression by learning from data.
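To make the definition concrete, here is a minimal sketch of a feedforward network's forward pass: two inputs flow through two tanh hidden units to a single output. The weights are illustrative, untrained values, not any particular published model.

```python
import math

# Hypothetical toy network: 2 inputs -> 2 tanh hidden units -> 1 linear output.
W1 = [[0.5, -0.4], [0.3, 0.8]]   # hidden-layer weights (one row per neuron)
b1 = [0.1, -0.2]                 # hidden-layer biases
W2 = [0.7, -0.6]                 # output-layer weights
b2 = 0.05                        # output-layer bias

def forward(x):
    # Each hidden neuron computes tanh(weighted sum of inputs + bias).
    hidden = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
              for row, b in zip(W1, b1)]
    # The output neuron is a weighted sum of the hidden activations.
    return sum(w * h for w, h in zip(W2, hidden)) + b2

print(forward([1.0, 0.5]))
```

Training would adjust `W1`, `b1`, `W2`, and `b2` (typically by gradient descent on a loss) so that outputs match the data; this sketch shows only the inference step.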
Researchers introduce a groundbreaking Optical Tomography method employing Multi-Core Fiber-Optic Cell Rotation (MCF-OCR). This innovative system overcomes limitations in traditional optical tomography by utilizing an AI-driven reconstruction workflow, demonstrating superior accuracy in 3D reconstructions of live cells. The MCF-OCR system offers precise control over cell rotation, while the autonomous reconstruction workflow, powered by computer vision technologies, significantly enhances efficiency and accuracy in capturing detailed cellular morphology.
Researchers assess critical forest cover shortages using Sentinel-2 satellite imagery and sophisticated algorithms. Artificial Neural Network (ANN) and Random Forest (RF) algorithms showcase exceptional accuracy, achieving 97.75% and 96.98% overall accuracy, respectively, highlighting their potential for precise land cover classification. The study recommends integrating hyperspectral satellite imagery for enhanced accuracy and explores deep learning algorithms for further advancements in forest cover assessment.
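As a rough illustration of how a random forest classifies pixels by majority vote, the sketch below trains an ensemble of depth-1 trees (stumps) on synthetic two-band "reflectance" features. The data, features, and stump rule are all invented stand-ins, far simpler than the study's Sentinel-2 pipeline.

```python
import random

# Synthetic two-band pixels: class 0 (e.g. water) clusters near (1, 1),
# class 1 (e.g. forest) near (5, 5). Purely illustrative data.
random.seed(42)
data = ([((1 + random.gauss(0, 0.3), 1 + random.gauss(0, 0.3)), 0) for _ in range(30)]
        + [((5 + random.gauss(0, 0.3), 5 + random.gauss(0, 0.3)), 1) for _ in range(30)])

def train_stump(sample):
    """Fit a depth-1 tree: pick a random band, threshold midway between class means."""
    feat = random.randrange(2)
    m0 = [x[feat] for (x, y) in sample if y == 0]
    m1 = [x[feat] for (x, y) in sample if y == 1]
    thr = (sum(m0) / len(m0) + sum(m1) / len(m1)) / 2
    return feat, thr

def forest_predict(forest, x):
    # Majority vote across all stumps.
    votes = sum(1 for (feat, thr) in forest if x[feat] > thr)
    return 1 if votes > len(forest) / 2 else 0

# Bagging: each stump trains on a bootstrap resample of the data.
forest = [train_stump([random.choice(data) for _ in range(len(data))])
          for _ in range(25)]
accuracy = sum(forest_predict(forest, x) == y for (x, y) in data) / len(data)
print(f"toy forest accuracy: {accuracy:.2f}")
```

Real land-cover work would use many spectral bands, deeper trees, and a held-out test set; the vote-over-bootstrapped-trees structure is the part this sketch preserves.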
This work presents a novel Graph Neural Network (GNN) method for swiftly identifying critical road segments post-disaster, aiding efficient recovery and resilience planning. Overcoming computational challenges, the GNN-based edge ranking framework proves effective in large-scale networks, offering accuracy and adaptability. This approach showcases versatility, enabling real-time analysis and facilitating proactive measures for reinforcing critical infrastructure against future disruptions.
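A classical baseline for the edge-criticality notion the GNN learns to approximate is edge betweenness: count how many origin-destination shortest paths traverse each road segment. The toy network and BFS below are illustrative, not the paper's method, which scales this kind of ranking to large networks.

```python
from collections import deque
from itertools import combinations

# Hypothetical road network as an undirected adjacency list.
roads = {
    "A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"],
    "D": ["B", "C", "E"], "E": ["D", "F"], "F": ["E"],
}

def shortest_path(graph, src, dst):
    """Breadth-first search; returns one shortest path as a list of nodes."""
    queue, seen = deque([[src]]), {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Tally how many origin-destination shortest paths use each segment.
counts = {}
for src, dst in combinations(roads, 2):
    path = shortest_path(roads, src, dst)
    for u, v in zip(path, path[1:]):
        edge = tuple(sorted((u, v)))
        counts[edge] = counts.get(edge, 0) + 1

most_critical = max(counts, key=counts.get)
print(most_critical, counts[most_critical])
```

In this toy layout the segment D-E is the sole bridge to nodes E and F, so it ranks most critical; exact all-pairs counting like this is exactly what becomes too expensive on city-scale networks, motivating a learned ranking.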
Researchers present ML-SEISMIC, a groundbreaking physics-informed neural network (PINN) that revolutionizes stress field estimation in Australia. The method autonomously integrates sparse stress orientation data with an elastic model, showcasing its potential for comprehensive stress and displacement field predictions, with implications for geological applications including earthquake modeling, energy production, and environmental assessments.
Researchers focus on improving pedestrian safety within intelligent cities using AI, specifically a support vector machine (SVM). Leveraging machine learning and authentic pedestrian behavior data, the SVM model outperforms others in predicting crossing probabilities and speeds, demonstrating its potential for enhancing road traffic safety and integrating with intelligent traffic simulations. The study emphasizes the significance of SVMs in accurately predicting real-time pedestrian behaviors, contributing to refined decision models for safer road designs.
This study introduces an AI-based system predicting gait quality progression. Leveraging kinematic data from 734 patients with gait disorders, the researchers explore signal and image-based approaches, achieving promising results with neural networks. The study marks a pioneering application of AI in predicting gait variations, offering insights into future advancements in this critical domain of healthcare.
This study proposes an innovative approach to enhance road safety by introducing a CNN-LSTM model for driver sleepiness detection. Combining facial movement analysis and deep learning, the model outperforms existing methods, achieving over 98% accuracy in real-world scenarios, paving the way for effective implementation in smart vehicles to proactively prevent accidents caused by driver fatigue.
Researchers present an AI platform, Stochastic OnsagerNet (S-OnsagerNet), that autonomously learns clear thermodynamic descriptions of intricate non-equilibrium systems from microscopic trajectory observations. This innovative approach, rooted in the generalized Onsager principle, enables the interpretation of complex phenomena, showcasing its effectiveness in understanding polymer stretching dynamics and demonstrating potential applications in diverse dissipative processes like glassy systems and protein folding.
This paper unveils the Elderly and Visually Impaired Human Activity Monitoring (EV-HAM) system, a pioneering solution utilizing artificial intelligence, digital twins, and Wi-Sense for accurate activity recognition. Employing Deep Hybrid Convolutional Neural Networks on Wi-Fi Channel State Information data, the system achieves a remarkable 99% accuracy in identifying micro-Doppler fingerprints of activities, presenting a revolutionary advancement in elderly and visually impaired care through continuous monitoring and crisis intervention.
Researchers unveil a pioneering approach using a convolutional neural network (CNN) to analyze the link between extreme precipitation patterns and climate shifts. This CNN-based method, trained with data from 10,000 precipitation stations, overcomes limitations of traditional analyses, providing high-resolution maps and nuanced insights into the sensitivity of extreme precipitation to climate change for North America, Europe, Australia, and New Zealand.
Researchers advocate for employing artificial neural networks (ANNs) as "artificial physics engines" to compute complex inverse dynamics in human arm and hand movements. The study showcases ANNs' potential in enhancing assistive technologies, such as prosthetics and exoskeletons, offering a detailed, customizable, and reactive approach for more natural movement in individuals with impaired motor function.
Researchers unveil RVTALL, a groundbreaking multimodal dataset for contactless speech recognition. Integrating data from UWB and mmWave radars, depth cameras, lasers, and audio-visual sources, the dataset aids in exploring non-invasive speech analysis. The study demonstrates applications in silent speech recognition, speech enhancement, analysis, and synthesis, though it acknowledges limitations in sample size and diversity. The dataset stands as a robust tool for advancing research in speech-related technologies.
The RefCap model pioneers visual-linguistic multi-modality in image captioning, incorporating user-specified object keywords. Comprising Visual Grounding, Referent Object Selection, and Image Captioning modules, the model demonstrates efficacy in producing tailored captions aligned with users' specific interests, validated across datasets like RefCOCO and COCO captioning.
Researchers proposed a hybrid optimization approach, combining Artificial Neural Network (ANN) and Genetic Algorithm (GA), to enhance plastic injection molding. Addressing quality, production efficiency, and sustainability, the method demonstrated effectiveness in achieving global multi-objective optimization, providing a valuable tool for smart, sustainable, and economically efficient production processes.
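The genetic-algorithm half of such a hybrid can be sketched compactly: evolve candidate process settings against a cheap surrogate objective standing in for the trained ANN. The two parameters, the quadratic cost, and the GA settings below are all hypothetical placeholders, not the study's actual model.

```python
import random

random.seed(0)

# Stand-in for a trained ANN surrogate: maps two scaled process parameters
# (say, melt temperature and injection pressure in [0, 1]) to a cost to
# minimize. A real workflow would query the neural network here.
def surrogate_cost(temp, pressure):
    return (temp - 0.6) ** 2 + (pressure - 0.3) ** 2

def mutate(ind, rate=0.1):
    # Gaussian jitter, clipped back into the valid parameter range.
    return tuple(min(1.0, max(0.0, g + random.gauss(0, rate))) for g in ind)

def crossover(a, b):
    # Uniform crossover: each gene comes from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

# Simple elitist GA: keep the best third, breed the rest.
pop = [(random.random(), random.random()) for _ in range(30)]
for _ in range(60):
    pop.sort(key=lambda ind: surrogate_cost(*ind))
    parents = pop[:10]
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

best = min(pop, key=lambda ind: surrogate_cost(*ind))
print(best, surrogate_cost(*best))
```

The population converges toward the surrogate's optimum near (0.6, 0.3); a multi-objective version would rank candidates by Pareto dominance across quality, efficiency, and sustainability criteria rather than a single scalar cost.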
Researchers from Nanjing University of Science and Technology present a novel scheme, Spatial Variation-Dependent Verification (SVV), utilizing convolutional neural networks and textural features for handwriting identification and verification. The scheme outperforms existing methods, achieving 95.587% accuracy, providing a robust solution for secure handwriting recognition and authentication in diverse applications, including security, forensics, banking, education, and healthcare.
Researchers present a novel approach, the Dictionary-Based Matching Graph Network (DBGN), for Biomedical Named Entity Recognition (BioNER). By incorporating biomedical dictionaries and utilizing BiLSTM and BioBERT encoders, DBGN outperforms existing models across various biomedical datasets, demonstrating significant advancements in entity recognition with improved efficiency.
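The dictionary-matching ingredient of such a system can be illustrated on its own: scan text for the longest spans that appear in a biomedical lexicon. The three-entry lexicon below is an invented stand-in, and this sketch covers only the lookup step, not the graph network or the BiLSTM/BioBERT encoders.

```python
# Tiny illustrative lexicon mapping surface forms to entity types.
lexicon = {"breast cancer": "Disease", "tp53": "Gene", "aspirin": "Chemical"}

def dictionary_match(text):
    """Greedy longest-match scan of the token sequence against the lexicon."""
    tokens = text.lower().split()
    entities, i = [], 0
    while i < len(tokens):
        for j in range(len(tokens), i, -1):   # try longest spans first
            span = " ".join(tokens[i:j])
            if span in lexicon:
                entities.append((span, lexicon[span]))
                i = j                          # resume after the match
                break
        else:
            i += 1                             # no match starting here
    return entities

print(dictionary_match("Mutations in TP53 are linked to breast cancer"))
```

Pure dictionary lookup misses unseen variants and ambiguous mentions, which is why the paper feeds such matches into a learned model rather than using them as final predictions.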
This article delves into bolstering Internet of Things (IoT) security, specifically countering botnet attacks that jeopardize IoT ecosystems. Employing tree-based algorithms, including Decision Trees, Random Forest, and boosting techniques, the researchers conduct a thorough empirical analysis, highlighting Random Forest's standout multi-class classification accuracy and superior computational efficiency.
This paper introduces SCANN, an interpretable deep learning architecture with attention mechanisms tailored for comprehending material structures and predicting properties. Utilizing iterative learning and global attention scores, SCANN excels in capturing complex structure-property relationships, outperforming traditional methods. The study demonstrates SCANN's robust predictive capabilities across diverse datasets, emphasizing its interpretative capacity to unveil how material properties correlate with specific structural features, thereby guiding future advancements in material design and discovery.
The article presents a groundbreaking approach for identifying sandflies, crucial vectors for various pathogens, using Wing Interferential Patterns (WIPs) and deep learning. Traditional methods are laborious, and this non-invasive technique offers efficient sandfly taxonomy, especially under field conditions. The study demonstrates exceptional accuracy in taxonomic classification at various levels, showcasing the potential of WIPs and deep learning for advancing entomological surveys in medical vector identification.
This article introduces an AI-based solution for real-time detection of safety helmets and face masks on municipal construction sites. The enhanced YOLOv5s model, leveraging ShuffleNetv2 and ECA mechanisms, demonstrates a 4.3% increase in mean Average Precision with significant resource savings. The study emphasizes the potential of AI-powered systems to improve worker safety, reduce accidents, and enhance efficiency in urban construction projects.