K-Nearest Neighbor (KNN) is a simple, non-parametric machine learning algorithm used for classification and regression tasks. It determines the class or value of a data point by considering the majority class or average value of its k nearest neighbors in the feature space.
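The majority-vote mechanism described above can be sketched in a few lines of plain Python. This is a minimal illustration, not code from any of the studies below; the toy dataset and the choice of k=3 are assumptions made for the example.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest
    training points, using Euclidean distance in feature space.
    `train` is a list of (feature_vector, label) pairs."""
    neighbors = sorted(
        train,
        key=lambda pair: math.dist(pair[0], query)
    )[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Toy 2-D example: two well-separated clusters, classes "A" and "B".
train = [
    ((1.0, 1.0), "A"), ((1.5, 2.0), "A"), ((2.0, 1.5), "A"),
    ((6.0, 6.0), "B"), ((6.5, 5.5), "B"), ((7.0, 6.5), "B"),
]
print(knn_predict(train, (1.2, 1.4)))  # → A
print(knn_predict(train, (6.2, 6.1)))  # → B
```

For regression, the same neighbor search applies, with the majority vote replaced by the average of the neighbors' values.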
This study introduces an AI-driven approach to optimize tunnel boring machine (TBM) performance in soft ground conditions by predicting jack speed and torque settings. By synchronizing operator decisions with machine data and utilizing machine learning models, the research demonstrates significant improvements in TBM operational efficiency, paving the way for enhanced tunneling projects.
Researchers present an innovative ML-based approach, leveraging GANs for synthetic data generation and LSTM for temporal patterns, to tackle data scarcity and temporal dependencies in predictive maintenance. Despite challenges, their architecture achieves promising results, underlining AI's potential in enhancing maintenance practices.
Researchers from Xinjiang University introduced a groundbreaking approach, BFDGE, for detecting bearing faults using ensemble learning and graph neural networks. This method, demonstrated on public datasets, showcases superior accuracy and robustness, paving the way for enhanced safety and efficiency in various industries reliant on rotating machinery.
In a recent paper published in Scientific Reports, researchers addressed the challenges of accurately diagnosing migraine headaches using machine learning (ML) techniques. Leveraging state-of-the-art ML algorithms such as support vector machine (SVM), k-nearest neighbors (KNN), random forest (RF), decision tree (DT), and deep neural networks (DNN), the study demonstrated remarkable effectiveness in classifying seven different types of migraines.
Researchers dissected the intricate relationship between meta-level and statistical features of tabular datasets, unveiling the impactful role of kurtosis, meta-level ratio, and statistical mean on non-tree-based ML algorithms. This study, based on 200 diverse datasets, provides essential insights for optimizing algorithm selection and understanding the nuanced interplay between dataset characteristics and ML performance.
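Of the dataset statistics named above, kurtosis (the fourth standardized moment, which measures how heavy-tailed a feature's distribution is) can be computed with the standard library alone. The sketch below uses illustrative sample values, not data from the 200-dataset study.

```python
import statistics

def excess_kurtosis(xs):
    """Sample excess kurtosis: fourth standardized moment minus 3,
    so a normal distribution scores approximately 0."""
    n = len(xs)
    mean = statistics.fmean(xs)
    var = sum((x - mean) ** 2 for x in xs) / n   # population variance
    m4 = sum((x - mean) ** 4 for x in xs) / n    # fourth central moment
    return m4 / var ** 2 - 3

# A flat, uniform-like feature column has negative excess kurtosis
# (lighter tails than a normal distribution).
uniform_like = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
print(round(excess_kurtosis(uniform_like), 3))  # → -1.224
```

Heavy-tailed features (positive excess kurtosis) are one of the characteristics the study links to degraded performance in non-tree-based models.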
Researchers from the UK, Germany, USA, and Canada unveiled a groundbreaking quantum-enhanced cybersecurity analytics framework using hybrid quantum machine learning algorithms. The novel approach leverages quantum computing to efficiently detect malicious domain names generated by domain generation algorithms (DGAs), showcasing superior speed, accuracy, and stability compared to traditional methods, marking a significant advancement in proactive cybersecurity analytics.
Researchers from Beijing University introduce Oracle-MNIST, a challenging dataset of 30,222 ancient Chinese characters, providing a realistic benchmark for machine learning (ML) algorithms. The Oracle-MNIST dataset, derived from oracle-bone inscriptions of the Shang Dynasty, surpasses traditional MNIST datasets in complexity, serving as a valuable tool not only for advancing ML research but also for enhancing the study of ancient literature, archaeology, and cultural heritage preservation.
Researchers from the University of Tuscia, Italy, introduced a machine learning (ML)-based classification model to offer tailored support tools and learning strategies for university students with dyslexia. The model, trained on a self-evaluation questionnaire from over 1200 dyslexic students, demonstrated high accuracy in predicting effective methodologies, providing a personalized approach to enhance learning outcomes and well-being. The study emphasizes the potential applications in education, psychology, and tool/strategy development, encouraging future research directions and student involvement in the design process.
This research introduces FakeStack, a powerful deep learning model combining BERT embeddings, Convolutional Neural Network (CNN), and Long Short-Term Memory (LSTM) for accurate fake news detection. Trained on diverse datasets, FakeStack outperforms benchmarks and alternative models across multiple metrics, demonstrating its efficacy in combating false news impact on public opinion.
This research delves into the synergy of Artificial Intelligence (AI) and Internet of Things (IoT) security. The study evaluates and compares various AI algorithms, including machine learning (ML) and deep learning (DL), for classifying and detecting IoT attacks. It introduces a novel taxonomy of AI methodologies for IoT security and identifies LSTM as the top-performing algorithm, emphasizing its potential applications in diverse fields.
Researchers explored the application of artificial intelligence (AI), specifically long short-term memory (LSTM) and artificial neural networks (ANN), in assessing and predicting surface water quality. The study, conducted on the Ashwini River in Himachal Pradesh, India, showcased the effectiveness of LSTM models in accurate water quality prediction, emphasizing the potential of AI in resource management and environmental protection.
This review article discusses the evolution of machine learning applications in weather and climate forecasting. It outlines the historical transition from statistical methods to physical models and the recent emergence of machine learning techniques. The article categorizes machine learning applications in climate prediction, covering both short-term weather forecasts and medium-to-long-term climate predictions.
Researchers have introduced an innovative IoT-based system for recognizing negative emotions, such as disgust, fear, and sadness, using multimodal biosignal data from wearable devices. This system combines EEG signals and physiological data from a smart band, processed through machine learning, to achieve high accuracy in emotion recognition.
This research paper discusses the application of machine learning algorithms to predict the Water Quality Index (WQI) in groundwater in Sakrand, Pakistan. The study collected data samples, applied various classifiers, and found that the linear Support Vector Machine (SVM) model demonstrated the highest prediction accuracy for both raw and normalized data, with potential applications in assessing groundwater quality for various purposes, including drinking and irrigation.
This study delves into the intricate relationship between human emotions and body motions, using a controlled lab experiment to simulate real-world interactions. Researchers successfully induced emotions in participants and employed machine learning models to classify emotions based on a comprehensive range of motion parameters, shedding light on the potential for emotion recognition through naturalistic body expressions.
Researchers have harnessed the power of artificial intelligence to predict chloride resistance in concrete compositions, a key factor in enhancing structural durability and preventing corrosion. By leveraging machine learning techniques, they created a reliable model that can forecast chloride migration coefficients, reducing the need for labor-intensive and time-consuming experimentation, and paving the way for more cost-effective and sustainable construction practices.
This study delves into the world of radiomics, evaluating the impact of different methods and algorithms on model performance across ten diverse datasets. The research highlights the key factors influencing radiomic performance and provides insights into optimal combinations of algorithms for stable results, emphasizing the importance of careful modeling decisions in this field.
This review explores how fuzzy logic, neural networks, and optimization algorithms hold immense promise in predicting, diagnosing, and detecting CVD. By handling complex medical uncertainties and delivering accurate and affordable insights, soft computing has the potential to transform cardiovascular care, especially in resource-limited settings, and significantly improve clinical outcomes.
Researchers explore the integration of AI and psychometric testing to measure emotional intelligence (EI) using eye-tracking technology. By employing machine learning models, the study assesses the accuracy of EI measurements and uncovers predictive eye-tracking features. The findings reveal the potential of AI to achieve high accuracy with minimal eye-tracking data, paving the way for improved measurement quality and practical applications in fields like management and education.
Researchers highlight the role of solid biofuels and IoT technologies in smart city development. They introduce an IoT-based method, Solid Biofuel Classification using Sailfish Optimizer Hybrid Deep Learning (SBFC-SFOHDL), which leverages deep learning and optimization techniques for accurate biofuel classification.