Machine learning is a subfield of artificial intelligence that focuses on developing algorithms and models capable of automatically learning and making predictions or decisions from data without being explicitly programmed. In supervised learning, for example, models are trained on labeled datasets to recognize patterns and make accurate predictions or classifications on new, unseen data.
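As a minimal illustration of that supervised workflow, the sketch below fits a small classifier on a labeled dataset and evaluates it on held-out data; the dataset and model choices are arbitrary examples, not tied to any of the studies summarized here.

```python
# Minimal supervised-learning sketch: fit on labeled data, predict on unseen data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)                       # labeled dataset
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)                # hold out unseen data

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                              # learn patterns from labels
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```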
Researchers explored the integration of the Deep Operator Network (DeepONet) as a robust surrogate modeling method for digital twin (DT) technology in nuclear energy systems. DeepONet's architecture, trained across a range of operational conditions, delivered high accuracy and speed, positioning it as a promising algorithm for real-time predictions in complex particle transport problems.
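DeepONet pairs a "branch" network, which encodes the input function sampled at fixed sensor points, with a "trunk" network, which encodes the query coordinates; the operator output is their inner product. The PyTorch sketch below shows that generic architecture only, not the nuclear-DT surrogate from the study; the layer sizes and sensor count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DeepONet(nn.Module):
    """Generic DeepONet: G(u)(y) ~ <branch(u), trunk(y)> + bias."""
    def __init__(self, n_sensors=100, n_query_dims=1, width=64, p=64):
        super().__init__()
        self.branch = nn.Sequential(                 # encodes u sampled at sensor points
            nn.Linear(n_sensors, width), nn.Tanh(),
            nn.Linear(width, p))
        self.trunk = nn.Sequential(                  # encodes the query coordinate y
            nn.Linear(n_query_dims, width), nn.Tanh(),
            nn.Linear(width, p), nn.Tanh())
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, u_sensors, y):
        b = self.branch(u_sensors)                   # (batch, p)
        t = self.trunk(y)                            # (batch, p)
        return (b * t).sum(dim=-1, keepdim=True) + self.bias

# Forward pass with random data (shapes only, no physics).
model = DeepONet()
u = torch.randn(32, 100)     # 32 input functions sampled at 100 sensor points
y = torch.rand(32, 1)        # one query location per function
print(model(u, y).shape)     # torch.Size([32, 1])
```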
Researchers present a groundbreaking Federated Learning (FL) model for passenger demand forecasting in Smart Cities, focused on Autonomous Taxis (ATs). The FL approach preserves data privacy by allowing ATs in different regions to collaboratively improve their demand forecasting models without directly sharing sensitive passenger information. The proposed model outperforms traditional methods, showing superior accuracy while addressing privacy concerns in the era of smart and autonomous transportation systems.
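The core of most FL schemes of this kind is that clients train locally and share only model parameters, which a server then averages (the FedAvg pattern). The NumPy sketch below illustrates that aggregation step in the abstract; it is not the paper's forecasting model, and the "regions" and linear model are made-up placeholders.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, epochs=5):
    """One client's local training: a linear demand model fit by gradient descent.
    The raw (X, y) never leaves the client; only the updated weights are returned."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(client_weights, client_sizes):
    """Server step: average client weights, weighted by local dataset size."""
    sizes = np.asarray(client_sizes, dtype=float)
    return np.average(np.stack(client_weights), axis=0, weights=sizes)

# Toy round with three "regions" holding private data.
rng = np.random.default_rng(0)
global_w = np.zeros(4)
clients = [(rng.normal(size=(50, 4)), rng.normal(size=50)) for _ in range(3)]
for _ in range(10):                                    # communication rounds
    updates = [local_update(global_w, X, y) for X, y in clients]
    global_w = fed_avg(updates, [len(y) for _, y in clients])
print("aggregated weights:", global_w)
```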
Researchers from multiple countries introduced a groundbreaking method using machine learning (ML) models to predict the effluent soluble chemical oxygen demand (SCOD) in a two-stage anaerobic onsite sanitation system. Outperforming conventional models, the ML approach, led by the artificial neural network (ANN), not only enhances prediction accuracy but also offers simplicity, speed, and reliability in optimizing and controlling wastewater treatment processes, marking a significant leap in sustainable sanitation technology.
This research delves into the functional role of the hippocampal subfield CA3, proposing it as an auto-associative network for encoding memories. The study unveils dual input pathways from the entorhinal cortex and dentate gyrus, presenting a CA3 model resembling a Hopfield-like network. The comprehensive approach combines computational modeling, data analysis, and machine learning to investigate encoding and retrieval processes, shedding light on memory-related functions and computational advantages in complex tasks.
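An auto-associative (Hopfield-like) network stores patterns in a symmetric weight matrix via a Hebbian rule and retrieves a stored memory from a noisy cue by iterating its update rule. The sketch below is a textbook Hopfield network shown for intuition only; it is not the CA3 model developed in the study.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n_patterns = 100, 3
patterns = rng.choice([-1, 1], size=(n_patterns, N))   # memories to store

# Hebbian storage: sum of outer products, with the diagonal zeroed.
W = sum(np.outer(p, p) for p in patterns) / N
np.fill_diagonal(W, 0)

def recall(cue, steps=10):
    """Iteratively update states until the network settles near a stored pattern."""
    s = cue.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

# Corrupt 10% of one stored memory and let the network complete it.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1
print("overlap with original memory:", recall(cue) @ patterns[0] / N)  # close to 1.0
```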
This research explores the performance of three computer vision approaches—CONTRACTIONWAVE, MUSCLEMOTION, and ViKiE—for evaluating contraction kinematics in cardioids and ventricular isolated single cells. The study leverages machine learning algorithms to assess the prediction performance of training datasets generated from each approach, demonstrating ViKiE's higher sensitivity and the overall effectiveness of machine learning in refining cardiac motion analysis.
Researchers from the University of California and the California Institute of Technology present a groundbreaking electronic skin, CARES, featured in Nature Electronics. This wearable seamlessly monitors multiple vital signs and sweat biomarkers related to stress, providing continuous and accurate data during various activities. The study demonstrates its potential in stress assessment and management, offering a promising tool for diverse applications in healthcare, sports, the military, education, and the workplace.
The Mobilise-D consortium unveils a groundbreaking protocol using IMU-based wearables for real-world mobility monitoring across clinical cohorts. Despite achieving accurate walking speed estimates, the study emphasizes context-dependent variations and charts a visionary future, envisioning wearables as integral in ubiquitous remote patient monitoring and personalized interventions, revolutionizing healthcare.
Researchers dissected the intricate relationship between meta-level and statistical features of tabular datasets, unveiling the impactful role of kurtosis, meta-level ratio, and statistical mean on non-tree-based ML algorithms. This study, based on 200 diverse datasets, provides essential insights for optimizing algorithm selection and understanding the nuanced interplay between dataset characteristics and ML performance.
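Statistical meta-features of the kind the study relates to performance, such as per-feature kurtosis and means plus simple shape ratios, can be computed directly from a tabular dataset. The sketch below shows one simple way to do so; the specific quantities are illustrative of the idea, not the paper's exact feature set.

```python
import numpy as np
from scipy.stats import kurtosis

def dataset_meta_features(X):
    """Summarize a numeric tabular dataset (rows = samples, columns = features)."""
    X = np.asarray(X, dtype=float)
    n_rows, n_cols = X.shape
    return {
        "n_rows": n_rows,
        "n_cols": n_cols,
        "rows_to_cols_ratio": n_rows / n_cols,         # one possible "meta-level ratio"
        "mean_of_means": float(np.mean(X.mean(axis=0))),
        "mean_kurtosis": float(np.mean(kurtosis(X, axis=0))),
    }

rng = np.random.default_rng(0)
print(dataset_meta_features(rng.normal(size=(200, 8))))
```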
Scientists introduce an innovative machine-learning model adept at predicting the presence of the tularemia-causing bacterium, Francisella tularensis, in soil samples. Utilizing a two-stage feature-ranking process and hyperparameter optimization, the model showcased high accuracy, offering a cost-effective and rapid tool for detecting this potentially fatal pathogen with broader applications in soil-borne pathogen identification.
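The general pattern described, ranking features in two stages and then optimizing hyperparameters, can be sketched with standard scikit-learn components. The rankers, model, and parameter grid below are assumptions chosen for illustration, not the study's pipeline.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif, RFE
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=30, n_informative=8, random_state=0)

# Stage 1: coarse filter ranking by mutual information.
stage1 = SelectKBest(mutual_info_classif, k=15).fit(X, y)
X_s1 = stage1.transform(X)

# Stage 2: model-based recursive ranking down to the final feature subset.
stage2 = RFE(RandomForestClassifier(random_state=0), n_features_to_select=6).fit(X_s1, y)
X_s2 = stage2.transform(X_s1)

# Hyperparameter optimization on the reduced feature set.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5, 10]},
    cv=5)
grid.fit(X_s2, y)
print("best params:", grid.best_params_, "CV accuracy:", round(grid.best_score_, 3))
```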
Contrary to common concerns, a study published in Nature unveils that the introduction of artificial intelligence, particularly industrial robots, has positively impacted employment in China's manufacturing sector from 2006 to 2020. The research challenges pessimistic views, highlighting increased job creation, enhanced labor productivity, and refined division of labor, with a significant positive effect on female employment, offering valuable insights for global AI employment dynamics.
Researchers from the UK, Germany, USA, and Canada unveiled a groundbreaking quantum-enhanced cybersecurity analytics framework using hybrid quantum machine learning algorithms. The novel approach leverages quantum computing to efficiently detect malicious domain names generated by domain generation algorithms (DGAs), showcasing superior speed, accuracy, and stability compared to traditional methods, marking a significant advancement in proactive cybersecurity analytics.
Researchers present a groundbreaking T-Max-Avg pooling layer for convolutional neural networks (CNNs), introducing adaptability in pooling operations. This innovative approach, demonstrated on benchmark datasets and transfer learning models, outperforms traditional pooling methods, showcasing its potential to enhance feature extraction and classification accuracy in diverse applications within the field of computer vision.
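One common way to blend max and average pooling is to average only the largest k activations in each window. The PyTorch sketch below implements that reading as a stand-in for intuition; the paper's exact T-Max-Avg formulation, including how its threshold parameter enters, may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKAvgPool2d(nn.Module):
    """Pooling that averages the k largest values in each window.
    k=1 reduces to max pooling; k=window size reduces to average pooling."""
    def __init__(self, kernel_size=2, stride=2, k=2):
        super().__init__()
        self.kernel_size, self.stride, self.k = kernel_size, stride, k

    def forward(self, x):
        b, c, h, w = x.shape
        # Extract pooling windows: (b, c*ks*ks, n_windows).
        cols = F.unfold(x, self.kernel_size, stride=self.stride)
        cols = cols.view(b, c, self.kernel_size * self.kernel_size, -1)
        topk = cols.topk(self.k, dim=2).values          # k largest values per window
        out = topk.mean(dim=2)                           # average them
        h_out = (h - self.kernel_size) // self.stride + 1
        w_out = (w - self.kernel_size) // self.stride + 1
        return out.view(b, c, h_out, w_out)

x = torch.randn(1, 3, 8, 8)
print(TopKAvgPool2d(kernel_size=2, stride=2, k=2)(x).shape)  # torch.Size([1, 3, 4, 4])
```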
Researchers from Iran and Turkey showcase the power of machine learning, employing artificial neural networks (ANN) and support vector regression (SVR) to analyze the optical properties of zinc titanate nanocomposite. The study compares these machine learning techniques with the conventional nonlinear regression method, revealing superior accuracy and efficiency in assessing spectroscopic ellipsometry data, offering insights into the nanocomposite's potential applications in diverse fields.
Researchers from Beijing University introduce Oracle-MNIST, a challenging dataset of 30,222 ancient Chinese characters, providing a realistic benchmark for machine learning (ML) algorithms. The Oracle-MNIST dataset, derived from oracle-bone inscriptions of the Shang Dynasty, surpasses traditional MNIST datasets in complexity, serving as a valuable tool not only for advancing ML research but also for enhancing the study of ancient literature, archaeology, and cultural heritage preservation.
Researchers propose a groundbreaking data-driven approach, employing advanced machine learning models like LSTM and statistical models, to predict the All India Summer Monsoon Rainfall (AISMR) in 2023. Outperforming conventional physical models, the LSTM model, incorporating Indian Ocean Dipole (IOD) and El Niño-Southern Oscillation (ENSO) data, demonstrates a remarkable 61.9% forecast success rate, highlighting the potential for transitioning from traditional methods to more accurate and reliable data-driven forecasting systems.
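A data-driven forecaster of this type feeds lagged climate indices (for example, monthly IOD and ENSO values) into an LSTM that regresses the seasonal rainfall total. The Keras sketch below shows that shape of model on synthetic data; the window length, layer sizes, and inputs are illustrative assumptions, not the study's configuration.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Synthetic stand-in: 40 "years", each with 12 months of two indices (IOD, ENSO),
# predicting one seasonal rainfall value. Real inputs would come from climate records.
rng = np.random.default_rng(0)
X = rng.normal(size=(40, 12, 2)).astype("float32")       # (samples, timesteps, features)
y = (X[:, :, 0].mean(axis=1) - 0.5 * X[:, :, 1].mean(axis=1)).astype("float32")[:, None]

model = keras.Sequential([
    layers.Input(shape=(12, 2)),
    layers.LSTM(32),                  # summarizes the 12-month index sequence
    layers.Dense(1),                  # regresses the seasonal rainfall value
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=8, verbose=0)
print("predicted rainfall anomaly for one sample:", model.predict(X[:1], verbose=0)[0, 0])
```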
Researchers employ advanced intelligent systems to analyze extensive traffic data on northern Iranian suburban roads, revolutionizing traffic state prediction. By integrating principal component analysis, genetic algorithms, and cyclic features, coupled with machine learning models like LSTM and SVM, the study achieves a significant boost in prediction accuracy and efficiency, offering valuable insights for optimizing transportation management and paving the way for advancements in traffic prediction methodologies.
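Cyclic features encode periodic quantities such as hour of day so that 23:00 and 00:00 sit close together in feature space, typically as sine/cosine pairs, and PCA then compresses the feature block before it reaches predictors such as LSTM or SVM. The snippet below shows those two preprocessing steps in isolation; it is not the study's full pipeline, which also involves genetic-algorithm optimization.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
hours = rng.integers(0, 24, size=500)                  # hour-of-day for each traffic record
raw = rng.normal(size=(500, 10))                       # other raw traffic measurements

# Cyclic encoding: hour 23 and hour 0 become neighbours on the unit circle.
hour_sin = np.sin(2 * np.pi * hours / 24)
hour_cos = np.cos(2 * np.pi * hours / 24)
features = np.column_stack([raw, hour_sin, hour_cos])

# PCA keeps the components explaining 95% of the variance before modelling.
reduced = PCA(n_components=0.95).fit_transform(features)
print("feature matrix reduced from", features.shape[1], "to", reduced.shape[1], "columns")
```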
Researchers unveil LGN, a groundbreaking graph neural network (GNN)-based fusion model, addressing the limitations of existing protein-ligand binding affinity prediction methods. The study demonstrates the model's superiority, emphasizing the importance of incorporating ligand information and evaluating stability and performance for advancing drug discovery in computational biology.
This study explores the acceptance of chatbots among insurance policyholders. Using the Technology Acceptance Model (TAM), the research emphasizes the crucial role of trust in shaping attitudes and behavioral intentions toward chatbots, providing valuable insights for the insurance industry to enhance customer acceptance and effective implementation of conversational agents.
Researchers proposed a cost-effective solution to address the escalating issue of wildlife roadkill, focusing on Brazilian endangered species. Leveraging machine learning-based object detection, particularly You Only Look Once (YOLO)-based models, the study evaluated various architectures, introducing data augmentation and transfer learning to enhance model training with limited data.
Researchers in the USA delve into the intersection of machine learning and climate-induced health impacts. The review identifies the potential of ML algorithms in predicting health outcomes from extreme weather events, emphasizing feasibility, promising results, and ethical considerations, paving the way for proactive healthcare and policy decisions in the face of climate change.