Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
Researchers explore the application of AI and ML in volatility forecasting, revealing their promise in improving accuracy and informing financial decisions. The review underscores the need for further exploration in explainable AI, uncertainty quantification, and alternative data sources to advance forecasting capabilities.
Despite expectations, incorrect AI-generated advice consistently led to performance decrements in personnel selection tasks, indicating overreliance. While both advice source and explainability influenced participants' reliance on inaccurate guidance, the effectiveness of visual explanations in preventing overreliance remained inconclusive, highlighting the complexity of human-AI interaction and the need for robust regulatory standards in HRM.
This study in Nature Medicine introduces MEDIC, an AI system designed to mitigate medication direction errors in pharmacies. Trained on expert-annotated data, MEDIC prioritizes precise communication of essential clinical components, reducing near-miss events and highlighting the potential of AI to enhance the accuracy and efficiency of pharmacy operations.
This study introduces an AI-driven approach to optimize tunnel boring machine (TBM) performance in soft ground conditions by predicting jack speed and torque settings. By synchronizing operator decisions with machine data and utilizing machine learning models, the research demonstrates significant improvements in TBM operational efficiency, paving the way for enhanced tunneling projects.
Researchers harness convolutional neural networks (CNNs) to recognize Shen embroidery, achieving 98.45% accuracy. By employing transfer learning and enhancing MobileNet V1 with spatial pyramid pooling, they provide crucial technical support for safeguarding this cultural art form.
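The spatial pyramid pooling step mentioned above can be illustrated in isolation. The sketch below is a minimal NumPy version of the general SPP idea (max-pooling a feature map over grids of several sizes and concatenating the results), not the study's MobileNet V1 implementation; the channel count, grid levels, and input sizes are illustrative.

```python
import numpy as np

def spatial_pyramid_pool(fmap, levels=(1, 2, 4)):
    """Max-pool a C x H x W feature map over 1x1, 2x2, and 4x4 grids and
    concatenate the results, yielding a fixed-length vector for any H, W."""
    c, h, w = fmap.shape
    out = []
    for n in levels:
        # Split height and width into n roughly equal bins.
        hs = np.linspace(0, h, n + 1).astype(int)
        ws = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                cell = fmap[:, hs[i]:hs[i + 1], ws[j]:ws[j + 1]]
                out.append(cell.max(axis=(1, 2)))  # per-channel max in the cell
    return np.concatenate(out)

# Two differently sized feature maps map to the same output length:
vec = spatial_pyramid_pool(np.random.default_rng(0).normal(size=(8, 13, 9)))
# length = 8 channels * (1 + 4 + 16) cells = 168
```

This fixed output length is what lets a network with fully connected layers accept variable-sized inputs.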
Researchers introduce BS-SCRM, a novel method combining blockchain and swarm intelligence for secure clustering routing in WSNs, addressing energy efficiency and security challenges. Simulation results demonstrate superior performance in network lifetime, energy consumption, and security compared to existing methods, offering promise for diverse applications from IoT to healthcare.
Researchers introduced WindSeer, a groundbreaking approach utilizing deep neural networks for real-time, high-resolution wind predictions. By addressing the limitations of current weather models and leveraging convolutional neural network architecture, WindSeer offers accurate wind field predictions over diverse terrains without the need for extensive data, promising safer and more efficient operations in aviation and other fields.
Researchers in a recent Nature Communications paper introduced a novel autoencoding anomaly detection method utilizing deep decision trees (DT) deployed on field programmable gate arrays (FPGA) for real-time detection of rare phenomena at the Large Hadron Collider (LHC).
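The core principle behind autoencoding anomaly detection is that a model trained to reconstruct common events reconstructs rare ones poorly. The sketch below illustrates that principle with a linear (PCA-style) autoencoder on synthetic data; it is not the paper's decision-tree or FPGA implementation, and all dimensions and thresholds are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Normal" events lie near a low-dimensional subspace of the feature space.
normal = rng.normal(size=(500, 8)) @ rng.normal(size=(8, 32))
anomaly = rng.normal(size=(5, 32)) * 5.0  # rare, off-manifold events

# Fit a linear autoencoder: encode/decode with the top-k principal components.
k = 8
mean = normal.mean(axis=0)
_, _, vt = np.linalg.svd(normal - mean, full_matrices=False)
components = vt[:k]

def reconstruction_error(x):
    z = (x - mean) @ components.T      # encode to k dimensions
    x_hat = z @ components + mean      # decode back to 32 dimensions
    return np.sum((x - x_hat) ** 2, axis=1)

# Threshold set on the training distribution; events above it are flagged.
threshold = np.percentile(reconstruction_error(normal), 99)
flags = reconstruction_error(anomaly) > threshold
```

In the LHC setting the same scoring must run within a microsecond-scale trigger budget, which motivates the paper's move from neural networks to decision trees on FPGAs.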
Exploring the financial challenges faced by expanding enterprises like the ZH group, researchers present a strategic financial control strategy incorporating intelligent algorithms. Through practical implementation and theoretical analysis, they highlight the efficacy of back-propagation neural networks and particle swarm optimization in enhancing decision-making and mitigating financial risks.
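Particle swarm optimization, one of the techniques cited, can be sketched in a few lines: candidate solutions ("particles") move through the search space, pulled toward their own best-known position and the swarm's best. The objective, coefficients, and bounds below are illustrative, not taken from the study.

```python
import numpy as np

def pso(f, dim=2, n_particles=30, iters=200, seed=0):
    """Minimal particle swarm optimization minimizing f over R^dim."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n_particles, dim))  # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()                                 # per-particle bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()             # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # Inertia 0.7, cognitive and social coefficients 1.5 (common defaults).
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

# Example: minimize a shifted sphere whose optimum is at (3, -2).
best_x, best_val = pso(lambda p: (p[0] - 3) ** 2 + (p[1] + 2) ** 2)
```

In a financial-control setting, the objective would instead score a candidate allocation or control policy against risk and return targets.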
Research led by Oregon State University and the U.S. Forest Service indicates that artificial intelligence can effectively analyze acoustic data to monitor the elusive marbled murrelet, offering a promising tool for tracking this threatened seabird's population.
Researchers investigate jumbo drill rate prediction in underground mining using regression and machine learning methods, highlighting the effectiveness of support vector regression (SVR) with rock mass drillability index (RDi) for accuracy. ML outperforms regression, offering insights into drilling efficiency and rock mass characteristics.
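The SVR workflow in such a study follows a standard pattern: scale the inputs, fit a kernel SVR, and score on held-out data. The sketch below uses synthetic stand-in data (a hypothetical drillability index with a made-up linear trend plus noise), since the study's RDi measurements are not reproduced here.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(1)

# Synthetic stand-in for rock-mass features: hypothetical RDi values and a
# fabricated drill-rate relationship (drill rate falls as RDi rises).
X = rng.uniform(20, 80, size=(200, 1))
y = 120 - 1.1 * X[:, 0] + rng.normal(0, 3, size=200)

# Scaling matters for RBF kernels; a pipeline keeps it tied to the model.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=100, epsilon=0.5))
model.fit(X[:150], y[:150])
r2 = model.score(X[50:], y[50:]) if False else model.score(X[150:], y[150:])
```

The held-out R^2 is the kind of metric such comparisons between regression and ML methods report.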
Researchers in Germany introduce a Word2vec-based NLP method to automatically infer ICD-10 codes from German ophthalmology records, offering a solution to the challenges of manual coding and variable natural language. Results show high accuracy, with potential for streamlining healthcare record analysis.
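The matching step in a Word2vec-based coder can be sketched independently of embedding training: average a record's word vectors, then pick the ICD code whose representative vector is nearest by cosine similarity. The toy vectors, terms, and code assignments below are invented for illustration; the study trains real embeddings on German ophthalmology notes.

```python
import numpy as np

# Toy word vectors standing in for trained Word2vec embeddings.
vecs = {
    "cataract": np.array([0.9, 0.1, 0.0]),
    "lens":     np.array([0.8, 0.2, 0.1]),
    "glaucoma": np.array([0.1, 0.9, 0.1]),
    "pressure": np.array([0.2, 0.8, 0.2]),
}

# Hypothetical code centroids built from representative terms.
code_centroids = {
    "H25": (vecs["cataract"] + vecs["lens"]) / 2,      # senile cataract
    "H40": (vecs["glaucoma"] + vecs["pressure"]) / 2,  # glaucoma
}

def infer_code(tokens):
    """Average the record's word vectors, then return the code whose
    centroid has the highest cosine similarity to that average."""
    doc = np.mean([vecs[t] for t in tokens if t in vecs], axis=0)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max(code_centroids, key=lambda c: cos(doc, code_centroids[c]))

code = infer_code(["lens", "cataract"])
```

Because embeddings place spelling variants and synonyms near each other, this kind of matching tolerates the variable natural language that defeats exact keyword rules.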
Researchers investigated the utility of AI-driven analysis of body composition from CT scans to predict mortality in patients undergoing transcatheter aortic valve implantation (TAVI). Using the AutoMATiCA neural network, they extracted parameters such as skeletal muscle index (SMI) and adipose tissue density from CT scans of 866 patients.
Researchers integrated gradient quantization (GQ) into DenseNet architecture to improve image recognition (IR). By optimizing feature reuse and introducing GQ for parallel training, they achieved superior accuracy and accelerated training speed, overcoming communication bottlenecks.
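Gradient quantization attacks the communication bottleneck by sending low-bit integers plus a single scale factor instead of full-precision gradients. The sketch below shows uniform symmetric 8-bit quantization in NumPy as a generic illustration of the idea; the bit width and tensor sizes are illustrative, not the paper's scheme.

```python
import numpy as np

def quantize(grad, bits=8):
    """Uniform symmetric quantization: map floats to low-bit integers
    plus one float scale, shrinking what each worker must transmit."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(grad).max() / qmax
    if scale == 0:
        return np.zeros(grad.shape, dtype=np.int8), 0.0
    q = np.clip(np.round(grad / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
g = rng.normal(size=1000).astype(np.float32)   # a mock gradient tensor
q, s = quantize(g)
g_hat = dequantize(q, s)
rel_err = np.linalg.norm(g - g_hat) / np.linalg.norm(g)
```

An int8 payload is a quarter the size of float32, and the small reconstruction error is what makes the accuracy/speed trade-off favorable in parallel training.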
Researchers conducted a noise audit on human-labeled benchmarks for machine commonsense reasoning (CSR), revealing significant levels of noise across different experimental conditions and datasets. The study emphasized the impact of noise on performance estimates of CSR systems, challenging the reliance on single ground truths in AI benchmarking practices and advocating for more nuanced evaluation methodologies.
This review explores the critical role of image-processing technologies in structural health monitoring (SHM) for civil infrastructures. It highlights the integration of artificial intelligence (AI) and machine learning (ML) to enhance SHM automation and accuracy. Various imaging modalities, including drones, thermography, LiDAR, and satellite imagery, are discussed for damage detection, crack identification, and deformation monitoring.
Researchers developed a hybrid classification model to autonomously extract crucial information from legal documents, achieving superior accuracy in comparison to baseline models. Leveraging GPT-3.5 and employing prompt strategies, the model demonstrated efficiency in extracting key information from murder verdicts, offering a promising tool to enhance investigative workflows and decision-making in criminal investigations.
The article explores electrode design for wearable skin devices, crucial for health monitoring and human-machine interfaces. It discusses properties like flexibility and conductivity and proposes methods like structure modification and hybrid materials. Applications range from health monitoring to therapy and human-machine interfaces, emphasizing the need for innovative electrode design to enhance device performance and integration with AI for smarter functionalities.
Researchers introduced OCTDL, an open-access dataset comprising over 2000 labeled OCT images of retinal diseases, including AMD, DME, and others. Utilizing high-resolution OCT scans obtained from an Optovue Avanti RTVue XR system, the dataset facilitated the development of deep learning models for disease classification. Validation with VGG16 and ResNet50 architectures demonstrated high performance, indicating OCTDL's potential for advancing automatic processing and early disease detection in ophthalmology.
Researchers developed a novel AI method, P-GAN, to improve the visualization of retinal pigment epithelial (RPE) cells using adaptive optics optical coherence tomography (AO-OCT). By transforming single noisy images into detailed representations of RPE cells, this approach enhances contrast and reduces imaging time, potentially revolutionizing ophthalmic diagnostics and personalized treatment strategies for retinal conditions.