Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
A new study shows that human hiring reviewers rapidly adopt racial biases displayed by AI recommendation systems, even when evaluating equally qualified candidates. The findings reveal that biased LLM outputs can directly shape human decision-making in high-stakes contexts such as hiring.
This perspective maps how AI can transform water treatment across technology, engineering, and industry by organising current advances into a structured tri-axis roadmap. It highlights how AI can drive smarter materials, microbial regulation, autonomous ecosystems, and next-generation industrial management.
Researchers at KIT have launched the WOW project to develop the first coupled AI-based world model capable of simulating interconnected Earth system processes. By linking specialized AI sub-models through latent spaces, the team aims to reveal hidden climate feedbacks and improve global-to-local risk assessment.
Researchers from Belgium and France tested Midjourney and DALL·E with highly specific visual prompts and found that, despite producing striking imagery, both systems routinely misinterpret basic linguistic instructions. Their study reveals that generative AI still struggles with negation, spatial relations, gaze direction, and temporal actions, highlighting significant limitations in how these models translate language into visual representations.
A new analysis from the University of Waterloo and Georgia Tech finds that AI’s contribution to national greenhouse gas emissions is negligible, despite rising electricity use. While local communities hosting data centers may feel significant strain, national-level climate impacts remain minimal and AI may ultimately accelerate clean-tech progress.
A Dartmouth study shows that medical students trust an AI teaching assistant far more when its answers are restricted to expert-curated course materials rather than the open internet. The research demonstrates how retrieval-anchored AI can safely deliver personalized, high-quality educational support at scale.
A WVU expert reports that ChatGPT has evolved from a demonstration tool to a mainstream workplace assistant, accelerating far faster than anticipated. While it excels at transforming information and streamlining workflows, its limitations in reliability, governance, and oversight mean it remains a tool for human augmentation rather than replacement.
Researchers from the University of Naples and the University of Wollongong have developed an AI-based method that boosts defect detection accuracy in wire arc additive manufacturing from 57% to 85.3%. The approach processes high-frequency welding data to identify anomalies in real time, reducing production costs and improving quality assurance.
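The study's actual model is not reproduced here; as an illustrative sketch of the general idea (flagging anomalies in a high-frequency welding signal in real time), the following is a minimal rolling-window detector. All names, thresholds, and the synthetic trace are hypothetical stand-ins, not the published method.

```python
import numpy as np

def detect_anomalies(signal, window=50, z_thresh=4.0):
    """Flag samples that deviate from the rolling mean by more than
    z_thresh rolling standard deviations (illustrative only)."""
    flags = np.zeros(len(signal), dtype=bool)
    for i in range(window, len(signal)):
        seg = signal[i - window:i]
        mu, sigma = seg.mean(), seg.std()
        if sigma > 0 and abs(signal[i] - mu) > z_thresh * sigma:
            flags[i] = True
    return flags

# Synthetic "welding current" trace: smooth baseline plus noise,
# with one injected spike standing in for a defect signature.
rng = np.random.default_rng(0)
trace = np.sin(np.linspace(0, 20, 2000)) + 0.05 * rng.standard_normal(2000)
trace[1200] += 2.0  # injected anomaly
flags = detect_anomalies(trace)
print(bool(flags[1200]))
```

A production system would learn the normal-signal statistics from labeled welds rather than use a fixed z-score threshold, but the structure — windowed features over a high-rate sensor stream, scored sample by sample — is the same.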
Researchers at Stevens Institute of Technology have uncovered how large language models (LLMs) emulate aspects of human social reasoning through selective parameter activation and positional encoding. Their study in npj Artificial Intelligence reveals that LLMs form rudimentary “beliefs” when reasoning about others’ perspectives, offering a pathway to more energy-efficient, human-like AI.
A new paper titled “A Survey on Omni-Modal Language Models” provides a comprehensive analysis of AI systems that integrate text, image, audio, and video understanding within a unified framework. Authored by researchers from Shandong Jianzhu University and Shandong University, the survey positions omni-modal language models (OMLMs) as key enablers on the path toward Artificial General Intelligence (AGI).
Researchers at KAIST have developed mathematically verified methods for converting C code into the memory-safe Rust language, addressing long-standing security vulnerabilities in critical software systems. Their world-first framework, featured as the Communications of the ACM November 2025 cover story, establishes a new standard for verified, secure software translation.
A study by researchers from the Universities of Reading, Greenwich, Leeds, and Lincoln found that just five minutes of targeted training can markedly improve people’s ability to spot AI-generated faces. After training, accuracy rose from below chance levels to 64% for super-recognisers and 51% for typical participants.
A Tufts University School of Medicine study led by Maria Blanco found that while two-thirds of faculty and students see potential in AI, fewer than 12% feel proficient. The findings are shaping Tufts’ roadmap for responsible AI integration into medical education through training, policy updates, and hands-on pilot projects.
A University of Washington study with 528 participants shows that people mirror biased large language model recommendations during hiring, even when applicants are equally qualified. Without source bias, selections were balanced, but with moderately to severely biased AI, human choices aligned closely with the AI, highlighting risks for fairness in recruitment.
The University of Pennsylvania’s Graduate School of Education (Penn GSE) has launched AI Auditing for High School, a first-of-its-kind curriculum that empowers students to identify and analyze algorithmic bias in AI systems through real-world audits, no coding required. Supported by national foundations and educators, the program aims to foster critical digital literacy and ethical AI awareness.
Researchers at Chung-Ang University in South Korea have developed DiffectNet, an AI-powered diffusion model that reconstructs hidden internal defects in structures with high precision, overcoming the physical limits of traditional non-destructive testing. The breakthrough enables real-time defect imaging for critical sectors such as energy, aerospace, and semiconductors.
Nanyang Technological University, Singapore (NTU) and Zero Gravity (0G) have launched a S$5 million joint research hub to advance blockchain-based AI technologies that promote transparency, accountability, and decentralisation. The four-year initiative will pioneer open AI systems, secure marketplaces, and proof-of-useful-work frameworks.
Researchers at Japan’s National Institute for Materials Science (NIMS) have used explainable AI (XAI) to uncover how chemical sensors distinguish between odorant molecules, revealing the molecular interactions that drive artificial smell recognition. The findings pave the way for smarter sensor design and deeper insights into human olfaction.
Researchers at Tokyo University of Science have developed an AI-based method to automatically analyze X-ray absorption spectroscopy (XAS) data, achieving precise and objective identification of material structures and defects. Using machine learning, particularly the UMAP algorithm, the approach enables rapid, scalable, and noise-resistant characterization of boron-based materials.
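The study's pipeline applied UMAP to measured XAS spectra; as a dependency-light sketch of the same idea — embedding spectra in a low-dimensional space so that structurally distinct materials separate — the following uses PCA via NumPy's SVD as a stand-in for UMAP, on synthetic spectra. The energy grid, peak positions, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
energy = np.linspace(185.0, 215.0, 200)  # hypothetical energy grid (eV)

def spectrum(peak_center):
    """Toy absorption spectrum: one Gaussian peak plus measurement noise."""
    return np.exp(-((energy - peak_center) ** 2) / 4.0) + 0.02 * rng.standard_normal(energy.size)

# Two synthetic "phases" with slightly shifted absorption edges
spectra = np.array([spectrum(195.0) for _ in range(30)] +
                   [spectrum(199.0) for _ in range(30)])
labels = np.array([0] * 30 + [1] * 30)

# PCA by SVD: project mean-centered spectra onto the top 2 components
centered = spectra - spectra.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedding = centered @ vt[:2].T  # shape (60, 2)

# The two phases should separate cleanly along the first component
gap = abs(embedding[labels == 0, 0].mean() - embedding[labels == 1, 0].mean())
print(embedding.shape, gap > 1.0)
```

UMAP (the study's choice) preserves nonlinear neighborhood structure where PCA only captures linear variance, which matters for noisy real spectra; the workflow — spectra in, low-dimensional embedding out, clusters mapped back to structures or defects — is otherwise the same.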
Cornell researchers have quantified AI’s rapidly growing environmental footprint, projecting that U.S. data centers could emit up to 44 million tons of CO₂ and consume over a billion cubic meters of water annually by 2030. Their Nature Sustainability study also outlines strategies that could cut these impacts by up to 86%.