Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
Researchers have developed a single-shot optical computing method that performs tensor operations at the speed of light, bypassing the bottlenecks of electronic hardware. By encoding data into light waves, the system executes complex AI operations in one passive optical pass, paving the way for ultra-efficient photonic AI chips.
Researchers have created the first Milky Way simulation that models more than 100 billion stars individually using a hybrid AI–HPC framework. This breakthrough achieves 100 times greater resolution and speed, enabling galaxy evolution studies that were previously impossible.
ETH Zurich researchers have built an extended-reality “co-pilot” and digital twin of the Lausanne Cathedral to support real-time conservation, integrating structural data, material science, and climate-driven deterioration models. The system enhances on-site decision-making and public engagement while improving long-term, climate-resilient restoration strategies.
Agentic AI systems can interpret intent and independently perform complex online tasks, offering major productivity benefits but also introducing new security, privacy, and compliance risks. Researchers at Saint Louis University are stress-testing these agents to identify vulnerabilities and strengthen safeguards before widespread deployment.
MIT researchers have developed SEAL, a self-adapting learning method that lets large language models permanently integrate new information by generating and evaluating their own “study sheets.” This approach boosts performance in question answering and skill learning, marking a step toward AI systems that learn continuously like humans.
This study shows that human hiring reviewers rapidly adopt racial biases displayed by AI recommendation systems, even when evaluating equally qualified candidates. The findings reveal that biased LLM outputs can directly shape human decision-making in high-stakes contexts such as hiring.
This perspective maps how AI can transform water treatment across technology, engineering, and industry by organising current advances into a structured tri-axis roadmap. It highlights how AI can drive smarter materials, microbial regulation, autonomous ecosystems, and next-generation industrial management.
Researchers at KIT have launched the WOW project to develop the first coupled AI-based world model capable of simulating interconnected Earth system processes. By linking specialized AI sub-models through latent spaces, the team aims to reveal hidden climate feedbacks and improve global-to-local risk assessment.
Researchers from Belgium and France tested Midjourney and DALL·E with highly specific visual prompts and found that, despite producing striking imagery, both systems routinely misinterpret basic linguistic instructions. Their study reveals that generative AI still struggles with negation, spatial relations, gaze direction, and temporal actions, highlighting significant limitations in how these models translate language into visual representations.
A new analysis from the University of Waterloo and Georgia Tech finds that AI’s contribution to national greenhouse gas emissions is negligible, despite rising electricity use. While local communities hosting data centers may feel significant strain, national-level climate impacts remain minimal and AI may ultimately accelerate clean-tech progress.
A Dartmouth study shows that medical students trust an AI teaching assistant far more when its answers are restricted to expert-curated course materials rather than the open internet. The research demonstrates how retrieval-anchored AI can safely deliver personalized, high-quality educational support at scale.
A WVU expert reports that ChatGPT has evolved from a demonstration tool to a mainstream workplace assistant, accelerating far faster than anticipated. While it excels at transforming information and streamlining workflows, its limitations in reliability, governance, and oversight mean it remains a tool for human augmentation rather than replacement.
Researchers from the University of Naples and the University of Wollongong have developed an AI-based method that boosts defect detection accuracy in wire arc additive manufacturing from 57% to 85.3%. The approach processes high-frequency welding data to identify anomalies in real time, reducing production costs and improving quality assurance.
Researchers at Stevens Institute of Technology have uncovered how large language models (LLMs) emulate aspects of human social reasoning through selective parameter activation and positional encoding. Their study in npj Artificial Intelligence (a Nature Partner Journal) reveals that LLMs form rudimentary “beliefs” when reasoning about others’ perspectives, offering a pathway to more energy-efficient, human-like AI.
A new paper titled “A Survey on Omni-Modal Language Models” provides a comprehensive analysis of AI systems that integrate text, image, audio, and video understanding within a unified framework. Authored by researchers from Shandong Jianzhu University and Shandong University, the survey positions omni-modal language models (OMLMs) as key enablers on the path toward Artificial General Intelligence (AGI).
Researchers at KAIST have developed mathematically verified methods for converting C code into the memory-safe Rust language, addressing long-standing security vulnerabilities in critical software systems. Their world-first framework, featured as the Communications of the ACM November 2025 cover story, establishes a new standard for verified, secure software translation.
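To see why such C-to-Rust translation matters, consider the class of bug it targets. The sketch below is purely illustrative and is not the KAIST framework: it shows how an out-of-bounds array read, which is undefined behavior in C, becomes an explicit, safely handled case in idiomatic Rust.

```rust
// Illustrative sketch (not the verified KAIST translation method):
// in C, reading `arr[i]` with i >= length is undefined behavior and a
// classic source of security vulnerabilities. Rust's slice API makes
// the out-of-bounds case explicit via the Option type.
fn get_reading(samples: &[i32], index: usize) -> Option<i32> {
    // `get` performs a bounds check and returns None rather than
    // reading past the end of the buffer.
    samples.get(index).copied()
}

fn main() {
    let samples = [10, 20, 30];
    // In-bounds access succeeds.
    assert_eq!(get_reading(&samples, 1), Some(20));
    // The equivalent C access would be undefined behavior;
    // here it is a well-defined None.
    assert_eq!(get_reading(&samples, 99), None);
    println!("ok");
}
```

A mechanical translation of C into Rust would simply preserve the unsafe access; the point of a verified translation framework is to prove that the resulting Rust code is both memory-safe and behaviorally equivalent to the original.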
A study by researchers from the Universities of Reading, Greenwich, Leeds, and Lincoln found that just five minutes of targeted training can markedly improve people’s ability to spot AI-generated faces. After training, accuracy rose from below chance levels to 64% for super-recognisers and 51% for typical participants.
A Tufts University School of Medicine study led by Maria Blanco found that while two-thirds of faculty and students see potential in AI, fewer than 12% feel proficient. The findings are shaping Tufts’ roadmap for responsible AI integration into medical education through training, policy updates, and hands-on pilot projects.
A University of Washington study with 528 participants shows that people mirror biased large language model recommendations during hiring, even when applicants are equally qualified. Without source bias, selections were balanced, but with moderately to severely biased AI, human choices aligned closely with the AI, highlighting risks for fairness in recruitment.
The University of Pennsylvania’s Graduate School of Education (Penn GSE) has launched AI Auditing for High School, a first-of-its-kind curriculum that empowers students to identify and analyze algorithmic bias in AI systems through real-world audits, no coding required. Supported by national foundations and educators, the program aims to foster critical digital literacy and ethical AI awareness.