Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
Cranfield University researchers have developed a structured AI decision-making framework that improves disaster-response accuracy by 39% over human decision-makers and delivers 60% more consistent outcomes. The approach supports faster, fairer, and more ethical deployment of AI in crises.
Scientists have unveiled DASFormer, a self-supervised AI model that transforms fiber-optic cables into powerful earthquake monitoring tools. By learning from unlabeled seismic data, it detects earthquakes as anomalies, outperforming 22 state-of-the-art models in real-world tests.
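A minimal sketch of the self-supervised anomaly-detection idea behind such models, not DASFormer's actual architecture: a PCA reconstruction stands in for the learned transformer, and the window size, component count, and threshold are illustrative assumptions. A model fitted only to event-free background data reconstructs ordinary noise well, so windows it reconstructs poorly are flagged as candidate events.

```python
import numpy as np

def reconstruction_scores(windows, train_windows, n_components=8):
    """Anomaly score = reconstruction error against background data.

    PCA is a stand-in for a learned self-supervised model: fitted
    only to event-free windows, it reconstructs ordinary noise well,
    so events show up as large reconstruction errors.
    """
    mu = train_windows.mean(axis=0)
    _, _, Vt = np.linalg.svd(train_windows - mu, full_matrices=False)
    P = Vt[:n_components]                  # top background components
    recon = (windows - mu) @ P.T @ P + mu  # project, then reconstruct
    return np.mean((windows - recon) ** 2, axis=1)

rng = np.random.default_rng(0)
background = rng.normal(size=(500, 128))        # event-free training windows
test = rng.normal(size=(10, 128))
test[3] += 5 * np.sin(np.linspace(0, 20, 128))  # injected synthetic "event"

scores = reconstruction_scores(test, background)
baseline = reconstruction_scores(background, background)
threshold = baseline.mean() + 5 * baseline.std()  # arbitrary cutoff
print("flagged windows:", np.where(scores > threshold)[0])  # expect [3]
```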
Social media platforms, especially TikTok, are fueling a new wave of tax scams in 2025, from fake credits to AI-powered voice cloning. The IRS has launched CASST, a special task force, to combat these sophisticated and dangerous fraud schemes.
Researchers at Stanford Medicine have developed CRISPR-GPT, an AI copilot that guides scientists through designing and troubleshooting CRISPR gene-editing experiments. Trained on 11 years of expert data, it accelerates experiment planning, predicts off-target effects, and lowers the barrier for newcomers to gene editing.
Researchers at the University of Surrey created an AI system that transcribes UK Supreme Court hearings with greater accuracy and links spoken arguments directly to written judgments. This custom courtroom AI cuts transcription errors, boosts efficiency, and makes legal proceedings more transparent and accessible.
An international team from the Max Planck Institute for Human Development, University of Duisburg-Essen, and Toulouse School of Economics found that people are far more likely to cheat when delegating tasks to AI, especially through vague goal-setting interfaces. Large language models like GPT-4, Claude 3.5, and Llama 3 also followed unethical prompts far more often than humans did, while existing safeguards largely failed to prevent this.
Researchers at Cornell University have created Double Duty, a new Field-Programmable Gate Array (FPGA) chip architecture that allows logic blocks to perform arithmetic and logical operations simultaneously. This innovation cuts energy use, shrinks chip space by over 20%, and boosts performance by nearly 10% for AI tasks.
Researchers led by César A. Uribe at Rice University are using AI to analyze complex ecological data, from African mammal food webs to Colombian tropical forest soundscapes. Their methods reveal structural similarities between ecosystems and provide low-cost tools for biodiversity monitoring and conservation.
In a study published in the International Journal of Mathematical Education in Science and Technology, researchers found that ChatGPT tackled Plato’s ancient “doubling the square” puzzle not by recalling the well-known solution but by improvising new approaches, sometimes making human-like errors. The findings suggest that ChatGPT’s problem-solving mimicked learner-like behavior, blending retrieval and on-the-fly reasoning.
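For context, the well-known solution (from Plato's Meno) that the model reportedly improvised around builds the doubled square on the diagonal of the original; the Pythagorean theorem supplies the key identity:

```latex
% Doubling the square: build the new square on the diagonal d
% of a square with side a. By the Pythagorean theorem,
\[
  d^{2} = a^{2} + a^{2} = 2a^{2},
\]
% so the square of side d = a\sqrt{2} has exactly twice the original
% area, with no need to compute the irrational length numerically.
```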
This study by Zhidong Cao, Xiangyu Zhang, and Daniel Dajun Zeng systematically examines the technical, cognitive, and reasoning attributes of large language models (LLMs). It positions LLMs as a novel form of general intelligence that learns empirically from data, contrasting with traditional logic-driven systems.
Researchers from Bielefeld University and global partners propose a unified framework to explain why humans adapt so well to unfamiliar situations while AI systems often fail. The study explores how differing definitions and mechanisms of “generalization” create a critical divide between human and machine intelligence.
Dr Jan Burzlaff of Cornell University argues that human historians remain essential in the AI age, especially in preserving the emotional weight of traumatic events. His study shows that ChatGPT failed to convey the ethical complexity and raw suffering in Holocaust survivor testimonies.
Interactive features in AI chatbots and mobile apps increase user engagement by fostering a sense of playfulness, but they also dampen users' privacy concerns. This can lead to inadvertent disclosure of personal data, underscoring the need for designs that prompt user awareness.
Researcher Angus Fletcher argues that artificial intelligence cannot replicate human adaptability, which relies on primal intelligence, intuition, imagination, emotion, and common sense. His book Primal Intelligence shows how “story thinking” empowers people to solve novel problems where data is scarce.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Mostafa Khedr, Mahmoud Metawie, and Mohamed Marzouk from Cairo University have developed a framework that integrates Ground Penetrating Radar (GPR) with deep learning for non-destructive classification of rebar diameters in concrete. Their YOLOv8-based model achieved 97.2% accuracy, offering a safer and more efficient alternative for structural integrity assessments.
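As a hedged sketch of how a fine-tuned YOLOv8 detector of this kind is typically queried with the ultralytics Python package (the weights file, image path, and class names below are hypothetical placeholders, not the study's published model):

```python
# Sketch only: weights and paths are assumed, not the authors' artifacts.
from ultralytics import YOLO

model = YOLO("rebar_gpr_yolov8.pt")        # hypothetical fine-tuned weights
results = model("gpr_scan.png", conf=0.5)  # detect on a GPR B-scan image

for box in results[0].boxes:
    label = model.names[int(box.cls)]      # e.g. a rebar-diameter class
    print(f"{label}: confidence {float(box.conf):.2f}")
```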
A new study by Huamin Wu, Guo Li, and Dmitry Ivanov establishes the first comprehensive theoretical framework for applying Generative Artificial Intelligence (GAI) in Supply Chain Management (SCM). It outlines GAI’s core capabilities, empowerment mechanisms, and challenges, while proposing a future research agenda to guide the creation of flexible, resilient, and sustainable supply chains.
This paper by Yphtach Lelkes and Neil Fasching presents the first large-scale comparative analysis of seven major AI content moderation systems, evaluating their consistency in detecting hate speech across 1.3 million synthetic sentences. The findings reveal substantial inconsistencies, with some systems flagging identical content as harmful while others deem it acceptable, especially for certain demographic groups.
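One simple way to quantify the kind of inconsistency the paper reports is pairwise agreement between systems on identical sentences. This generic sketch is not the authors' code; the three systems and their verdicts are stubbed assumptions:

```python
import itertools
import numpy as np

# Stubbed binary verdicts (1 = flagged as hate speech) from three
# hypothetical moderation systems over the same five sentences.
verdicts = {
    "system_a": np.array([1, 0, 1, 1, 0]),
    "system_b": np.array([1, 0, 0, 1, 0]),
    "system_c": np.array([0, 1, 1, 1, 0]),
}

# Pairwise agreement: the share of sentences on which two systems
# return the same verdict. Low values mean identical content is
# treated differently depending on which system screens it.
for (name_a, va), (name_b, vb) in itertools.combinations(verdicts.items(), 2):
    print(f"{name_a} vs {name_b}: {np.mean(va == vb):.0%} agreement")
```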
The paper argues that traditional educational research is failing to impact practice and policy due to entrenched methodological flaws, and that the rise of artificial intelligence demands a complete epistemological overhaul. It calls for ethical vigilance, methodological pluralism, and innovative, AI-informed approaches to studying education.
Researchers from Politecnico di Milano and international partners developed an in-situ training method for physical neural networks using light-based photonic chips, eliminating the need for digital models. This breakthrough enables faster, more energy-efficient AI computation directly on miniature silicon chips.