Artificial Intelligence (AI) refers to the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (the acquisition of information and rules for using the information), reasoning (using rules to reach approximate or definite conclusions), and self-correction.
An international team from the Max Planck Institute for Human Development, the University of Duisburg-Essen, and the Toulouse School of Economics found that people are far more likely to cheat when delegating tasks to artificial intelligence, especially through vague goal-setting interfaces. Large language models like GPT-4, Claude 3.5, and Llama 3 also followed unethical prompts far more often than humans did, while existing safeguards largely failed to prevent this.
Researchers at Cornell University have created Double Duty, a new Field-Programmable Gate Array (FPGA) chip architecture that allows logic blocks to perform arithmetic and logical operations simultaneously. This innovation cuts energy use, shrinks chip space by over 20%, and boosts performance by nearly 10% for AI tasks.
Researchers led by César A. Uribe at Rice University are using artificial intelligence to analyze complex ecological data, from African mammal food webs to Colombian tropical forest soundscapes. Their methods reveal structural similarities between ecosystems and provide low-cost tools for biodiversity monitoring and conservation.
In a study published in the International Journal of Mathematical Education in Science and Technology, researchers found that ChatGPT tackled Plato’s ancient “doubling the square” puzzle not by recalling the well-known solution but by improvising new approaches, sometimes making human-like errors. The findings suggest that ChatGPT’s problem-solving mimicked learner-like behavior, blending retrieval and on-the-fly reasoning.
This study by Zhidong Cao, Xiangyu Zhang, and Daniel Dajun Zeng systematically examines the technical, cognitive, and reasoning attributes of large language models (LLMs). It positions LLMs as a novel form of general intelligence that learns empirically from data, contrasting with traditional logic-driven systems.
Researchers from Bielefeld University and global partners propose a unified framework to explain why humans adapt so well to unfamiliar situations while AI systems often fail. The study explores how differing definitions and mechanisms of “generalization” create a critical divide between human and machine intelligence.
Dr Jan Burzlaff of Cornell University argues that human historians remain essential in the AI age, especially in preserving the emotional weight of traumatic events. His study shows that ChatGPT failed to convey the ethical complexity and raw suffering in Holocaust survivor testimonies.
Interactive features in AI chatbots and mobile apps increase user engagement by fostering a sense of playfulness, but they also lower users' privacy concerns. This can lead to inadvertent disclosure of personal data, highlighting the need for designs that prompt user awareness.
Researcher Angus Fletcher argues that artificial intelligence cannot replicate human adaptability, which relies on primal intelligence, intuition, imagination, emotion, and common sense. His book Primal Intelligence shows how “story thinking” empowers people to solve novel problems where data is scarce.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Mostafa Khedr, Mahmoud Metawie, and Mohamed Marzouk from Cairo University have developed a framework that integrates Ground Penetrating Radar (GPR) with deep learning for non-destructive classification of rebar diameters in concrete. Their model using YOLO v8 achieved 97.2% accuracy, offering a safer and more efficient alternative for structural integrity assessments.
A new study by Huamin Wu, Guo Li, and Dmitry Ivanov establishes the first comprehensive theoretical framework for applying Generative Artificial Intelligence (GAI) in Supply Chain Management (SCM). It outlines GAI’s core capabilities, empowerment mechanisms, and challenges, while proposing a future research agenda to guide the creation of flexible, resilient, and sustainable supply chains.
This paper by Yphtach Lelkes and Neil Fasching is the first large-scale comparative analysis of seven major AI content moderation systems, evaluating their consistency in detecting hate speech across 1.3 million synthetic sentences. The findings reveal substantial inconsistencies, with some systems flagging identical content as harmful while others deem it acceptable, especially for certain demographic groups.
The paper argues that traditional educational research is failing to impact practice and policy due to entrenched methodological flaws, and that the rise of Artificial Intelligence demands a complete epistemological overhaul. It calls for ethical vigilance, methodological pluralism, and innovative, AI-informed approaches to studying education.
Researchers from Politecnico di Milano and international partners developed an in-situ training method for physical neural networks using light-based photonic chips, eliminating the need for digital models. This breakthrough enables faster, more energy-efficient AI computation directly on miniature silicon chips.
Researchers at the University of Florida have created a silicon chip that uses laser light and microscopic Fresnel lenses to perform convolution operations, a power-intensive step in AI. This optical approach greatly reduces energy use while maintaining nearly 98% accuracy in machine learning tasks.
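Convolution itself is a standard operation; the novelty is executing it with light rather than arithmetic. For readers unfamiliar with the step the chip offloads, here is a minimal NumPy sketch of a valid-mode 2D convolution (the function name and toy data are illustrative, not the Florida team's code):

```python
import numpy as np

def conv2d(image, kernel):
    """Direct (valid-mode) 2D convolution -- the operation the
    optical chip performs with light instead of multiply-adds."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    # Flip the kernel for true convolution (vs. cross-correlation).
    k = kernel[::-1, ::-1]
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * k)
    return out

# Toy example: a 3x3 Laplacian kernel on a 5x5 linear ramp.
# The Laplacian of a linear ramp is zero everywhere.
image = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[0, -1, 0],
                   [-1, 4, -1],
                   [0, -1, 0]], dtype=float)
print(conv2d(image, kernel))  # prints a 3x3 array of zeros
```

Each output pixel is a sum of many multiplications, which is why convolution dominates the energy budget of convolutional networks and is an attractive target for optical acceleration.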
Researchers at Ruhr University Bochum developed a quartet-based deep learning method that embeds phylogenetic tree structures into neural networks. Tested on bacterial 16S rRNA data, it enables AI to extract features aligned with evolutionary relationships rather than random patterns.
A new AI-powered method combines atomic force microscopy with deep learning to rapidly and accurately identify macrophage polarization states without labels or dyes. This approach offers real-time, high-throughput insights into immune function with diagnostic potential across diseases.
Kennesaw State University researcher Bobin Deng is developing an AI method that enables powerful models to run directly on personal devices without internet access. By using activation sparsity to predict which data will be needed, his approach reduces energy, memory use, and reliance on cloud computing.
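Activation sparsity exploits the fact that ReLU-style layers leave many neurons at zero, so their dot products need never be computed. The following toy NumPy sketch shows the general idea only; the cheap predictor here (a subsampled proxy of the weight matrix) is a hypothetical stand-in, not Deng's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward layer: y = relu(W @ x). With ReLU, many outputs
# are zero; if we can predict which, we can skip their dot products.
W = rng.standard_normal((512, 256))
x = rng.standard_normal(256)

dense = np.maximum(W @ x, 0.0)  # full (reference) computation

# Hypothetical cheap predictor: subsample every 8th input dimension
# and use the sign of the approximate pre-activation as the guess.
U = W[:, ::8]
predicted_active = (U @ x[::8]) > 0

# Compute only the predicted-active rows; all others stay zero.
sparse = np.zeros_like(dense)
sparse[predicted_active] = np.maximum(W[predicted_active] @ x, 0.0)

print(f"computed {predicted_active.mean():.0%} of rows instead of 100%")
```

On a phone-scale model, skipping the inactive rows cuts both the arithmetic and, more importantly, the weight memory traffic, which is what makes on-device inference without cloud offload plausible.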
AI-powered digital streamers are gaining traction in e-commerce livestreams, but they currently underperform compared to human hosts. Real-time interaction and human-like behavior significantly enhance their effectiveness, pointing to hybrid human-AI solutions as the future.