AI is used in education to personalize learning, provide adaptive feedback, and automate administrative tasks. Drawing on machine learning, natural language processing, and data analytics, it can boost student engagement, sharpen teaching methods, and streamline educational processes.
A $799,000 NSF-funded project led by FAMU-FSU researchers is developing robotic coaching systems that adapt to human learning, using unicycle training as a model to accelerate motor skill acquisition and transform physical therapy for patients with mobility impairments.
Korean researchers at ETRI have unveiled next-generation remote collaboration technology that enables lifelike face-to-face interaction in XR, including real-time eye contact and realistic handshakes through exoskeleton haptic gloves and AI-powered digital humans.
Chulalongkorn University’s Social Research Institute hosted an international lecture on September 5, 2025, where Dr. Muthu Kumar Chandrasekaran highlighted how AI can transform higher education through personalized learning, research innovation, and global collaboration.
Join MIT's eight-week program on Applied Agentic AI to learn how autonomous agents can redefine business operations and drive organizational transformation.
Researchers in Korea have developed MoBluRF, a two-stage deblurring framework that enables sharp 3D scene reconstruction from blurry monocular videos. This innovation advances NeRF technology by separating camera and object motion, achieving crisp novel view synthesis even from casual handheld footage.
In a study published in the International Journal of Mathematical Education in Science and Technology, researchers found that ChatGPT tackled Plato’s ancient “doubling the square” puzzle not by recalling the well-known solution but by improvising new approaches, sometimes making human-like errors. The findings suggest that ChatGPT’s problem-solving mimicked learner-like behavior, blending retrieval and on-the-fly reasoning.
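The puzzle itself has a one-line mathematical core: the square built on the diagonal of a given square has exactly twice its area. A minimal numeric check of that classical fact (not the study's methodology):

```python
import math

# Plato's "doubling the square": the square erected on the diagonal of a
# square has twice the area of the original.
side = 1.0
diagonal = side * math.sqrt(2)   # diagonal of the original square
doubled_area = diagonal ** 2     # area of the square built on the diagonal

# doubled_area equals 2 * side**2, up to floating-point rounding
print(doubled_area)
```

This is the geometric identity the chatbot was asked to rediscover; the study's interest was in *how* it reasoned toward it, not the result.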
Dr Jan Burzlaff of Cornell University argues that human historians remain essential in the AI age, especially in preserving the emotional weight of traumatic events. His study shows that ChatGPT failed to convey the ethical complexity and raw suffering in Holocaust survivor testimonies.
A Wake Forest University researcher is developing robust multi-agent reinforcement learning (MARL) algorithms to ensure AI systems can function safely even when individual agents fail. Backed by an NSF CAREER award, her work will enhance AI reliability in high-stakes settings like healthcare, disaster response, and environmental protection.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Shiyan Jiang, a new faculty member at Penn GSE, is pioneering inclusive approaches to AI education by embedding AI across diverse subjects and empowering all teachers, not just computer science specialists, to be AI educators. Her work focuses on building both teacher and student AI identities to make AI education accessible, meaningful, and identity-driven.
This paper by Yphtach Lelkes and Neil Fasching is the first large-scale comparative analysis of seven major AI content moderation systems, evaluating their consistency in detecting hate speech across 1.3 million synthetic sentences. The findings reveal substantial inconsistencies, with some systems flagging identical content as harmful while others deem it acceptable, especially for certain demographic groups.
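One simple way to quantify the kind of inconsistency the paper reports is pairwise agreement: the fraction of sentences on which two systems return the same verdict. The sketch below is illustrative only; the system names and verdicts are invented, and the paper's own metrics may differ.

```python
from itertools import combinations

# Hypothetical flag decisions (True = flagged as hate speech) from three
# moderation systems over the same four synthetic sentences.
verdicts = {
    "system_a": [True, True, False, True],
    "system_b": [True, False, False, True],
    "system_c": [False, False, False, True],
}

def pairwise_agreement(a, b):
    """Fraction of sentences on which two systems give the same verdict."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

for (name1, v1), (name2, v2) in combinations(verdicts.items(), 2):
    print(f"{name1} vs {name2}: {pairwise_agreement(v1, v2):.2f}")
```

Low agreement on identical inputs is exactly the failure mode the study documents: the same sentence flagged by one system and passed by another.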
The paper argues that traditional educational research is failing to impact practice and policy due to entrenched methodological flaws, and that the rise of Artificial Intelligence demands a complete epistemological overhaul. It calls for ethical vigilance, methodological pluralism, and innovative, AI-informed approaches to studying education.
For the first time, scientists have used AI and satellite imagery to count the Great Wildebeest Migration, finding fewer than 600,000 animals, less than half the long-standing estimate of 1.3 million. The results highlight new conservation challenges and a breakthrough in wildlife monitoring.
A new AI-powered Virtual Research Assistant has cut astronomers’ daily workload by 85% by automatically filtering millions of sky alerts and identifying genuine supernova signals. In its first year, it reduced human checks while retaining over 99.9% of real events, paving the way for the data avalanche from the Vera Rubin Observatory.
UC Riverside researchers have created a certified unlearning method that removes sensitive or copyrighted data from AI models without needing access to the original training data. Their framework uses surrogate datasets and calibrated noise to provide provable guarantees while maintaining model performance.
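A common ingredient in certified unlearning is releasing the model with calibrated Gaussian noise added to its parameters, so the released weights are provably close to those of a model never trained on the deleted data. The sketch below shows only that generic noise-calibration idea (the sigma formula is the standard Gaussian-mechanism calibration, not the UCR authors' recipe; all names and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_release(params, sensitivity, epsilon, delta):
    """Add Gaussian noise scaled to the removal's parameter sensitivity.

    sensitivity: bound on how much deleting the data can move the weights.
    epsilon, delta: privacy-style parameters controlling the guarantee.
    """
    sigma = sensitivity * np.sqrt(2 * np.log(1.25 / delta)) / epsilon
    return params + rng.normal(0.0, sigma, size=params.shape)

weights = np.array([0.5, -1.2, 3.3])        # toy "trained" parameters
released = noisy_release(weights, sensitivity=0.01, epsilon=1.0, delta=1e-5)
```

Tighter sensitivity bounds mean less noise, which is why surrogate data that mimics the original training distribution helps preserve model performance.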
Students in a university programming course reported using AI tools like ChatGPT mainly for debugging and understanding code. However, frequent AI use was negatively correlated with academic performance, raising concerns about over-reliance and unguided integration into learning.
Researchers developed EnvGPT, a fine-tuned language model trained on environmental science data, to address gaps left by general-purpose LLMs. It outperformed similar-sized models on benchmarks and neared the performance of much larger AI systems.
Researchers at Tohoku University have developed an AI-driven “materials map” that integrates experimental and computational data, offering a powerful tool to identify high-performing thermoelectric materials. The approach shortens development timelines by guiding researchers to promising candidates at a glance.
A Florida State University study finds that ChatGPT and other large language models are influencing not just writing but also unscripted spoken English. Words like “delve,” “intricate,” and “garner” are showing marked increases in everyday conversations since the chatbot’s release in 2022.
The rise of pseudolaw in Australia is straining courts, police, and councils by spreading false legal theories that mimic the law’s language but lack substance. Now intersecting with generative AI, this movement poses a growing threat to democracy and judicial efficiency.