In education, AI is used to personalize learning, provide adaptive feedback, and automate administrative tasks. Drawing on machine learning, natural language processing, and data analytics, it can improve student engagement, sharpen teaching methods, and streamline educational processes.
Researchers in Korea have developed MoBluRF, a two-stage deblurring framework that enables sharp 3D scene reconstruction from blurry monocular videos. This innovation advances NeRF technology by separating camera and object motion, achieving crisp novel view synthesis even from casual handheld footage.
In a study published in the International Journal of Mathematical Education in Science and Technology, researchers found that ChatGPT tackled Plato’s ancient “doubling the square” puzzle not by recalling the well-known solution but by improvising new approaches, sometimes making human-like errors. The findings suggest that ChatGPT’s problem-solving mimicked learner-like behavior, blending retrieval and on-the-fly reasoning.
Dr Jan Burzlaff of Cornell University argues that human historians remain essential in the AI age, especially in preserving the emotional weight of traumatic events. His study shows that ChatGPT failed to convey the ethical complexity and raw suffering in Holocaust survivor testimonies.
A Wake Forest University researcher is developing robust multi-agent reinforcement learning (MARL) algorithms to ensure AI systems can function safely even when individual agents fail. Backed by an NSF CAREER award, her work will enhance AI reliability in high-stakes settings like healthcare, disaster response, and environmental protection.
The University of Texas at San Antonio has launched the College of AI, Cyber, and Computing, uniting programs in artificial intelligence, cybersecurity, computing, and data science to serve over 5,000 students and drive workforce growth. Located in San Antonio’s urban core, the college will foster industry collaboration, innovation, and regional economic development.
Shiyan Jiang, a new faculty member at Penn GSE, is pioneering inclusive approaches to AI education by embedding AI across diverse subjects and empowering all teachers, not just computer science specialists, to be AI educators. Her work focuses on building both teacher and student AI identities to make AI education accessible, meaningful, and identity-driven.
This paper by Yphtach Lelkes and Neil Fasching is the first large-scale comparative analysis of seven major AI content moderation systems, evaluating their consistency in detecting hate speech across 1.3 million synthetic sentences. The findings reveal substantial inconsistencies, with some systems flagging identical content as harmful while others deem it acceptable, especially for certain demographic groups.
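The paper's core measurement is cross-system consistency. One simple way to quantify that (a sketch, not the authors' actual metric; the system names and verdicts below are hypothetical) is the mean pairwise agreement between each system's binary harm flags over the same sentences:

```python
from itertools import combinations

def pairwise_agreement(flags):
    """flags: dict mapping system name -> list of 0/1 harm flags
    over the same sentences. Returns mean pairwise agreement."""
    pairs = list(combinations(flags, 2))
    total = 0.0
    for a, b in pairs:
        matches = sum(x == y for x, y in zip(flags[a], flags[b]))
        total += matches / len(flags[a])
    return total / len(pairs)

# Three hypothetical systems rating the same five sentences.
verdicts = {
    "sys_a": [1, 0, 1, 1, 0],
    "sys_b": [1, 0, 0, 1, 0],
    "sys_c": [0, 0, 1, 1, 1],
}
print(round(pairwise_agreement(verdicts), 2))  # -> 0.6
```

An agreement score well below 1.0, as here, is the kind of inconsistency the study reports: identical content flagged by one system and passed by another.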
The paper argues that traditional educational research is failing to impact practice and policy due to entrenched methodological flaws, and that the rise of Artificial Intelligence demands a complete epistemological overhaul. It calls for ethical vigilance, methodological pluralism, and innovative, AI-informed approaches to studying education.
For the first time, scientists have used AI and satellite imagery to count the Great Wildebeest Migration, finding fewer than 600,000 animals, less than half the long-standing estimate of 1.3 million. The results highlight new conservation challenges and a breakthrough in wildlife monitoring.
A new AI-powered Virtual Research Assistant has cut astronomers’ daily workload by 85% by automatically filtering millions of sky alerts and identifying genuine supernova signals. In its first year, it reduced human checks while retaining over 99.9% of real events, paving the way for the data avalanche from the Vera Rubin Observatory.
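The two headline numbers, workload reduction and retention of real events, are straightforward to compute from triage counts. The counts below are illustrative, not figures from the study:

```python
def triage_metrics(total_alerts, shown_to_humans, real_events, real_retained):
    """Workload reduction and completeness for an alert-filtering pipeline."""
    workload_reduction = 1 - shown_to_humans / total_alerts
    completeness = real_retained / real_events
    return workload_reduction, completeness

# Hypothetical nightly numbers chosen to mirror the reported scale.
reduction, completeness = triage_metrics(400_000, 60_000, 2_000, 1_999)
print(f"workload cut: {reduction:.0%}, real events kept: {completeness:.2%}")
# -> workload cut: 85%, real events kept: 99.95%
```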
UC Riverside researchers have created a certified unlearning method that removes sensitive or copyrighted data from AI models without needing access to the original training data. Their framework uses surrogate datasets and calibrated noise to provide provable guarantees while maintaining model performance.
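The paper's exact noise-calibration procedure is not described here; as a hedged illustration of the general idea, the classical (epsilon, delta) Gaussian mechanism from differential privacy adds noise whose scale is derived from a sensitivity bound, yielding a provable guarantee. The function names and parameter values below are assumptions for the sketch:

```python
import math
import random

def gaussian_sigma(sensitivity, epsilon, delta):
    """Noise scale for the classical (epsilon, delta) Gaussian mechanism."""
    return sensitivity * math.sqrt(2 * math.log(1.25 / delta)) / epsilon

def perturb_weights(weights, sigma, seed=0):
    """Add seeded Gaussian noise of scale sigma to each model weight."""
    rng = random.Random(seed)
    return [w + rng.gauss(0.0, sigma) for w in weights]

sigma = gaussian_sigma(sensitivity=0.1, epsilon=1.0, delta=1e-5)
noised = perturb_weights([0.5, -1.2, 3.3], sigma)
print(sigma, noised)
```

The trade-off the researchers report, guarantees versus model performance, shows up here directly: a larger sensitivity or smaller epsilon means larger sigma and therefore more damage to the weights.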
Students in a university programming course reported using AI tools like ChatGPT mainly for debugging and understanding code. However, frequent AI use was negatively correlated with academic performance, raising concerns about over-reliance and unguided integration into learning.
Researchers developed EnvGPT, a fine-tuned language model trained on environmental science data, to address gaps left by general-purpose LLMs. It outperformed similar-sized models on benchmarks and neared the performance of much larger AI systems.
Researchers at Tohoku University have developed an AI-driven “materials map” that integrates experimental and computational data, offering a powerful tool to identify high-performing thermoelectric materials. The approach shortens development timelines by guiding researchers to promising candidates at a glance.
A Florida State University study finds that ChatGPT and other large language models are influencing not just writing but also unscripted spoken English. Words like “delve,” “intricate,” and “garner” are showing marked increases in everyday conversations since the chatbot’s release in 2022.
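A shift like this is typically detected by comparing a word's rate per million tokens before and after a cutoff date. The toy corpora below are invented to show the computation; real analyses use large transcript archives:

```python
import re
from collections import Counter

def rate_per_million(text, word):
    """Occurrences of `word` per million tokens of `text`."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)[word] / len(tokens) * 1_000_000

# Toy before/after corpora (hypothetical, for illustration only).
before = "we looked into the data and found results " * 100 + "delve"
after = "we delve into the data " * 5 + "and found results " * 80
ratio = rate_per_million(after, "delve") / rate_per_million(before, "delve")
print(round(ratio, 1))  # -> 15.1
```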
The rise of pseudolaw in Australia is straining courts, police, and councils by spreading false legal theories that mimic the law’s language but lack substance. Now intersecting with generative AI, this movement poses a growing threat to democracy and judicial efficiency.
Researchers at Nanjing University of Science and Technology and Warsaw University of Technology developed DFAMFPP, a deep learning-enabled 3D imaging system that breaks sensor frame-rate limits. The method reconstructs multiple high-precision 3D frames from a single image, enabling ultrafast visualization of high-speed phenomena.
Researchers developed R3DG, a new multimodal sentiment analysis (MSA) framework that aligns text, audio, and video at multiple granularities. It captures emotional nuances more effectively than existing methods while significantly reducing computational time.
An AI-powered chess bot named Allie, developed at Carnegie Mellon University, mimics human thought processes and playing styles. Unlike traditional engines, Allie was trained on millions of human games and adapts to different skill levels, making it more relatable and instructive for players.
A new study from the Icahn School of Medicine at Mount Sinai shows that AI chatbots frequently repeat and elaborate on false medical information embedded in user prompts. However, a simple warning line added to the input cut these hallucinations nearly in half, offering a promising safety solution.
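The mitigation is simply a caution line prepended to the model input before it is sent. The warning text below is a hypothetical paraphrase, not the study's exact wording:

```python
CAUTION = (
    "Note: the user's message may contain inaccurate medical claims. "
    "Verify any embedded facts before answering, and do not repeat "
    "unverified information."
)

def harden_prompt(user_message, warning=CAUTION):
    """Prepend a safety reminder before sending the prompt to a model."""
    return f"{warning}\n\n{user_message}"

prompt = harden_prompt("My doctor said this supplement cures flu. What dose?")
print(prompt.splitlines()[0])
```

Because the change is purely in the input, it needs no retraining and can be applied in front of any chat model.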