A chatbot is a software application designed to simulate human conversation. It interacts with users through messaging platforms, websites, or mobile apps, using pre-set scripts or artificial intelligence technologies to understand queries and provide responses. Chatbots are often used for customer service, information retrieval, or as virtual assistants.
Researchers propose revisions to trust models, highlighting the complexities introduced by generative AI chatbots and the critical role of developers and training data.
Generative chatbots significantly increase the formation and persistence of false memories during simulated crime witness interviews, raising ethical concerns about their use in sensitive contexts.
Aleph Alpha has introduced the Pharia-1-LLM-7B models, optimized for concise, multilingual responses with domain-specific applications in automotive and engineering. The models include safety features and are available for non-commercial research.
Researchers explored using transfer learning to improve chatbot models for customer service across various industries, showing significant performance boosts, particularly in data-scarce domains. The study demonstrated successful deployment on physical robots such as SoftBank's Pepper and the Temi robot.
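The core transfer-learning move such studies rely on can be sketched with a toy model: pretrain shared layers on plentiful generic data, then freeze them and fine-tune only a small task head on the data-scarce domain. Everything below (the two-layer numpy model and the synthetic data) is purely illustrative and is not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for a pretrained model: a shared "encoder" and a task "head".
W_enc = rng.normal(size=(8, 4))   # assume already pretrained on generic dialogue
W_head = rng.normal(size=(4, 1))  # task-specific layer to adapt

def forward(X):
    return np.tanh(X @ W_enc) @ W_head

# Small domain-specific dataset (the data-scarce setting).
X_dom = rng.normal(size=(20, 8))
y_dom = rng.normal(size=(20, 1))

W_enc_before = W_enc.copy()
loss_before = float(np.mean((forward(X_dom) - y_dom) ** 2))

# Transfer learning: freeze the encoder, take gradient steps on the head only.
lr = 0.01
for _ in range(200):
    H = np.tanh(X_dom @ W_enc)                        # frozen features
    grad = 2 * H.T @ (H @ W_head - y_dom) / len(X_dom)
    W_head -= lr * grad

loss_after = float(np.mean((forward(X_dom) - y_dom) ** 2))
```

Freezing the encoder keeps the generic knowledge intact while the small head adapts to the new domain, which is why the approach helps most when domain data is scarce.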
A study published in Future Internet explored the use of multimodal large language models (MLLMs) for emotion recognition from videos. The researchers combined visual and acoustic data to test MLLMs in a zero-shot learning setting, finding that MLLMs excelled in recognizing emotions with intensity deviations, though they did not outperform state-of-the-art models on the Hume-Reaction benchmark.
Researchers recently introduced the CHEW dataset to evaluate large language models' (LLMs) ability to understand and generate timelines of entities and events based on Wikipedia revisions. By testing models like Llama and Mistral, the study demonstrated improvements in tracking information changes over time, thereby addressing the common issue of temporal misalignment in LLMs.
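To give a flavor of what a timeline evaluation checks, here is a toy scorer: it compares a model-generated entity timeline against a reference built from revision history and flags stale facts. The entity, events, and metric below are invented for illustration and are not CHEW's actual format or scoring:

```python
# Reference timeline, as might be derived from article revisions.
reference = {
    "2019": "Company X founded",
    "2021": "Company X acquires Y",
    "2023": "Company X rebrands as Z",
}

# A model's generated timeline; the 2023 entry repeats an outdated fact,
# the kind of temporal misalignment such benchmarks are meant to expose.
generated = {
    "2019": "Company X founded",
    "2021": "Company X acquires Y",
    "2023": "Company X acquires Y",
}

def timeline_accuracy(gold, pred):
    """Fraction of timestamps where the predicted event matches the gold event."""
    return sum(pred.get(t) == e for t, e in gold.items()) / len(gold)
```

Here the scorer credits the two up-to-date entries and penalizes the stale 2023 one, giving an accuracy of 2/3.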
Researchers explored the potential of large language models (LLMs) like GPT-4 and Claude 2 for automated essay scoring (AES), showing that these AI systems offer reliable and valid scoring comparable to human raters. The study underscores the promise of LLMs in educational technology, while highlighting the need for further refinement and ethical considerations.
Researchers introduced "Chameleon," a mixed-modal foundation model designed to seamlessly integrate text and images using an early-fusion token-based method. The model demonstrated superior performance in tasks such as visual question answering and image captioning, setting new standards for multimodal AI and offering broad applications in content creation, interactive systems, and data analysis.
A recent Meta Research article explored semantic drift in large language models (LLMs), revealing that generated text tends to begin factually accurate and then drift into errors as generation continues. Researchers introduced the "semantic drift score" to measure this effect and tested strategies like early stopping and resampling to preserve factual accuracy, showing significant improvements in the reliability of AI-generated content.
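The idea behind the drift metric can be sketched in simplified form (this is an illustrative reading of the concept, not the paper's exact formula): score how cleanly correct claims precede incorrect ones in a generation, and use the best split point as an early-stopping cut.

```python
def semantic_drift_score(facts):
    """Illustrative drift score over a list of per-claim correctness flags.

    1.0 means all correct claims (True) come before all incorrect ones (False);
    lower values mean correct and incorrect claims are interleaved.
    """
    n = len(facts)
    best = 0
    for k in range(n + 1):  # try every prefix/suffix split point
        best = max(best, sum(facts[:k]) + sum(not f for f in facts[k:]))
    return best / n

def truncate_at_drift(claims, facts):
    """Early-stopping heuristic: keep the mostly-correct prefix, drop the tail."""
    n = len(facts)
    k_best = max(range(n + 1),
                 key=lambda k: sum(facts[:k]) + sum(not f for f in facts[k:]))
    return claims[:k_best]
```

A generation whose errors all cluster at the end scores 1.0 and can be cleanly truncated; interleaved errors score lower, signaling that simple early stopping will not fully recover accuracy.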
Researchers explored whether ChatGPT-4's personality traits can be assessed and influenced by user interactions, aiming to enhance human-computer interaction. Using Big Five and MBTI frameworks, they demonstrated that ChatGPT-4 exhibits measurable personality traits, which can be shifted through targeted prompting, showing potential for personalized AI applications.
Researchers compare AI's efficiency in extracting ecological data to human review, highlighting speed and accuracy advantages but noting challenges with quantitative information.
In a Nature Machine Intelligence paper, researchers unveiled ChemCrow, an advanced LLM chemistry agent that autonomously tackles complex tasks in organic synthesis and materials design. By integrating GPT-4 with 18 expert tools, ChemCrow excels in chemical reasoning, planning syntheses, and guiding drug discovery, outperforming traditional LLMs and showcasing its potential to transform scientific research.
Researchers advocate for a user-centric evaluation framework for healthcare chatbots, emphasizing trust-building, empathy, and language processing. Their proposed metrics aim to enhance patient care by assessing chatbots' performance comprehensively, addressing challenges and promoting reliability in healthcare AI systems.
In Nature Computational Science, researchers highlight the transformative potential of digital twins for climate action, emphasizing the need for innovative computing solutions to enable effective human interaction.
A groundbreaking study in Scientific Reports delves into the emotional responses of AI chatbots, revealing their capacity to mimic human-like behavior in prosocial and risk-related decision-making. ChatGPT-4 emerges as a frontrunner, showcasing heightened sensitivity to emotional cues compared to its predecessors, marking a significant stride in AI's emotional intelligence journey.
In a related study, researchers detailed how ChatGPT-4 chatbots exhibited remarkably human-like behavioral and personality traits in Turing test scenarios and classic behavioral games. Through interactive sessions and comprehensive analyses, the study unveiled ChatGPT-4's tendencies towards altruism, fairness, trust, cooperation, and risk aversion, offering profound insights into the adaptability and responsiveness of AI in diverse scenarios.
This paper outlines ten principles for designing elementary English lessons using AI chatbots, addressing crucial aspects like media selection, motivation, feedback, and collaboration. Through a rigorous methodology involving expert validation and usability evaluation, the study offers practical guidelines to bridge the gap between theoretical insights and effective implementation, paving the way for enhanced language instruction and educational adaptability in diverse contexts.
A study analyzing ChatGPT's responses on ecological restoration reveals biases towards Western academia and forest-centric approaches, neglecting indigenous knowledge and non-forest ecosystems. Urgent measures are proposed to ensure ethical AI practices, including transparency, decolonial formulations, and consideration of gender, race, and ethnicity in knowledge systems. Addressing data access and ownership issues is crucial for promoting inclusivity and transparency in embracing environmental justice perspectives.
This study from Stanford University delves into the use of intelligent social agents (ISAs), such as the chatbot Replika powered by advanced language models, by students dealing with loneliness and suicidal thoughts. The research, combining quantitative and qualitative data, uncovers positive outcomes, including reduced anxiety and increased well-being, shedding light on the potential benefits and challenges of employing ISAs for mental health support among students facing high levels of stress and loneliness.
This study explores the acceptance of chatbots among insurance policyholders. Using the Technology Acceptance Model (TAM), the research emphasizes the crucial role of trust in shaping attitudes and behavioral intentions toward chatbots, providing valuable insights for the insurance industry to enhance customer acceptance and effective implementation of conversational agents.