AI Tools Play Distinct Roles in Creativity: Study Maps How LLMs, T2I, and T2-3D Boost Design Innovation

Not all AI is created equal, especially when it comes to creative design. New research reveals how language, image, and 3D-generating AI tools each shine at specific stages, offering a roadmap for smarter human–AI collaboration in creative workflows.

Research: Creative combinational design through generative AI in different dimensional representations: An exploration. Image Credit: Krot_Studio / Shutterstock

Creativity often emerges from the interplay of disparate ideas—a phenomenon known as combinational creativity. Traditionally, techniques like brainstorming, mind mapping, and analogical thinking have guided this process. Generative Artificial Intelligence (AI) introduces new avenues: large language models (LLMs) offer abstract conceptual blending, while text-to-image (T2I) and text-to-three-dimensional (T2-3D) models turn text prompts into vivid visuals or spatial forms. Yet despite their growing use, little research has clarified how these tools function across different stages of the creative process, leaving designers to guess which AI tool best fits a given task. Closing this gap requires in-depth studies that assess how AI models of different output dimensions contribute to the creative process.

A research team from Imperial College London, the University of Exeter, and Zhejiang University has tackled this gap. Their new study, published in the journal Design and Artificial Intelligence, investigates how generative AI models with different dimensional outputs support combinational creativity. Through two empirical studies involving expert and student designers, the team compared the performance of LLMs, T2I, and T2-3D models across ideation, visualization, and prototyping tasks. The results provide a practical framework for optimizing human-AI collaboration in real-world creative settings.

AI in Combinational Design Tasks

To map AI's creative potential, the researchers first asked expert designers to apply each AI type to six combinational tasks, including splicing, fusion, and deformation. LLMs performed best in linguistically based combinations, such as interpolation and replacement, but struggled with spatial tasks. In contrast, T2I and T2-3D models excelled at visual manipulations, with T2-3D models proving particularly adept at physical deformation.

In a second study, 24 design students each used one type of AI to complete a chair design challenge. Those using LLMs generated more conceptual ideas during the early, divergent phases but lacked visual clarity. T2I models helped externalize these ideas into sketches, while T2-3D tools offered robust support for building and evaluating physical prototypes. Together, the results suggest that each AI type offers unique strengths, and that the key lies in aligning the right tool with the right phase of the creative process.

Expert Commentary

"Understanding how different generative AI models influence creativity allows us to be more intentional in their application," said Prof. Peter Childs, co-author and design engineering expert at Imperial College London. "Our findings suggest that large language models are better suited to stimulate early-stage ideation, while text-to-image and text-to-3D tools are ideal for visualizing and validating ideas. This study helps developers and designers align AI capabilities with the creative process rather than using them as one-size-fits-all solutions."

Implications for Industry and Education

The study's insights are poised to reshape creative workflows across industries. Designers can now match AI tools to specific phases—LLMs for generating diverse concepts, T2I for rapidly visualizing designs, and T2-3D for translating ideas into functional prototypes. For educators and AI developers, the findings provide a blueprint for building more effective, phase-specific design tools. By focusing on each model's unique problem-solving capabilities, this research elevates the conversation around human–AI collaboration and paves the way for smarter, more adaptive creative ecosystems.

