The fusion of AI and prototype design effectively addresses the creativity crisis in innovation. Jake Carter, global director of innovation at Credera, illustrates, through examples and hypothetical scenarios, how AI can generate high-fidelity mockups and even code prototypes.
The adoption of design thinking in the innovation process highlights the critical importance of rapid prototyping and user testing as fundamental elements. This methodology relies on the creation of visual artifacts, or prototypes, to explore a concept in depth through direct interaction with end users. It transforms abstract ideas into tangible forms that elicit insightful feedback.
Interestingly, there is a counterintuitive inverse relationship between the polish of a prototype and the depth of the learning it generates. IDEO luminary Tim Brown explains this phenomenon succinctly:
“Prototypes should only require the time, effort and investment necessary to generate useful feedback and evolve an idea. The more ‘finished’ a prototype appears, the less likely its creators will be to pay attention to, and benefit from, feedback.”
To create prototypes that balance realism and flexibility, and that encourage feedback without inviting confirmation bias, various strategies can be used. My teams, for example, start with rudimentary paper sketches to capture and iterate on ideas, gradually progressing to more detailed mockups in Figma. For digital innovations, like mobile or web applications, we link these mockups together to give users a realistic experience of the application flow.
The advent of AI is revolutionizing the prototyping landscape. Now, tasks that previously took a designer hours can be accomplished in minutes using AI, enabling rapid creation and deployment of interactive app prototypes.
Prototyping with AI
Consider the following hypothetical scenario: You have been asked to help automobile manufacturer Toyota explore ways to better serve people with mobility disabilities. You’ve decided to focus specifically on the challenges of finding and purchasing a vehicle. In doing so, you explore multiple concepts simultaneously.
One of these concepts introduces a new way of browsing vehicles, focusing not on the vehicle’s specifications but on how it meets the needs of drivers and passengers with reduced mobility. If tasked with this, our team would create a paper prototype (Figure 1 below), which we typically hand off to a UI designer to turn into a high-fidelity Figma mockup, as shown in Figure 2.
Paper prototype (sketch) and Figma mockup
Source: Credera
Caption: Figures 1 and 2: Initial paper prototype (sketch) and high-fidelity Figma mockup based on the paper prototype
What if we used AI for prototyping instead?
To explore how well this would work, I provided the above paper sketch to OpenAI’s ChatGPT (using GPT-4 with DALL-E) and asked it to generate high-fidelity mockups based on the sketch. The first attempt (Figure 4) was interesting but deviated significantly from the concept, so I asked ChatGPT to try again but stick to the sketch elements and locations.
The result matched the sketch better, but the text was truncated and the image included several unnecessary elements. I tried again, providing very detailed instructions.
The result was remarkably professional. However, as a prototype, its value would still be limited because the text is unintelligible and the image file itself is not editable.
The prompt
Source: Credera
Caption: Prompt given to ChatGPT for image generation
Outputs
Source: Credera
Caption: Figures 4, 5 and 6: First attempt at an AI-generated mockup, improved AI-generated mockup, and professional-looking AI-generated mockup
The good news is that instead of using AI to generate mockups, we can use it to create code prototypes. The advantages are twofold:
- Code prototypes are easier to manipulate once generated, allowing us to adjust the result to better meet our needs.
- Code prototypes provide a level of interactivity that static image mockups lack.
As a test, I gave ChatGPT the same sketch we used above. Instead of asking it to generate an image, I asked it to produce the code needed to create a web application using the Streamlit library, a Python library that makes it particularly easy to build data-driven web applications. ChatGPT dutifully complied, producing the code shown in Figure 7.
The generated code
Source: Credera
Caption: Figure 7: Code generated by ChatGPT for the web application prototype
To turn this code into a prototype, I needed to copy it into a deployable project. I used GitHub Codespaces, a cloud-based development environment with easy integration with Streamlit. I even built on Streamlit’s demo application, meaning all I had to do was insert the new code (Figure 8).
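For anyone replicating this, the deployment step boils down to a couple of commands once the generated code is saved to a file (the filename `streamlit_app.py` here is an assumption; Streamlit's demo template uses a file by that name):

```shell
# Install the open-source Streamlit library
pip install streamlit

# Serve the prototype locally (Codespaces forwards the port automatically)
streamlit run streamlit_app.py
```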
GitHub Codespaces
Source: Credera
Caption: Figure 8: Deploying the prototype using GitHub Codespaces
I did some light cleanup of the code to better match the original hand-drawn sketch. First, I added a page navigation bar. Then I introduced tabs for reviews. Finally, I asked ChatGPT to create a simple logo for the page and added some spacing to polish the layout.
The final result is shown below (Figure 9). Is it perfect? No. That said, it’s probably enough to convey the concept so we can test it with potential users. Most importantly, it took just 20 minutes to produce, instead of the several hours a UI designer would have needed to create a high-fidelity mockup ready to share.
For those interested in following this model, it should be noted that there are some dependencies to this approach. First, of course, I needed access to ChatGPT, and since I wanted to use the new GPT-4 model with DALL-E for image generation, I needed a paid subscription to ChatGPT Plus. Second, I needed to install the open source Streamlit library to use these components. Finally, although Streamlit is very user-friendly, I needed basic coding knowledge to manipulate the layout the way I wanted.
The final prototype
Source: Credera
Caption: Figure 9: Final prototype with adjustments, ready for testing
What does it mean for the innovation process that AI can generate realistic prototypes? There are at least four interesting implications:
- Faster prototyping: The ability to prototype faster means we can realistically test more concepts than before. In a typical design sprint, for example, our team would traditionally prototype one or two concepts at most; with AI, we could realistically double that number without extending the sprint timeline.
- Flexibility: It becomes easier to make on-the-fly changes to prototypes because we no longer need to rely on a designer for updates. This allows for faster iteration, even allowing concept adjustments between testing sessions based on what we learn from users.
- More feedback opportunities: The prototypes themselves are becoming more disposable, and that’s a good thing! As Tim Brown pointed out, the less we invest in building a particular prototype, the more open we are to hearing critical feedback.
- Integration of human design: At the same time, prototypes may be less useful for creating internal momentum around an idea when generated by AI. This could lead to splitting the “prototyping” stage in the innovation lifecycle into two or more cycles, with the first cycle leveraging AI to sketch out an idea and subsequent cycles using human designers to improve fidelity.
Prototyping is, of course, just one of the many ways in which AI has the potential to change the innovation lifecycle. Research suggests that AI can help us with everything from generating insights about users to brainstorming ideas and even evaluating those ideas.
A recent Wharton experiment found that AI outperforms MBA students at generating ideas. As we survey the AI landscape, it is reassuring to know that real value can be created by integrating AI into the innovation process.
How can organizations reshape the innovation process with AI-powered prototyping? Let us know on Facebook, X, and LinkedIn. We would love to hear from you!
Image source: Shutterstock