Note: This is part 2 in a series. Read part 1 here.
Note from the Author: The following article was not generated with ChatGPT or other AI text generators. I feel that analyzing AI in the context of design requires self-immersion and reflection to genuinely share what we’ve discovered. Images were generated in Midjourney 4.
When I first entered an image prompt into Midjourney and hit the enter key, my reaction was simply: Wow! This was wild.
/imagine an elegant lamp
“Any sufficiently advanced technology is indistinguishable from magic.” – Arthur C. Clarke
The next challenge, as with any new tool, was learning how to use it to make what I had envisioned in my head – which certainly wasn’t a Victorian-era lamp! I kept using it, trying different word salads to achieve a design aesthetic for a nightstand lamp, a fan, and a clock – all things I’ve designed in the past.
As I continued to generate designs with AI, I felt a mix of excitement and unease – mostly due to the uncertainty of it all. Could AI, computers, and machines replace us all? An interesting comment I read was that AI was supposed to free us from hard labor so that we could be creative, yet here it appeared to be taking away creativity. Undoubtedly, society and civilization will change because of AI, but how remains to be seen.
For initial AI exploration, I opted to use Midjourney for our testing because, quite frankly, it had the best-looking visuals. It was therefore unsurprising to hear during a weekly Discord Q&A session hosted by Midjourney founder David Holz that Midjourney “focuses on making beautiful images and to help people explore.”
What we tried first was a series of prompts to home in on a design similar to what I’ve done in the past. Boiled down to its essence, a product is really just a concept with a design language, engineered to be manufacturable. Focusing on just the first two, could I achieve a similar concept and design language using only an AI text-to-image generator?
What I learned was that more than half of my time was spent trying to find the right words to explain what’s in my head to a computer. As a designer who can already sketch, it would be far more efficient to have just sketched out this linework. For initial exploration, it seems more appropriate for someone who can’t sketch, or if you’ve hit writer’s block. With enough finessing of word prompts, there were some decent designs, at least as a starting point.
What we also learned was that Midjourney had trouble comprehending “the concept.” When we asked it to generate a lamp in the style of a certain designer, it created a white block on a stand without a light, or even an indication of where a light could go. It looked like anything but a lamp. When we asked it to generate a desk fan, it somehow combined a propeller fan and a hand fan. And when we asked it to generate an alarm clock, it blended a digital clock and an analog clock.
That being said, while doing initial AI design research, we came across an interesting example of product combinations made by marketing creative director Eric Groza, who created an imaginary collaboration between Patagonia and Ikea (see image below). This was a fun example and a way to explore how a product could look with another style or fusion of brands applied to it. Since then, I’ve found CollXbs made by Brxnds, which showcases these kinds of design fusions as inspiration for marketing.
Image Credit: Eric Groza
While I’ve highlighted some of the challenges we’ve encountered using AI in design over the last ten months, there are positives as well. Here’s where we see it working so far:
Idea generation: During early strategy phases, you can generate ideas around possible paths. We’ve even had a couple of clients come to us with AI-generated imagery as starting points. It creates a conversation starter with clients around what they do and don’t like, which can help focus the design work in later phases.
Style exploration: It can be used to apply different styles to the same object and even to generate some fun thought-provoking collaborations – a way to explore design languages.
Mood board generation: You can generate images from scratch which can be used in moodboards (more on this in later posts), another way to explore design languages.
Escaping writer’s block: If you’re stuck in exploration, it’s a great way to loosen those gears. I’ve found that even “stupid” ideas can give birth to compelling concepts.
Sketching underlays: If you need a quick and dirty underlay, you can generate something without having to go into CAD and spend time there, although we prefer CAD for greater accuracy.
Render backgrounds: If you need a scene or abstract imagery for a render backdrop, text-to-image AI generators work pretty well.
In the next part of this series, I’ll provide a summary of my thoughts and takeaways on AI in design to date, and what to look for in the future.
And speaking of the future, we’ll be attending CES 2024, where I’m looking forward to seeing companies that are exploring the use of AI, like Samsung and LG. If you’ll be there, message me on LinkedIn. Have your AI talk to my AI!
David Bogdal is Director of ID & UX at Spanner.
He enjoys the challenge of working on a wide range of client products which all require multi-faceted approaches. Outside of work, David's into movies, house projects, and family time.
Interested in learning more about what it’s like to collaborate with Spanner?