Can AI help you win your next brief?

Understanding the brief is the very first step in the lifecycle of a creative project, yet it’s all too easy to fall at that first hurdle. With AI tools now widely available, is this new collaborator a team’s newest asset, or do too many (mechanical) cooks spoil the broth?


In this first article of Shades of Intelligence, a new series investigating artificial intelligence and the creative industry, we look at how machine learning could alter the first three steps in any creative project.

Whether you’re curious or cautious about the potential impact of AI on your practice, learn where it can be implemented – and the positives and negatives – via creative case studies and use cases by those in the know. This article looks at three initial steps: defining the brief, idea generation and pitching. For further investigations into the later steps of the creative process, head here.

Even if you’re inexperienced with machine learning, or rightfully wary of adopting it in your creative practice, one benefit of AI most people are aware of is its ability to simplify problems and, in turn, potentially solve them.

Perhaps there’s a piece of research you need to understand but the language is stuffy and overcomplicated. Artificial intelligence can help by providing a synopsis. Or maybe you’re struggling to find a way to clearly communicate a point at work. Language tools, like ChatGPT, may be able to clarify the issue and speak to it directly. In our everyday digital experiences, similar tools are already guiding us without our noticing: the spell check monitoring this article, the map app that guided your journey this morning, the search function that led you to this very site.
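If you’re curious what that summarising step looks like in practice, here is a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are illustrative assumptions, not a recommendation.

```python
# A minimal sketch: asking a language model for a plain-English synopsis
# of dense research text. Assumes the OpenAI Python SDK and an
# OPENAI_API_KEY environment variable; model and prompt are illustrative.
from openai import OpenAI

client = OpenAI()

dense_text = "..."  # paste the stuffy, overcomplicated passage here

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": "Summarise the user's text in plain English, in three sentences."},
        {"role": "user", "content": dense_text},
    ],
)

print(response.choices[0].message.content)  # the synopsis
```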

With this in mind, it’s interesting to consider whether adopting AI tools at the very beginning of a project might help creatives to get to the crux of the problem hidden in a brief. Will it miss the mark completely or translate client business-speak and let the team concentrate on what they do best?

Step One: Defining the Brief

Advertising agency Creature London has been quick to adopt AI in its processes, having already launched two bespoke AI tools for its teams to use. One, Impala, speaks to this initial step in any creative project. It’s a strategy-focused tool, as Meg Egan, a senior creative at the agency, explains: “It can get a client from a blank piece of paper to a fully tested proposition in a week.” The second tool, Magic Ant, concentrates more on production, allowing the agency to take on clients with smaller budgets, since visuals can be produced more quickly and at a lower cost to the agency than before.

The former speaks to how Meg, a creative curious about machine learning’s capabilities, uses “AI as a shortcut, especially in the initial ideation phase,” she tells us. “If a client is struggling to articulate exactly what they want, you can lay out lots of hypothetical scenarios quickly using AI – be it ChatGPT, Dall-E or Midjourney – and help them visualise it.” Like the ideation phase of any pitch or project, much of what machine learning tools produce is likely to be wrong, but the process still “means you’re a step closer to what’s right,” she says. “And you haven’t burnt out all your brainpower doing it.”

A recent example of this process was a pitch at Creature London in which the client’s brief called for a new brand mascot. The team still huddled together to develop character profiles, but rather than sketching them as they would have just a few months ago, “we simply fed these characters into Midjourney and instantly got lots of interesting results back,” says Meg.

When using machine learning tools in this way, how an artificially generated idea or visual might be represented in the final outcome is still debatable. “AI outputs can be wild. Just ask it to try and imagine two people arm wrestling and you’ll see what I mean. Fingers everywhere,” says Meg. At the other end of the spectrum – outside of otherworldly (but often off-the-mark) creations – AI outputs can also be predictable. “If you’re not using unique inputs and just relying on the AI to figure it out, you can quickly become part of a big homogeneous blob.”

With this in mind, and taking her own experience into account, if you’re considering using AI’s creative capabilities to help you define a brief, Meg recommends considering it as a “co-pilot” in your work. “We’re still steering the ship,” she says, “and in a world where client needs are more demanding than ever, and everything moves so fast, if we don’t embrace this time-saving new way of working, we risk being left behind.”

Step Two: Idea Generation

If the above has convinced you to onboard an AI tool as a creative co-pilot, the next step you may be considering is developing the idea in more detail. Can enrolling an AI team member further the potential of a concept you’ve come up with, or will one too many (mechanical) cooks spoil the broth?

The way society collaborates and connects via technology is the key area that interaction designer Aosheng Ran investigates. Originally from Sichuan, China, and now based in San Francisco, Aosheng’s practice creates digital spaces for coexistence – an area they’re currently examining at collaborative design tool Figma. It’s here that Aosheng can be found imagining “digital spaces that challenge the paradigms,” and asking questions like: “Could meeting online feel more like sitting by a big dining table, where we could feel our presence and see each other’s body language?”

One example of Aosheng’s exciting practice – they were also a founding designer at Sprout, “a cosy, playful and human computing medium for people to be together” – is their work on Jambot. An AI-driven function integrated into Figma’s whiteboard tool FigJam, Jambot is a widget that encourages teams to “ideate, summarise and riff”. Essentially, as Figma engineers Dan Mejía and Sam Dixon put it to the team: “a visual version of ChatGPT”.

For anyone unfamiliar, FigJam is an infinite digital canvas where teams – specifically those working in a visual medium – can collaborate. For Aosheng, the usefulness of large language models (LLMs) like ChatGPT is often restricted by their text-based interfaces. “Though it’s been nearly a year since ChatGPT was released, the majority of GPT-powered apps adopted a similar form of customer support website pop-ups. It feels like we’re talking to a computer through a keyhole,” they say. “LLMs are increasingly capable of appearing as humans and interacting in natural languages, but we had to stare at a single text field, limit our modality of expressions and act like computers.”

In light of this, Jambot builds upon the hallmarks of visual collaboration within FigJam, taking shape as “little node connectors that indicate input and output, which are represented as stickies”. In practice, if you ask Jambot “to ideate, answer a question, or explain a topic, it chains suggested next steps to keep you in the flow state,” says Aosheng.

This “flow state” Aosheng refers to is a state of mind that they see as necessary for idea generation. Beyond the sense of playfulness FigJam has aesthetically (“we’re allowed to be comfortable and goofy here”), one particular function, “rabbit hole”, encourages unexpected routes by suggesting adjacent topics. “For example, when you apply the ‘rabbit hole’ function to a concept you’re learning, or a paragraph that you read somewhere (e.g. Flamenco), Jambot will output a list of relevant sub-topics to dig deeper (the history, regional variations of Flamenco music, etc.).” This capability extends to group settings with multiplayer interaction, an element still relatively rare for AI tools.
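Figma hasn’t published Jambot’s internals, but for a rough sense of the mechanics, a “rabbit hole”-style function can be approximated with a single LLM call that returns adjacent sub-topics. Everything below – the function name, prompt and model – is a hypothetical sketch using the OpenAI Python SDK, not Jambot’s actual implementation.

```python
# A hypothetical "rabbit hole"-style function: given a concept, ask a
# language model for adjacent sub-topics to explore. This is NOT Jambot's
# real implementation; the prompt and model name are illustrative.
from openai import OpenAI

client = OpenAI()

def rabbit_hole(concept: str, n: int = 5) -> list[str]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": f"Suggest {n} adjacent sub-topics worth exploring, one per line."},
            {"role": "user", "content": concept},
        ],
    )
    return response.choices[0].message.content.splitlines()

# rabbit_hole("Flamenco") might return its history, regional variations
# and so on; feeding any result back in chains the "stickies" further.
```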

Although Aosheng’s practice, and the work they’ve brought to life at Figma, suggest a familiarity with Meg’s “co-pilot” analogy, from a personal perspective the designer admits, “I oscillate between optimism and scepticism” when it comes to integrating AI into creativity. Like many who are curious about its capabilities, “I’m excited by the future where AI acts as a collaborator that inspires new creative thought,” they say, while remaining wary “of assumptions that we can optimise the entire world with simulation and autocomplete our way into the future.”

By nudging concepts down rabbit holes or facilitating a sense of comfort amongst team members – digital or otherwise – tools like Jambot present the option of using artificial intelligence as another brain to bounce ideas off. The fully formed idea is unlikely to develop from its ramblings, but an unanticipated route just might.

As Aosheng sees it: “Ideas grow between chaos of work-in-progress through repeated trial and error.” Within a team, in an IRL setting, “Good human collaborators bring ideas that inspire us to make them better and discover new ways of thinking.” On the other hand, useful artificial intelligence programmes or “good AI collaborators” don’t present output “as a final verdict; they share suggestions in a tangible, low-fidelity form, so that you’re invited to build upon them.”

Step Three: Pitching

The much-anticipated next step of any creative project is the big pitch. Whether you’re pitching internally or externally, distilling your idea into an engaging but bitesize presentation is often a struggle – especially if you and your team have jumped in and out of rabbit holes to get there.

An individual with plenty of experience pushing the boundaries of creative technology – using it as a tool to create work, but also as a helpful team component – is Matteo Loglio. An interaction and product designer, Matteo created real-life AI products during his time at Google Creative Lab, like NSynth Super, an AI musical instrument later used by Grimes on her album Miss Anthropocene. But he has also created simpler executions of technology, like Primo Toys’ educational robots for children. Now running his own studio, oio, as co-founder and design director, Matteo leads “a hybrid distributed team of human and other intelligences”. It even has an AI creative director: Roby.

With a mission to “design future products and tools for a less boring future,” much of oio’s work sits at the early stage of product development, “towards the conceptual bit”, as Matteo puts it. The studio has plenty of experience with generative AI, responding to briefs “shaped around finding use cases of AI in the current (and future) product landscape,” explains Matteo. Put plainly: “As a creative company working with emerging technology, we’re deep in the AI rabbit hole. Eating our own dogfood is one way of learning and experiencing firsthand the future of creativity with AI.”

When it comes to bringing this technology in at the pitch stage of a project, Matteo admits that it “really depends on how the tool is actually used”. Mirroring the opinions of Meg and Aosheng, he says: “AI alone probably won’t make a significant difference in your pitching process. It may give you some confidence, but if used carelessly, everything looks dull and artificial.”

If you’re looking to build upon the art of pitching and generative AI tools feel like a suitable option, Matteo’s first piece of advice is to simply practise, “just like with any other tool”. If involving an AI in the idea generation process feels a step too far, how about using LLMs to test your concept instead? “Running your ideas through large language models, such as Bard or ChatGPT, can be a good exercise to get feedback, the same way you would send your work to some of your friends and colleagues,” says Matteo.
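As a concrete illustration of that exercise, here is a minimal sketch of running a pitch outline past an LLM for feedback, again assuming the OpenAI Python SDK; the critic persona and prompt wording are one possible phrasing, not a prescription.

```python
# A minimal sketch of the feedback exercise Matteo describes: run a pitch
# outline past a language model as you would past a colleague. Assumes the
# OpenAI Python SDK; the persona and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()

pitch = """
Concept: a new brand mascot built around ...
Audience: ...
Why it works: ...
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system",
         "content": ("You are a sceptical creative director. Point out the "
                     "weakest part of this pitch and one question a client "
                     "is likely to ask.")},
        {"role": "user", "content": pitch},
    ],
)

print(response.choices[0].message.content)
```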

However, it’s worth keeping in mind that “the outputs of these models are often superficial, but they can be just another point of view you may not have considered.” It’s also worth noting that the predictability of generative systems can be a huge benefit when it comes to the logistical considerations of a project. This can be helpful if you’re pitching in a second language or perhaps needing to distil your thinking into a script outline you can build upon. “It’s definitely another tool under your belt, and mastering it while syncing all the other many tools can absolutely be beneficial across different stages of a project, pitching included,” says Matteo. In short: “It’s like talking to a general expert.”

As we come to the end of investigating how AI could alter those first few steps of a creative brief or idea, one overarching takeaway is clear: working with machine learning in a creative capacity has to be done in conjunction with real-life teams or individuals.

This sense of collaboration, with a healthy dose of caution, has been clear in each of our conversations. Even those regularly working with AI argue that human-centred creativity will remain, despite the changes these emerging tools will bring. As Matteo points out: “Just like the iPhone turned designers into UX and UI specialists, soon designers will have to adapt. New professions will have to fill new gaps in the creative market. I think that’s exciting.”

Aosheng, on the other hand, is hopeful for a future where AI collaborators push our ideas to new heights. “I believe it is important to make room for our minds to explore – where ineffable feelings could emerge, irreducible thoughts could marinate, and undetermined meanings could form,” they say. There is also hope, in Meg’s eyes, for a more democratic playing field in terms of tool adoption. While she points out that “cheapness and efficiency shouldn’t win over quality,” she also feels that, if used wisely, AI could offer access to “visuals and effects that, until recently, were reserved for those with huge budgets. We’re witnessing the start of a creative revolution. Suddenly we’re more creative than ever, and I for one think that’s wonderful.”

To explore how artificial intelligence tools are impacting the later stages of the creative process, continue reading Shades of Intelligence below.

Glossary

ChatGPT: Free to use, ChatGPT is a large language model chatbot created by OpenAI. Users can ask the model questions to gather information, or improve their writing with its guidance.

Midjourney: Midjourney is an AI text-to-image tool that generates images from written prompts. It is currently available via Discord, where users receive four images per prompt before choosing which they would like to upscale.

Dall-E: Developed by OpenAI, Dall-E is another text-to-image AI tool. Users can create new imagery from prompts, “outpaint” to expand existing images, “inpaint” to make realistic edits, or generate “variations” of an inputted image.

Figma: Figma is a collaborative interface design tool that runs in the web browser. It is helpful for generating ideas amongst teams, prototyping and developing products, and is particularly popular with web designers.

LLM: A large language model, known as an LLM, is an AI model trained on large sets of text data to understand inputs and generate responses to them. ChatGPT is an example of an LLM.

Google Bard: Google’s LLM tool, Bard, can generate text, translate languages and write content, or just “explain things simply”.


About the Author

Lucy Bourton

Lucy (she/her) is the senior editor at Insights, a research-driven department at It's Nice That. Get in contact with her for potential Insights collaborations or to discuss Insights' fortnightly column, POV. Lucy has been a part of the team at It's Nice That since 2016, first joining as a staff writer after graduating from Chelsea College of Art with a degree in Graphic Design Communication.

