AI Doesn’t Have to Be the Enemy

Alleviating client fears of AI in design processes

6 min read · Apr 11, 2024

By Kathryn Marinaro, argodesign

Our clients are worried about AI.

They see the potential of adding AI capabilities to their products, but are reluctant to use new tools in their process to create those products. I get it. There are some real and valid concerns around how gen AI is used — what inputs are given, how models are trained, how or if the outputs can be copyrighted, and issues with infringing on others’ copyrights.

Recently we’ve noticed an uptick in contracts addressing the use of AI during the programs we’re hired to deliver. Some companies are even issuing blanket bans on the use of any generative AI tools by design teams. This can create its own unintended impacts, possibly even causing worse outcomes in the work itself, especially as teams have become comfortable utilizing these tools and more design tools are incorporating gen AI into their core feature sets.

Even if you’re not a client, you’ve probably felt some of these same concerns as gen AI becomes more popular and matures. To help companies and individuals think through the correct approach for generative AI, I want to address some common concerns, how you can alleviate those concerns, and how to communicate the positive impact these tools can have on work overall.

Framing & Alleviating Concerns
There are valid concerns about the use of gen AI to create and deliver work, and clients should put some guardrails around AI tools in their contracts. But you don’t need to completely eliminate the use of these tools to guard against risk. There are ways to reduce exposure to risk and still gain the value that gen AI provides to designers and teams. With one recent client, we framed our request to use gen AI around what data we input into any gen AI tool and how we protect the client from copyright issues with the outputs of gen AI tools.

The first concern for clients is that their proprietary data might be fed into an LLM or other form of gen AI. Inputs to an LLM can be used to train that system, improving its algorithm and outputs. That means the client's proprietary data is now informing outputs created for any user of the system, not just the client, effectively giving competitors the same advantage. It also risks the privacy of their core data and may create compliance problems with government policies or other security requirements.

There are some gen AI tools that allow you to create sandboxes and ensure input data isn't used beyond a specific, personal instance. But without assessing each AI tool, it's difficult for a client to fully trust that an agency or design team is going to protect their data without including that stipulation in the contract.

We mitigate this issue by ensuring, and putting into our contract, that we don’t input any of the client’s data or specific details about the proprietary strategy or products into any gen AI tool. This requirement doesn’t impact our design teams much because we’re using the tools in a different way, mostly to improve our internal processes. But it gives our clients peace of mind.

Sometimes clients require us to share the specific tasks we tackle with gen AI so they can better understand the types of inputs and outputs we're working with. Examples of how we use text-based gen AI include:

  • building and assessing research questions
  • generating topics or adjacencies to consider during competitive and comparative research
  • informing and improving wording for workshop activities
  • helping as a brain buddy to improve frameworks for understanding complex systems

We utilize image generation to express ideas, build mood boards, and as inspiration to our art direction process for presentation materials. Seeing how gen AI supports our design process helps clients feel comfortable, especially once they see that none of the tasks have a direct impact on their product or their data.

The second major concern clients have is around using the outputs of gen AI. Copyright drives most of this concern — both owning the copyright to outputs and potentially infringing on existing copyrights due to gen AI’s referential nature.

The US has already determined that image outputs of generative AI tools can't be copyrighted, even when it was an individual's prompt engineering that created the image. That's an issue for anyone who wants to use AI-generated images in external publications like an ad campaign or other placements, because everyone else can use those images without permission. Other types of AI outputs haven't been tested in the legal system yet, so it's undetermined who "owns" textual outputs or other forms of creative use of gen AI. The other side of this concern is that generated materials could too closely reference specific copyrighted works, such as a piece of art, a character, or a text that's close enough to be considered plagiarism.

For both of these concerns, we try not to use AI outputs straight from the tool. We always apply some sort of curation, edits, improvements, or final touches to the materials. These improvements create ownership and a copyrightable end product, but we take this approach for more than just that reason. We think of our AI tools as an "unpaid intern," and just as we would never deliver an intern's work directly to a paying client, we would never use an AI's output without careful consideration of its quality: double-checking its references, reverse image searching to see if it's too referential, and improving it to meet our standards.

In addition to this mental model, we like to discuss how our design teams primarily use gen AI for our internal processes and rarely for final deliverables. Generated images may appear in inspiration or process presentations to establish directions to consider before our designers create the final image in a different tool such as Figma or Illustrator. The artifacts we deliver at the end of a program are typically screen designs, design systems, and strategic and detailed documentation. These artifacts are almost never created by gen AI and always have the final touch of a designer.

Impact of the Tools for Teams
I also think it’s important to communicate the positive impact these tools have on design teams’ processes and the value that using gen AI tools provides to the client. The main reason our design teams use generative AI tools is to improve the quality and speed of our processes.

Since these tools act like an unpaid intern, they can provide almost an additional person’s worth of value to the team without the cost of an additional employee. By leaning on AI to do some of the grunt work and make our process more efficient, we create more time for the uniquely human parts of design like empathy-building, cross-pollination, strategic decisions, and squishy, cognitive leaps.

One final point is that design tools are beginning to embed AI features into their toolsets. Our main tools for screen design and ideation are Figma and FigJam, and as of this writing, they've begun incorporating individual features that utilize gen AI in different ways. There is also a multitude of plugins for these tools that use AI to improve the process of designing in Figma. These features are sometimes optional or can be turned off, but by adopting a blanket prohibition on the use of generative AI, clients unnecessarily handicap the teams they'd like to work with.

Clients have valid concerns about the inputs and outputs of gen AI in relation to their sensitive data and end products. By taking the time to address and discuss how to alleviate those concerns, design teams can continue to deliver high-quality work to security-sensitive clients. This discussion also provides a useful reminder to designers to manage clients' data properly, take the time to understand how our AI tools are trained, and treat any AI outputs as we would an unpaid intern's work (i.e., editing, curating, and improving it). Hopefully this article gives you the framework you need to advocate for, and properly use, the best gen AI tools for your next project!

Kathryn Marinaro is an award-winning Creative Director who envisions the future and develops products and strategies for a wide variety of clients at argodesign. She is the author of Prototyping for Designers, published by O’Reilly, and has employed user-centered methodologies to create and iterate on impactful experiences in health wearables, AI interaction patterns, AI image recognition and training interfaces, and cloud development tools, while working on world-class design teams like IBM Watson Visioneering and IBM Mobile Innovation Lab.
