Generative AI’s Seeming Ability To Do So Much With So Little Merits Caution
By Pat Marsh
Time appropriated, not saved
The tide has turned significantly against generative AI tools in recent weeks, partially due to the shoddy results that have been cranked out and thrust upon us. Google ultimately pulled an Olympics ad promoting its AI chatbot Gemini after the uproar over it, and Procreate recently announced it will never integrate gen AI tools. With school returning, there has been no shortage of consternation around AI's role in education. These are unsurprising developments, really. Any digital tool left to its own devices is guaranteed to spin off the rails. The entire point of technology is to help humans do more, better. Removing the human touch turns it into an exercise in automation.
This was all brought home for me personally when a recent project of mine, involving a microcontroller and ChatGPT, shed light on the nuanced interplay between these advancements and their practical implementation. Three things became apparent over the course of my experiments: 1) AI affords a wealth of expanded capabilities (almost to a fault), juxtaposed against 2) the inherent limitations and challenges of these tools, which together necessitate 3) an evolving role for human expertise in maintaining a balance between the two.
Rapid prototyping on steroids
My task was to integrate a 32-bit microcontroller with a monochrome display to create a UI for an embedded product. The microcontroller, equipped with WiFi and Bluetooth, operates as a tiny, powerful computer capable of being remote-controlled via a mobile web app or serial commands. As I initially had limited knowledge of these components, I used ChatGPT to quickly gain insights and identify the right tools and techniques. The AI tool's ability to provide visibility into cutting-edge products and align specific project requirements with appropriate hardware and software solutions not only saved me time but also exposed and contextualized a raft of new information that better informed the project.
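To make that setup concrete, here is a minimal sketch, in the spirit of the project described above, of an ESP32 taking serial commands and drawing on a monochrome display. The specific display (an SSD1306 128x64 I2C OLED), the Adafruit GFX/SSD1306 libraries, the I2C address, and the command names are all illustrative assumptions on my part, not the actual project hardware or code.

// A minimal Arduino-style (C++) sketch: an ESP32 listening for serial commands
// and rendering text on a monochrome OLED. Display model, address, and command
// names are illustrative assumptions, not the project's actual hardware or code.
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_SSD1306.h>

Adafruit_SSD1306 display(128, 64, &Wire, -1);  // width, height, I2C bus, no reset pin

// Clear the screen and render a single line of text.
void showMessage(const String &msg) {
  display.clearDisplay();
  display.setTextSize(1);
  display.setTextColor(SSD1306_WHITE);
  display.setCursor(0, 0);
  display.println(msg);
  display.display();  // push the framebuffer out to the panel
}

void setup() {
  Serial.begin(115200);
  if (!display.begin(SSD1306_SWITCHCAPVCC, 0x3C)) {  // 0x3C is a common OLED address
    Serial.println("Display init failed");
    while (true) { delay(1000); }
  }
  showMessage("Ready");
}

void loop() {
  // Read one newline-terminated command from the serial port and act on it.
  if (Serial.available()) {
    String cmd = Serial.readStringUntil('\n');
    cmd.trim();
    if (cmd.startsWith("show ")) {
      showMessage(cmd.substring(5));  // e.g. "show Hello" renders "Hello"
    } else if (cmd == "clear") {
      display.clearDisplay();
      display.display();
    } else {
      Serial.print("Unknown command: ");
      Serial.println(cmd);
    }
  }
}

The same command-handler pattern could sit behind a small HTTP endpoint to cover the mobile web app path.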
This ability to "shop globally" and access a vast array of resources through AI represents a paradigm shift in how engineers and developers approach problem-solving. The traditional reliance on outdated methods for finding components and solutions is being replaced by AI-driven recommendations, streamlining the development process. However, this increased capability comes with a need for an appropriate level of rigor and understanding to fully harness these tools. Real-world decisions rarely come down to a strict "this is better than that"; more often they hinge on understanding the nuanced tradeoffs between options.
Staying sane in a blizzard of information
While AI significantly accelerated the development process, I encountered several issues in which ChatGPT's suggestions were incorrect or incomplete, particularly when dealing with specific variants of the microcontroller. For instance, the ESP32 microcontroller, built around an open-source development ecosystem, ships on boards from many different manufacturers, each with its own pinouts and quirks. ChatGPT couldn't wrap its 'brain' around this and kept providing the wrong information, often defaulting to the more generalized versions. This underscores the importance of human expertise in the process; in practice it forces closer inspection of the details behind a problem.
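That variant problem shows up directly in code. The fragment below is a hypothetical illustration of how pin assignments can differ between ESP32 boards from different vendors; the macro names and pin numbers are made up for the example, and are exactly the kind of detail a generalized answer glosses over.

// Hypothetical board-variant guards: the same firmware, different I2C pins per vendor.
// BOARD_VENDOR_A / BOARD_VENDOR_B and the pin numbers are illustrative only.
#if defined(BOARD_VENDOR_A)
  constexpr int PIN_OLED_SDA = 4;    // some boards route the display to non-default pins
  constexpr int PIN_OLED_SCL = 15;
#elif defined(BOARD_VENDOR_B)
  constexpr int PIN_OLED_SDA = 21;   // the "generic" ESP32 defaults a general answer tends to assume
  constexpr int PIN_OLED_SCL = 22;
#else
  #error "Select a board variant before building"
#endif
// Wire.begin(PIN_OLED_SDA, PIN_OLED_SCL) would then pick up the variant-correct pins.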
I also encountered a critical issue with GPT's handling of memory and its tendency to generate excessive amounts of information. It floods users with data, often without the necessary context or accuracy, creating a "needle in a haystack" problem. This is exacerbated by AI's inability to discern subtleties and context, which forces aggressive, exhaustive interrogation of AI-generated outputs.
For example, GPT answered most prompts on this project with exhaustive code. To mitigate this, I asked it to provide code only when I explicitly requested it, and even then it quickly reverted. The balance between leveraging AI for efficiency and managing the resulting complexity is a delicate one that requires careful consideration. We are in the infancy of managing, or even translating, memory as a contextually nuanced feature, and the current ChatGPT interface for memory management is woefully inadequate.
The evolving role of human expertise in managing AI technologies
This project exemplified the dual nature of gen AI tools: while they can augment human capabilities and accelerate development, they also impose new demands on users. The role of the developer is transformed into that of an orchestrator, balancing the flexibility and power of AI with the necessity for meticulous oversight and management.
It's therefore vital to implement best practices, such as versioning and roadmap planning, to maintain control over the development process and to evolve the code gradually through stable milestones.
AI tools, while powerful, require a disciplined approach to avoid creating additional work and complexity. The analogy of walking through a forest and inadvertently stepping into a different universe captures the disorienting potential of AI-driven development. Micro-versioning and maintaining a separate source of truth are good strategies for addressing this.
Choosing your moment for greater success
I've found this proliferation of generative AI to be invaluable for creative work if you have a healthy perspective about when and how to use it. It's great at the edges of the process, as a top-down and bottom-up tool for expanding and exploring ideas. It doesn't do the middle work well, demanding constant interrogation to arrive at your destination. And it will always give you the average answer. In my project, GPT repeatedly offered generic advice like "make sure your hardware has power" and "use a multimeter to check voltages for your circuits." (Spoiler alert: I never did either, as neither was needed to solve any actual problem I had.) It's akin to talking to the first tier of customer service: "Did you try restarting your computer?"
In general, don't let AI be a crutch. I think everyone intuitively understands that some aspects of work are more enjoyable than others, and that some things that are difficult make you a more valuable contributor to the business. It's tempting to drive everything through generative tools, but you will find that AI gives you a ton of information to read, peppered with mistakes it doesn't acknowledge unless you point them out. Without a clear goal, you might end up with more information and less direction than if you had engaged with the work more deeply yourself.
Assessing the value for businesses
AI will raise exposure to new ideas, creating more work and better work.
Using AI isn’t about removing work or saving time; it’s about putting the possibilities that were previously out of reach into the workstream so they can be utilized more effectively. Without using generative AI, I could not have built a working proof-of-concept within the timeframe the project needed. It amplified my ability tenfold. That is simply amazing.
AI needs a babysitter.
With such powerful tools, mistakes are amplified quickly, and because the full context of concern is rarely translated into these systems, the responsibility for isolating AI output falls on the user. I deliberately abstracted aspects of the project that were sensitive to the client and users; using code names and abstracting personally identifiable information (PII) are good starting points.
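As a rough illustration of that abstraction step, here is a small, self-contained C++ sketch that swaps sensitive terms for code names before any project text is pasted into a prompt. The client name, field name, and alias table are entirely hypothetical; a real project would need a vetted, client-specific list.

#include <iostream>
#include <map>
#include <string>

// Replace every occurrence of a sensitive term with its code name.
std::string scrub(std::string text, const std::map<std::string, std::string>& aliases) {
  for (const auto& [real, alias] : aliases) {
    std::size_t pos = 0;
    while ((pos = text.find(real, pos)) != std::string::npos) {
      text.replace(pos, real.size(), alias);
      pos += alias.size();
    }
  }
  return text;
}

int main() {
  // Hypothetical mapping of client-sensitive terms to neutral code names.
  const std::map<std::string, std::string> aliases = {
      {"AcmeMedical", "ClientX"},
      {"patient_record_id", "record_id"},
  };
  std::cout << scrub("AcmeMedical firmware reads patient_record_id over BLE", aliases) << "\n";
  // Prints: ClientX firmware reads record_id over BLE
}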
AI should enhance curiosity, not replace or stifle it.
At least for the moment, these models don't have advertising priorities built into them, so you can get broader recommendations on products, approaches, and techniques than you would otherwise. I've also found them helpful in surfacing better terminology for describing a problem; technical terms carry many nuances, and even just using generative AI to expose those nuances is advantageous.
Psychiatrist and author Phil Stutz once remarked that “You have three aspects of reality that nobody gets to avoid: pain, uncertainty, and constant work. Those are things you’re just gonna have to live with, no matter what. What will make you happy is the process. You have to learn how to love the process of dealing with those three things.”
One of the most important traits of design is to remain critical while aggressively applying technology in the context of what's most valuable. AI tools won't remove the life constants Stutz references, but, applied appropriately, they can elevate what we thought was possible. They aren't immune from healthy scrutiny and skepticism, nor should they be. Always question the process behind what AI is supporting; that keeps a critical human eye on how much we rely on this technology.
Pat Marsh is Principal Designer at argodesign, a global design firm designing beautiful, invisible solutions to some of technology’s most challenging problems. Over the past 18 years, he has designed digital products and services for both Fortune 500 companies and startups in consumer electronics, automotive, aviation, and healthcare, among many other industries.