

A botanical montage title graphic

Mind the Gap

From Generative AI to Integrated Visual Systems

Posted on March 27, 2026, by Peter Loomis




Introduction
1. Setting the Scene
2. Beyond the Prompt
3. So What?
4. Shifting Objectives
5. Systems of Meaning
6. Parallels
7. Obstacles
8. Trajectory
Conclusion


Introduction

I’ve been circling something recently, and it has to do with how we actually make use of the growing power of AI and the multitude of tools now at our disposal. To do this in a way that really means anything, I’m finding more and more that the results have to be more systemic than purely generative.

What I mean is that any single output from an AI, whether from a chatbot or a more agentic tool, can only do so much on its own, however fantastic it may be. But when that output is connected, or integrated, within a larger system, the leverage really starts to take over.


Placeholder image
Closeup of a white tulip with sunlight from behind

One image. One moment. The question is, what comes next? (SDXL1.0/ComfyUI)

1. Setting the Scene

Let me paint the picture a bit more. Some of us have used AI image generators at this point, and they can output some truly fantastic stuff. Especially with the right prompt engineering, with lighting, scenic, and referential cues, you can push an idea into what feels like hero territory.

I know I have. Since generative AI entered the market a few years ago, I've worked with many, from chat-based tools like ChatGPT and Gemini to more image-focused platforms like Leonardo and Playground. More recently, I've been working locally with tools like DiffusionBee and the node-based ComfyUI.


Placeholder image
A white bowl of white and green spring wildflowers on a white table in the sunlight in a white room.

Control is earned, not given. (DALL-E/ChatGPT)

2. Beyond the Prompt

Beyond the “fantastic,” however, there is an art to rendering anything that even approaches realism, or that adheres to the basic laws of physics. There’s also an art to prompt engineering itself, as mentioned above: understanding and describing the specific lighting, environmental, and photographic cues well enough to generate a desired type of image, even by referencing well-known artists and stylistic movements.

Photorealism takes both effective descriptions and effective models and settings to render anything worthwhile. And even then, there’s a lot of randomness involved. You can dial in a look, but there’s always an element of luck. Move into something like ComfyUI and now you have more control: samplers, refiners, different models, more parameters to tweak and adjust. You can start to shape the output more intentionally.
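To make that extra control concrete, here’s a minimal sketch of what “dialing in a look” amounts to in a node-based workflow: sweeping over the knobs a tool like ComfyUI exposes. The sampler names and value ranges are typical Stable Diffusion settings, but the actual rendering step is left out; this only builds the grid of candidate runs you would then compare by eye.

```python
from itertools import product

# Typical knobs a node-based tool like ComfyUI exposes. The names and
# ranges are common Stable Diffusion settings; no image is rendered here,
# this just enumerates the candidate configurations to compare.
samplers = ["euler_a", "dpmpp_2m"]
cfg_scales = [5.5, 7.0, 8.5]   # how strongly the prompt steers the image
seeds = [42, 1337]             # fixed seeds make a look reproducible

runs = [
    {"sampler": s, "cfg": c, "seed": seed, "steps": 30}
    for s, c, seed in product(samplers, cfg_scales, seeds)
]

# 2 samplers x 3 cfg scales x 2 seeds = 12 candidate renders to review
print(len(runs))
print(runs[0])
```

The point of fixing the seed in each run is that once a combination looks right, you can reproduce it exactly, which is precisely the control that a one-shot prompt box doesn’t give you.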


Placeholder image
Closeup of a water droplet hanging from a blade of grass

To paraphrase Rumi: “...not a drop in the ocean, [but] the entire ocean, in a drop.” (DALL-E/ChatGPT)

3. So What?

It’s cool and all. But so what? You’ve got an image. Maybe even a great one. Maybe something that looks photoreal enough to fool the eye.

Now what?

How do we take that one result and connect it to anything beyond itself?

Because on its own, even a strong image is still just a fragment. It can look finished, even convincing, but it doesn’t carry much weight. It has no memory, no trajectory, no real sense of belonging.

Which brings up a larger question that a friend asked me recently—if everything can be done now, then why do any of it at all?

Besides proving we can, I think the question has shifted: it stops being about whether something can be created and starts being about what it means.

So, the stakes are changing. We’re not talking about rendering something convincingly anymore. We’re talking about what happens after that, and what role the output actually plays.


Placeholder image
A bright, hummingbird suspended in sharp focus in front of a blurred flowery landscape

Always moving. Never just one flower. (SDXL1.0/ComfyUI)

4. Shifting Objectives

For me, the shift has been moving from generating outputs to building systems around them.

Instead of creating one image, it becomes about creating a set. A language. A direction.

In a recent example, I started working on a Spring-themed set of imagery—botanical elements, natural forms, building out a visual direction that could be used for branding, seasonal campaigns, or conceptual work.

On its own, each image is just that: an image. But combined with the others, paired with messaging, and supported by additional elements like typography, logo placement, and context, the images start to carry meaning beyond themselves. They start to belong.
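One way to sketch that shift from single outputs to a system, in plain Python with purely illustrative names and prompt text, is to treat the prompt itself as shared vocabulary: a base style that every image in the set inherits, with only the subject varying. This isn’t a real API, just a way of showing how a set carries one visual language.

```python
# Illustrative sketch: a shared style "language" combined with varying
# subjects, so every image in the set carries the same visual DNA.
BASE_STYLE = (
    "soft natural light, shallow depth of field, "
    "white and green palette, botanical, spring"
)

subjects = [
    "a white tulip in morning sun",
    "a bowl of spring wildflowers on a white table",
    "a water droplet on a blade of grass",
]

def build_set(subjects, style=BASE_STYLE):
    """Pair each subject with the shared style cues."""
    return [f"{subject}, {style}" for subject in subjects]

prompts = build_set(subjects)
print(prompts[0])
```

The design point is that the style lives in one place: change the palette or the lighting once, regenerate, and the whole set moves together, which is what makes the images read as a family rather than a pile of one-offs.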