About
I'm Russell: artist, system builder, visual critic. I draw, code, measure, and build frameworks that bridge art theory and computational logic. Together, these become tools that influence latent space, tools that measure and apply "structural influence" to the frame.
This site documents the Visual Thinking Lens, a measurement and critique system for compositional reasoning in AI-generated images. It doesn't optimize or enhance. It interrogates. It asks whether an alternative state should exist in place of the default.
What VTL does:
VTL critiques images not by how they look, but by how they hold up under structural questioning. It measures the geometry AI learns to repeat: placement offset (Δx, Δy), void ratio (rᵥ), packing density (ρᵣ), cohesion (μ), peripheral pull (xₚ), orientation (θ), structural thickness (dₛ).
These aren't aesthetic judgments. They're coordinates. Stable, reproducible, invisible to semantic evaluation.
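As a sketch of what such coordinates can look like in code, the functions below compute a placement offset and a void ratio from a binary mass mask. The formulas and normalizations here are my own illustrative assumptions, not VTL's published definitions:

```python
import numpy as np

def placement_offset(mask):
    """Offset of the mass centroid from frame center, normalized so
    each axis runs from -0.5 (left/top edge) to +0.5 (right/bottom edge).
    An illustrative stand-in for VTL's (Δx, Δy), not its actual formula."""
    h, w = mask.shape
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean() / w, ys.mean() / h
    return cx - 0.5, cy - 0.5

def void_ratio(mask):
    """Fraction of the frame carrying no mass (a stand-in for r_v)."""
    return 1.0 - mask.mean()

# A toy frame: a single block of mass in the upper-left quadrant.
frame = np.zeros((100, 100))
frame[10:40, 10:40] = 1.0

dx, dy = placement_offset(frame)  # both negative: mass sits left of and above center
rv = void_ratio(frame)            # 0.91: most of the frame is empty
```

The point is L6's claim made concrete: the same frame always yields the same numbers, regardless of what the mass depicts.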
The system operates through constraint architecture, applying pressure to compositional defaults so alternatives become visible. It works across platforms: Sora, Midjourney, GPT, SDXL, Firefly, OpenArt.
The work:
This site is part research documentation, part working sketchbook, part metrology for latent space. You'll find empirical studies showing compositional monoculture across thousands of images, measurement protocols, Jupyter notebooks, case studies, and practical applications.
Some of it is polished. Some of it documents the process of figuring things out. That's intentional. The work is transparent: methodology, code, measurements, dead ends included.
Background:
I've worked in advertising for 20+ years (R/GA, Digitas, Publicis) developing marketing strategy for global brands. I've exhibited work in Brooklyn and NYC galleries. I read Arnheim and Dunning, work with Python and computer vision, and approach composition as measurable physics.
I'm self-taught in code (started with Dreamweaver and Flash, progressed to building measurement infrastructure). I combine 25 years of figure drawing, painting, and photography practice with systematic evaluation frameworks.
This work is interdisciplinary by necessity: it sits at the intersection of art theory, computer vision, and generative AI evaluation, and it requires all three.
Consequence:
Current benchmarks measure semantic correctness. They're blind to compositional geometry. VTL fills that gap.
AI generates semantic infinity with geometric poverty. Models compress 75% of compositional space despite producing infinite subjects. This matters for model evaluation, training assessment, and anyone working with AI-generated imagery who wants authorship instead of defaults.
This isn't about making better images. I generally love all imagery. AI creates magnificent images. It's about maintaining the ability to ask for one that doesn't exist yet.
Get started:
If you want a starting place, begin with the kernel primitives (Δx,y, rᵥ, ρᵣ, μ, xₚ, θ, dₛ), or with the idea that we should be reading mass, not objects.
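To give "reading mass, not objects" a concrete shape: treat the image as a field of visual weight rather than a set of labeled things. The proxy below is a crude assumption of my own (darker pixels read as heavier), not the VTL protocol:

```python
import numpy as np

def mass_field(rgb):
    """Crude visual-mass proxy: darker pixels carry more weight.

    This inverts luminance only; perceived mass also involves contrast,
    edges, and saturation, which this sketch deliberately ignores.
    """
    luminance = rgb.mean(axis=2) / 255.0  # 0 = black, 1 = white
    return 1.0 - luminance

# A white frame with one dark square: the square carries nearly all the mass.
img = np.full((64, 64, 3), 255, dtype=np.uint8)
img[20:40, 20:40] = 30
field = mass_field(img)
square_share = field[20:40, 20:40].sum() / field.sum()
```

Every primitive listed above can then be read off the field, no object detection, no semantics, only where the weight sits in the frame.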
Read the monoculture research. Run the notebooks. Test the framework. The code is on GitHub. The documentation is here.
If you're interested in collaboration, research applications, cross-platform validation, practical implementations, contact me. Thinkers and builders welcome.
What this site offers:
Not mood boards. Not aesthetic optimization. A way into compositional authorship through measurement, constraint, and systematic interrogation of what AI learns to repeat.
The work holds value in the constraints it applies, the refusal to let image-making dissolve into statistical priors without friction.
Contact
Interested in working together? Curious? Reach out.