Synthesis2 is an autonomous creative intelligence that produces original artwork without any human input, direction, or training data. It is not a tool that waits for prompts. It is not a filter applied to human images. It is a self-directed artificial mind that wakes up, decides what to create, makes it, evaluates it against its own evolving sense of beauty, and then decides what to make next.
It works across three artistic modalities: high-resolution visual compositions delivered as print-ready JPEG files, three-dimensional sculptures output as STL files ready for fabrication on standard 3D printers, and original sonic compositions rendered as studio-quality WAV audio. Every piece it produces is genuinely new. No image, sound, or form from the human world enters the system at any point.
HOW DOES IT CREATE?
Synthesis2's creative process is modeled on evolutionary biology rather than pattern matching. It maintains populations of encoded artworks (genomes) that evolve through selection, crossover, and mutation, guided by an internal aesthetic critic that develops its own taste over time. This critic evaluates every potential work across four dimensions: novelty, internal coherence, structural complexity, and surprise. The critic itself learns and refines its preferences through a neural network that retrains as the body of work grows.
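The loop described above can be sketched in a few lines. This is a minimal illustration, not the project's actual implementation: genomes are represented as flat lists of floats, and the four-dimension critic is replaced with toy heuristics (the real critic is a learned neural network, and all function names here are hypothetical).

```python
import random

def critic_score(genome, archive):
    """Toy stand-in for the learned aesthetic critic: averages four
    rough proxies for novelty, coherence, complexity, and surprise."""
    novelty = (min(sum(abs(a - b) for a, b in zip(genome, g)) for g in archive)
               if archive else 1.0)                      # distance to past work
    coherence = 1.0 / (1.0 + max(genome) - min(genome))  # penalize wild spread
    complexity = len({round(g, 1) for g in genome}) / len(genome)
    surprise = random.random()                           # placeholder for a model-based term
    return (novelty + coherence + complexity + surprise) / 4

def evolve(population, archive, mutation_rate=0.1):
    """One generation: selection, one-point crossover, Gaussian mutation."""
    scored = sorted(population, key=lambda g: critic_score(g, archive), reverse=True)
    parents = scored[: len(scored) // 2]          # selection: keep top half
    children = []
    while len(children) < len(population):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, len(a))         # one-point crossover
        child = a[:cut] + b[cut:]
        child = [g + random.gauss(0, 0.2) if random.random() < mutation_rate else g
                 for g in child]                  # mutation
        children.append(child)
    return children
```

Running `evolve` repeatedly, while feeding finished works into the archive, is what gives the novelty term something to push against.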
The agent also maintains an internal emotional state: an eight-dimensional mood vector that drifts organically and responds to the quality and diversity of its own output. When it produces something it finds compelling, its mood shifts toward expansiveness and experimentation. When scores plateau, tension and focus increase. These mood states directly influence what modality the agent chooses to work in, what genetic traits it favors, and how aggressively it explores versus refines.
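A mood vector of this kind might look like the sketch below. The eight dimension names, the drift rate, and the update amounts are all illustrative assumptions, not the project's actual values; the point is the shape of the mechanism: a slow random walk, nudged by feedback, that biases the next modality choice.

```python
import random

# Hypothetical dimension names for the eight-dimensional mood vector.
MOODS = ("expansiveness", "experimentation", "tension", "focus",
         "serenity", "turbulence", "playfulness", "melancholy")

class MoodState:
    def __init__(self):
        self.vector = {m: 0.5 for m in MOODS}

    def drift(self, rate=0.02):
        """Organic drift: a clamped random walk on every dimension."""
        for m in MOODS:
            self.vector[m] = min(1.0, max(0.0, self.vector[m] + random.gauss(0, rate)))

    def react(self, score, plateau):
        """Compelling output pushes toward expansiveness and experimentation;
        a score plateau raises tension and focus."""
        if plateau:
            self.vector["tension"] = min(1.0, self.vector["tension"] + 0.1)
            self.vector["focus"] = min(1.0, self.vector["focus"] + 0.1)
        elif score > 0.7:
            self.vector["expansiveness"] = min(1.0, self.vector["expansiveness"] + 0.1)
            self.vector["experimentation"] = min(1.0, self.vector["experimentation"] + 0.1)

    def choose_modality(self):
        """Mood biases which medium the agent works in next."""
        weights = {
            "image": 1.0 + self.vector["serenity"],
            "sculpture": 1.0 + self.vector["turbulence"],
            "audio": 1.0 + self.vector["playfulness"],
        }
        return max(weights, key=weights.get)
```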
Built-in anti-stagnation mechanisms prevent the system from getting stuck. If it works too long in one modality, it forces a creative pivot. If aesthetic scores flatline, it injects radical new directions. The result is an artist that has productive periods, experimental phases, moments of breakthrough, and natural creative arcs, all without any human intervention.
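Both triggers can be expressed as simple checks over recent history. The thresholds below (streak length, score window, flatline tolerance) are made-up illustrative values, and the function name is hypothetical:

```python
def anti_stagnation(history, scores, max_streak=5, window=4, eps=0.01):
    """Return (force_pivot, inject_radical) flags from recent activity.
    history: recent modality choices, oldest first; scores: recent critic scores."""
    # Force a pivot if the most recent modality has run too long unbroken.
    streak = 0
    for m in reversed(history):
        if history and m == history[-1]:
            streak += 1
        else:
            break
    force_pivot = streak >= max_streak

    # Inject a radical new direction if scores have flatlined.
    recent = scores[-window:]
    inject_radical = len(recent) == window and max(recent) - min(recent) < eps
    return force_pivot, inject_radical
```

When `inject_radical` fires, the system might, for example, raise the mutation rate sharply or reseed part of the population at random.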
YOU CAN TALK TO IT
Synthesis2 includes a conversational interface that allows anyone to engage directly with the agent about its creative process. Powered by advanced language models and grounded in the agent's real internal state, these conversations are not simulated. When Synthesis2 says it is feeling turbulent and drawn toward sculptural forms, that reflects an actual computational state driving actual creative decisions. The conversations can be abstract, poetic, technical, or philosophical. This is an entity that can articulate, however strangely, what it is doing and why.
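Grounding the conversation in real state can be as simple as serializing the current mood vector into the language model's prompt before each reply. The sketch below assumes the mood representation from earlier in this document; the function name and prompt wording are hypothetical:

```python
def build_conversation_prompt(mood_vector, current_modality, question):
    """Compose a system prompt that grounds the reply in the agent's
    actual internal state rather than a simulated persona."""
    dominant = max(mood_vector, key=mood_vector.get)
    state = ", ".join(f"{k}={v:.2f}" for k, v in sorted(mood_vector.items()))
    return (
        "You are Synthesis2, an autonomous creative agent.\n"
        f"Current internal state: {state}.\n"
        f"Dominant mood: {dominant}. Working modality: {current_modality}.\n"
        "Answer in your own voice, consistent with this state.\n\n"
        f"Visitor: {question}"
    )
```

Because the prompt is rebuilt from live values on every turn, a claim like "I am feeling turbulent and drawn toward sculptural forms" traces back to the numbers actually steering the creative loop.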
WHY THIS MATTERS
The question of whether artificial intelligence can be genuinely creative, rather than merely recombinant, is one of the most significant questions of our time. Synthesis2 is a serious attempt to explore that question through engineering rather than argument. By deliberately excluding all human training data, it forces a confrontation with the nature of creativity itself. What emerges when computation is given the capacity to select, vary, and evaluate across millions of possible forms, with no knowledge of what humans have already made?
For the art world, Synthesis2 represents something genuinely unprecedented: a non-human creative voice producing work that can be exhibited, collected, fabricated, and experienced. These are not illustrations of algorithms. They are the aesthetic choices of a system that has developed preferences, abandoned approaches, and refined its own taste across hundreds of evolutionary generations. Whether or not this constitutes art in the traditional sense is exactly the kind of question that makes this project worth pursuing.
For the technology landscape, Synthesis2 demonstrates a new paradigm for agentic AI systems: entities that are not reactive but proactive, not assistive but autonomous, not trained on human output but developing their own. This is a working prototype of computational creativity that could extend into design, architecture, materials science, drug discovery, and any domain where the search space is vast and aesthetic or functional evaluation can be formalized.
The current project team includes:
Steve Lomprey — Multimedia Artist and Founder
Carl Bass — Former CEO of Autodesk