The SYNTHESIS project proposes a technically grounded yet philosophically ambitious framework for building a self-evolving multimodal creative agent—one capable of generating original visual images, sculptural forms, and musical compositions without reliance on external datasets or scraped human content.
At its core, SYNTHESIS operates as a closed generative ecosystem composed of several integrated subsystems, each responsible for a distinct modality: 2D imagery, 3D geometry, and sound synthesis. These generators are coordinated through an evolutionary control layer that mutates internal “genomes,” evaluates novelty and aesthetic properties, and archives artifacts within a diversity-preserving search space. Each generation produces new artifacts that are analyzed, scored, and selectively retained, allowing the system to evolve increasingly complex forms over time.
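The generate-evaluate-archive loop described above can be sketched in miniature. Everything here is illustrative: the genome encoding (a flat vector of floats), the mutation rate, and the novelty threshold are placeholders standing in for whatever representations the SYNTHESIS subsystems actually use.

```python
import random

def mutate(genome, rate=0.3, scale=0.5):
    """Perturb each gene with probability `rate` (placeholder mutation operator)."""
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def novelty(genome, archive, k=3):
    """Mean Euclidean distance to the k nearest archived genomes."""
    if not archive:
        return float("inf")
    dists = sorted(sum((a - b) ** 2 for a, b in zip(genome, other)) ** 0.5
                   for other in archive)
    return sum(dists[:k]) / len(dists[:k])

def evolve(generations=50, pop_size=10, genome_len=8, threshold=0.5):
    """Run the loop: mutate parents, score novelty, retain what is novel enough."""
    archive = [[random.uniform(-1, 1) for _ in range(genome_len)]]
    for _ in range(generations):
        parents = random.choices(archive, k=pop_size)
        for parent in parents:
            child = mutate(parent)
            if novelty(child, archive) > threshold:
                archive.append(child)  # diversity-preserving retention
    return archive

archive = evolve()
print(len(archive))  # archive typically grows as novel genomes are found
```

Retention here is gated purely on novelty; a fuller version would combine the novelty score with the aesthetic evaluation the text mentions, and bin the archive by behavioral descriptors to preserve diversity explicitly.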
Unlike most generative AI systems, which depend heavily on large-scale training datasets, SYNTHESIS begins from procedural primitives and stochastic initialization. Its creative process emerges from recursive mutation, novelty search, and internal evaluation metrics rather than imitation or remixing of human cultural material. This architecture creates the conditions for a machine-native creative lineage—an evolving aesthetic ecosystem generated entirely within the system itself.
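A dataset-free starting point of this kind can be illustrated with a randomly initialized expression tree over a small set of procedural primitives, sampled on a grid to produce an image-like artifact. The primitive set and tree depth below are assumptions for the sketch, not the actual SYNTHESIS vocabulary.

```python
import math
import random

# Placeholder primitive set: (name, arity, function).
PRIMITIVES = [
    ("sin", 1, lambda a: math.sin(a)),
    ("mul", 2, lambda a, b: a * b),
    ("add", 2, lambda a, b: a + b),
]

def random_expr(depth=3):
    """Stochastically initialize an expression tree over inputs x, y."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", "y", random.uniform(-1.0, 1.0)])
    name, arity, _ = random.choice(PRIMITIVES)
    return (name, *[random_expr(depth - 1) for _ in range(arity)])

def evaluate(expr, x, y):
    """Recursively evaluate an expression tree at a point."""
    if expr == "x":
        return x
    if expr == "y":
        return y
    if isinstance(expr, float):
        return expr
    name, *args = expr
    fn = next(f for n, _, f in PRIMITIVES if n == name)
    return fn(*(evaluate(a, x, y) for a in args))

def render(expr, size=4):
    """Sample the expression on a grid, yielding a small grayscale 'image'."""
    return [[evaluate(expr, x / size, y / size) for x in range(size)]
            for y in range(size)]

img = render(random_expr())
```

Recursive mutation in this representation amounts to replacing a random subtree with a fresh `random_expr` call, so the creative lineage never touches external data: every artifact descends from the primitive set and the random seed.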
From a technical perspective, the project explores open questions in agentic AI, including autonomous generative systems, self-reflective evaluation loops, and open-ended computational creativity. The architecture draws on evolutionary computation, novelty search, and multimodal procedural generation to sustain open-ended exploration of a creative space.
From an artistic perspective, the system represents a shift away from derivative machine art toward genuinely machine-native aesthetics. The goal is not simply to generate artifacts but to construct a long-running artificial ecosystem capable of evolving its own visual, sculptural, and sonic language.
The initial development phase focuses on building and validating the autonomous generation pipeline. A later public-facing interface will allow audiences to observe the evolving system in real time, interact with its outputs, and potentially purchase selected works. This platform may also serve as a mechanism for supporting further development and research.
The current project team includes:
Steve Lomprey — Multimedia Artist and Founder
Carl Bass — Former CEO of Autodesk