Are you enticed by the dark and the beautiful? Music that is both bombastic and ambient? Then there’s a good chance you will enjoy the work of Kristina and Aleksandr, who create modern generative art and innovative tools that raise the bar on the synergistic possibilities of visuals and sound. Since meeting at Moscow’s Mars Contemporary Art Centre in 2016, they’ve collaborated on a slew of immersive affairs, always up for the challenge of conjuring new things: modular music, generative visuals, and TouchDesigner tools.

“The most important component of any intelligence in this field is how it integrates all aspects,” the duo says of Deep Echo, which is currently up for auction. “We use all these tools to build a learning algorithm that is capable of predicting the best possible output, even in the absence of significant performance gains.”

“The key component of this system is a basic understanding of the real-life data we used to train our neural networks. This is based on real-time data that we have accumulated over time, so that we can predict, for example, what happens when the world looks very different. To find out what the algorithm actually thinks about, we take advantage of the data that was previously collected and test against it.”

By creating mesmerizing digital matter of frighteningly porous frontiers exclusively through TouchDesigner and modular gear, they push the limits of a footage-free, sample-free language that is opulent and breathtakingly singular. Taking as starting points their most irrepressible fascinations with death, the unknown and the cosmos, they craft thrilling, precise, painterly code-art that broaches big philosophical questions and provides mesmerizing though highly speculative answers.

“An idea usually comes out of nowhere; sometimes it’s a fix for a certain problem or a part of technology we’re looking into at the moment. But it always has to be something really interesting. Sometimes we think up a project, sketch and storyboard it, but lack the wherewithal to complete it, and massive installations are hard to sell in Russia.”

The sound is based on electrical signals coming from SOMA Laboratory’s LYRA-8 and PULSAR-23 synthesisers. “By changing signal parameters and routing, we generated a source for training our neural networks,” the duo says. “This also makes it possible for the system to be sensitive to what’s seen. Meanwhile, the visual content is highly focused on details of what we listen to during the training phase.”
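The duo doesn’t publish their code, but the idea of turning a recorded synth signal into network training material can be sketched roughly as follows. Everything here is hypothetical: the synthetic “take” stands in for a LYRA-8 recording, and the framing/feature choices are generic, not the duo’s actual pipeline.

```python
# Illustrative sketch only: converting a modular-synth recording into
# training examples. All names and parameters are hypothetical.
import numpy as np

def frame_features(signal, frame=1024, hop=512):
    """Slice a mono signal into overlapping frames and take magnitude spectra."""
    frames = [signal[i:i + frame] for i in range(0, len(signal) - frame, hop)]
    window = np.hanning(frame)
    return np.array([np.abs(np.fft.rfft(f * window)) for f in frames])

# Stand-in for a recorded take: two slightly detuned voices, 2 seconds at 22.05 kHz.
sr = 22050
t = np.linspace(0, 2.0, 2 * sr, endpoint=False)
take = np.sin(2 * np.pi * 110 * t) + 0.5 * np.sin(2 * np.pi * 113 * t)

X = frame_features(take)
# Each row of X is one training example: "what the synth sounded like
# during this ~46 ms window" -- raw material a network could learn from.
print(X.shape)
```

Changing the synth’s routing or parameters between takes, as the quote describes, would simply produce differently shaped clusters of rows in `X` for the network to distinguish.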

“For the generation, we prepared a specific pipeline to transform signals from the trained networks into the audiovisual piece presented above. The generation process is highly optimized to generate all data in real time.”
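A real-time loop of the kind the quote gestures at might look something like this minimal sketch: a (stand-in) trained model advances a latent state each tick, and both the audio buffer and the visual parameters are derived from that shared state. The weights, oscillator bank, and parameter names are all invented for illustration; nothing here reflects the duo’s actual implementation.

```python
# Hypothetical real-time generation loop: one latent state drives both
# sound and image, so the two stay coupled. All details are invented.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 16)) * 0.1  # stand-in for trained weights

def step(state):
    """One tick: advance the latent state, derive audio and visuals from it."""
    state = np.tanh(W @ state)                 # latent update
    audio = np.sin(np.cumsum(state[:8]))       # toy oscillator bank
    visual = {"hue": float(state[8] % 1.0),    # parameters a TouchDesigner
              "bright": float(abs(state[9]))}  # patch could consume
    return state, audio, visual

state = rng.standard_normal(16)
for _ in range(3):                             # three ticks of the render loop
    state, audio, visual = step(state)
print(audio.shape, sorted(visual))
```

Because audio and visuals read from the same state vector, a change in the sound necessarily shows up in the image, which matches the duo’s emphasis on the system being “sensitive” across both channels.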

In the piece below, which is currently being auctioned on the NFT marketplace SuperRare, the duo constructs a ghostly digital simulacrum of David’s head, while the sound is generated in real time with modular synthesisers. The work poses interesting questions around authenticity in the age of NFTs, which exist only digitally but provide proof of ownership.