
Just when you think we've seen every wild experiment that can be done with AI image generators, along comes another one to top the last. A YouTuber wanted to test whether generative AI can replicate the effects of psychedelic drugs, and the ambitious experiment mushroomed (sorry) beyond what he imagined.
He went to the trouble of training his own model and overcame various challenges along the way to finally produce results that are fairly convincing. AI may not be able to think, but it seems it may be possible for it to generate something like the experiences that the human mind can go through.
In the video above, Gal Lahat explains how he set out to explore parallels between the human brain and artificial neural networks by trying to simulate the effect of psychedelics on AI. He begins with a problem: the brain's recollections of images are based on meaning, not pixel values, and it's this that shapes the distortions humans see after taking psychedelics. How could that be simulated in an AI that doesn't understand meaning?
His first idea was to use OpenAI's CLIP to teach an autoencoder to see the world in a more semantic way – with a little help from the open-source SDXL to make training an AI model on large images viable on his computer. He then tried distorting the model parameters to see if the model could produce things that looked psychedelic, and he experimented with injecting noise into the latent space to simulate "drifting inaccuracies" that might happen in the brain when taking psychedelics.
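The noise-injection idea is easy to illustrate in miniature. The sketch below is not Gal's code – it uses a toy linear autoencoder with random weights rather than his CLIP-guided model – but it shows the basic move: encode an input, repeatedly perturb the latent vector with Gaussian noise to mimic accumulating "drifting inaccuracies", then decode.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "autoencoder": project to a small latent space and back.
# (A stand-in for the CLIP-guided autoencoder; these weights are random.)
IMG_DIM, LATENT_DIM = 64, 8
W_enc = rng.standard_normal((LATENT_DIM, IMG_DIM)) / np.sqrt(IMG_DIM)
W_dec = np.linalg.pinv(W_enc)  # decoder approximates the encoder's inverse

def encode(x):
    return W_enc @ x

def decode(z):
    return W_dec @ z

def psychedelic_drift(x, strength=0.3, steps=5):
    """Inject Gaussian noise into the latent vector over several steps,
    simulating small inaccuracies that drift and accumulate."""
    z = encode(x)
    for _ in range(steps):
        z = z + strength * rng.standard_normal(z.shape)
    return decode(z)

x = rng.standard_normal(IMG_DIM)      # stand-in for an image
clean = decode(encode(x))             # ordinary reconstruction
drifted = psychedelic_drift(x)        # reconstruction with latent drift

# The drifted output strays further from the input than the clean one.
err_clean = np.linalg.norm(clean - x)
err_drift = np.linalg.norm(drifted - x)
print(err_clean < err_drift)  # True
```

In a real model the decoder maps the perturbed latent back to a plausible image, so the errors show up as semantic distortions – melting shapes and shifted objects – rather than pixel static.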
He showed the results to people with experience of psychedelics, and they were impressed by how much the output resembled things they had seen. It's an intriguing experiment. Should we be surprised? AI imagery has long been compared to the stuff of hallucinations and nightmares. This gives new meaning to the concept of AI hallucinations.
AI infinite zoom
Gal has carried out several other interesting experiments in AI image generation, including AI-generated infinite zoom. We know that AI image generators can expand images outwards: outpainting, as DALL-E calls it, or Generative Expand in Photoshop, can 'uncrop' an image, adding new detail outside the frame that matches the look of the rest of the image. But I'd never considered whether AI could also add depth to an image.
Gal explored that by upscaling images and then zooming in on them, beyond the point where you would expect the image to break down into pixels. He's unable to control where he zooms in, so sometimes the image just becomes boring if it zooms into an area of plain colour. And testing the idea on images of space reveals... well, just more space.
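The zoom loop itself is a simple procedure: crop the centre of the frame, blow it back up to full size, and repeat. The sketch below uses plain nearest-neighbour upscaling, so it only magnifies existing pixels; the generative version swaps that resize for an AI upscaler, which is where the new (and sometimes weird) detail comes from.

```python
import numpy as np

def zoom_step(img, zoom=2):
    """One zoom iteration: crop the central 1/zoom of the image, then
    upscale it back to the original size by repeating pixels. A generative
    upscaler would instead invent plausible new detail at this step."""
    h, w = img.shape[:2]
    ch, cw = h // zoom, w // zoom
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    return np.repeat(np.repeat(crop, zoom, axis=0), zoom, axis=1)

img = np.arange(64 * 64).reshape(64, 64)  # stand-in for an image
frames = [img]
for _ in range(4):                         # four successive zoom steps
    frames.append(zoom_step(frames[-1]))
print(len(frames), frames[-1].shape)       # 5 (64, 64)
```

Played back in sequence, the frames give the infinite-zoom effect; with pixel repetition the image degrades into blocks after a few steps, which is exactly the limit a generative upscaler sidesteps.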
But some urban scenes reveal surprising (or just plain weird) details generated by the AI in unexpected places in the image. It reminds me of an artist's infinite zoom art that we wrote about recently.