ARCHIVE DREAMING

Created: 2017
Client: Refik Anadol Studio

In 2017, Refik Anadol's Archive Dreaming was covered all over the art world and the related press. I am really happy that an artwork I realized in large part became so successful.
It was exhibited at the SALT Gallery in Istanbul, then at the Ars Electronica Festival 2017. It was also presented at SIGGRAPH 2018, and in 2019 it was installed in Beijing, where it won an award.

Many things make this piece special. The exhibition space itself is cylindrical, with a 360° projection driven by four projectors. The floor and ceiling of the space are mirrored, which creates the intense impression of an endless tunnel. Visitors navigate the exhibit and the immersive space using a Windows Surface tablet. There are several modes: some play back videos produced by Refik Anadol Studio, and one lets you fly through and explore the vast image archive of the SALT Gallery in Istanbul.


Behind the Scenes

Disclaimer: it is getting very technical now!

1.2 million images were classified and sorted into a 3D cloud using the t-SNE dimensionality reduction method. I received that data and was tasked with the realization of the exhibit: the interaction, the projection mapping, the software logic, and the implementation of the image cloud.
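To make that data concrete, here is a minimal sketch of what a per-image record from such a pipeline might look like. The field names and layout are my own illustration, not the actual format used in the project:

```cpp
#include <cstdint>

// Hypothetical per-image record: an archive/atlas ID plus the 3D position
// produced by the t-SNE reduction.
struct ImageRecord {
    uint32_t id;      // index into the archive and the texture atlases
    float    x, y, z; // position in the 3D cloud
};

// At 16 bytes per record, all 1.2 million records take only ~19 MB and can
// be uploaded once to a GPU buffer that drives the rendering of the cloud.
```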

I could not find much material on techniques for visualizing 1.2 million images in real time. To my knowledge it had not been done before, and for a good reason: the hardware for that task simply had not been available. Only the latest NVIDIA GPUs had enough memory for it.

I also had a rough idea of how to implement it, based on knowledge I had gathered previously by reading papers about how game engines manage vegetation rendering. It was obvious that a similar level-of-detail (LOD) approach would be needed, along with a clearly defined way of structuring the data.
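As an illustration of the idea, distance-based LOD selection could look like this sketch; the threshold and level names are made up, not the values used in the installation:

```cpp
// Pick a detail level by distance to the viewer, similar in spirit to how
// game engines choose vegetation detail. The threshold is illustrative only.
enum class Lod {
    AtlasTile,  // tiny 32x32 tile from the preloaded texture atlases
    FullRes     // high-resolution image streamed in on demand
};

Lod selectLod(float distanceToViewer) {
    return (distanceToViewer < 10.0f) ? Lod::FullRes : Lod::AtlasTile;
}
```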

So I prepared texture atlases containing tiny versions of the images and wanted to upload those atlases to the GPU as texture arrays. I noticed that uploading everything at once would crash the driver, as there were too many layers. However, I figured out that I could upload multiple smaller texture arrays step by step and bind them to different texture array variables in the shader.
With that, the cloud worked with 1.2 million 32×32-pixel images.
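The following sketch shows what that chunked upload could look like, assuming OpenGL (the text does not name the graphics API, and all sizes and names here are illustrative):

```cpp
#include <cstdint>
#include <vector>
#include <GL/glew.h>

// Create several smaller GL_TEXTURE_2D_ARRAYs instead of one giant one.
// A 2048x2048 atlas page holds 64x64 = 4096 tiles of 32x32 pixels, so
// 5 arrays x 64 layers x 4096 tiles cover more than 1.2 million images.
const int kAtlasSize      = 2048;
const int kLayersPerArray = 64;
const int kNumArrays      = 5;

std::vector<GLuint> createTileArrays(const std::vector<const uint8_t*>& atlasPages) {
    std::vector<GLuint> arrays(kNumArrays);
    glGenTextures(kNumArrays, arrays.data());
    for (int a = 0; a < kNumArrays; ++a) {
        glBindTexture(GL_TEXTURE_2D_ARRAY, arrays[a]);
        glTexStorage3D(GL_TEXTURE_2D_ARRAY, 1, GL_RGBA8,
                       kAtlasSize, kAtlasSize, kLayersPerArray);
        // Upload one atlas page at a time so no single allocation or
        // transfer is too large for the driver.
        for (int layer = 0; layer < kLayersPerArray; ++layer) {
            glTexSubImage3D(GL_TEXTURE_2D_ARRAY, 0, 0, 0, layer,
                            kAtlasSize, kAtlasSize, 1,
                            GL_RGBA, GL_UNSIGNED_BYTE,
                            atlasPages[a * kLayersPerArray + layer]);
        }
    }
    return arrays;
}

// In the shader the arrays are bound to separate variables, e.g.
//   uniform sampler2DArray tiles0, tiles1, tiles2, tiles3, tiles4;
// and the tile ID selects the array, the layer, and the UV offset within it.
```

At RGBA8 this works out to roughly 5 GB of texture memory, which is consistent with needing the highest-memory GPUs of the time.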

When a user moves closer to an image, its high-resolution version is loaded dynamically. For this I use a compute shader to extract the IDs of the images close to the viewer.
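Below is a minimal sketch of such a compute shader, assuming OpenGL 4.3 GLSL; the actual API and buffer layout in the piece may differ, and all names here are illustrative:

```cpp
// GLSL compute shader that compacts the IDs of all images within a radius
// of the viewer into a buffer the CPU reads back to decide which
// high-resolution files to load.
const char* kNearbyIdsShader = R"(
#version 430
layout(local_size_x = 256) in;

layout(std430, binding = 0) readonly buffer Positions { vec4 pos[]; }; // xyz = cloud position
layout(std430, binding = 1) buffer Nearby { uint count; uint ids[]; }; // compacted output

uniform vec3  viewerPos;
uniform float radius;
uniform uint  imageCount;

void main() {
    uint i = gl_GlobalInvocationID.x;
    if (i >= imageCount) return;
    if (distance(pos[i].xyz, viewerPos) < radius) {
        uint slot = atomicAdd(count, 1u);       // reserve an output slot
        if (slot < ids.length()) ids[slot] = i; // record the image ID
    }
}
)";

// Host side (per frame or throttled): zero `count`, call
// glDispatchCompute((imageCount + 255) / 256, 1, 1), issue
// glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT), read the buffer back,
// and stream in the high-resolution images for the listed IDs.
```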

There was not as much time as I would have liked for additional optimizations and visual improvements. But later I had the chance to improve the image cloud further, and obviously people liked it anyway.