AMT Lab @ CMU

Artificial Intelligence and the Museum Space

Today, artificial intelligence is one of the hottest fields—not only for big companies such as Google and Facebook, but also for startups and research labs around the world. In the complicated and diverse field of AI, the specific disciplines of machine learning, deep learning, computer vision and pattern recognition are regularly mentioned. When these AI disciplines meet museums, what data can they utilize and what tasks can they complete?

Machine Learning: Analyzing Collections

As early as 2008, AI expert Eric Postma was answering questions about how computer algorithms could be used as new analytical tools to authenticate artworks, offering an alternative to traditional technical methods such as analyzing the chemical properties of pigments. The algorithms were trained on “a sufficiently large set of paintings” from the Van Gogh Museum and gathered evidence by examining key visual properties such as brushstroke.
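
Postma’s actual methods are more sophisticated, but the core idea can be sketched in a few lines: extract texture statistics that stand in for brushstroke characteristics, then train a classifier on paintings whose status is known. The following is a minimal sketch assuming scikit-image and scikit-learn; the file names, labels, and the choice of local binary patterns as a feature are hypothetical illustrations, not the study’s actual pipeline.

```python
# Minimal sketch: texture features as a stand-in for brushstroke analysis.
# File names, labels, and the feature choice are hypothetical.
import numpy as np
from skimage import io, color, feature
from sklearn.svm import SVC

def texture_features(path):
    gray = color.rgb2gray(io.imread(path))
    # Local binary patterns summarize fine-grained texture such as stroke marks.
    lbp = feature.local_binary_pattern(gray, P=8, R=1.0, method="uniform")
    hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return hist

paths = ["attributed_01.jpg", "attributed_02.jpg", "questioned_01.jpg"]
labels = [1, 1, 0]  # 1 = securely attributed, 0 = questioned
X = np.array([texture_features(p) for p in paths])
clf = SVC().fit(X, labels)  # learns to separate the two groups
print(clf.predict(X[:1]))   # classify a painting from its texture profile
```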

In simple terms, machine learning is a type of artificial intelligence that learns from data to find patterns in it and make predictions about it. The data can be numbers, images, voice, or any other kind of information. Examples most people are familiar with are Google’s search algorithm and Apple’s Siri. The recent “computer paints a Rembrandt” project is another application of the same principle.
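
The learn-then-predict loop at the heart of machine learning can be illustrated in just a few lines. This toy sketch assumes scikit-learn and uses made-up numbers:

```python
# A toy illustration of "learn from data, then predict": fit a model to
# example input-output pairs, then query it on an unseen input.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4]]  # inputs
y = [10, 20, 30, 40]      # observed outputs
model = LinearRegression().fit(X, y)  # the "learning" step
print(model.predict([[5]]))           # predicts roughly 50
```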

One of the basic functions of machine learning is classification: sorting raw data into pre-defined categories. A simple example is using Google’s TensorFlow to classify an uploaded image against the 1,000 categories of ImageNet, an image database maintained at Stanford University. Using the same idea, Babak Saleh and colleagues at Rutgers University in New Jersey classified paintings and found connections between them.
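
For readers who want to try this themselves, the sketch below classifies an image against the 1,000 ImageNet categories using a pretrained network, assuming TensorFlow 2.x; the image file name is a hypothetical placeholder, and the exact model in Google’s own tutorial may differ.

```python
# Classify an image into the 1,000 ImageNet categories with a pretrained model.
# Assumes TensorFlow 2.x; "painting.jpg" is a hypothetical local file.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications.mobilenet_v2 import (
    MobileNetV2, preprocess_input, decode_predictions)

model = MobileNetV2(weights="imagenet")  # weights trained on ImageNet

img = tf.keras.utils.load_img("painting.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), 0))

preds = model.predict(x)
for _, label, score in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {score:.2f}")  # the three most likely categories
```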

Computer Vision: Body-Based Group Tracking

At the 2016 Museums and the Web conference, Randy Illum from UCLA presented the OpenPTrack project. The basic idea of the project is to turn the human body into an interactive medium, either for artists creating artworks or for museum curators designing programs, especially for large groups of people. For example, a group of visitors can play soccer together on an interactive screen using their real movements and a virtual ball. No VR gadget is needed for the immersive experience: all of the physical action is captured by sensors and cameras and projected onto the screen. In one demonstration, students’ movements are tracked by cameras and infrared detectors so that cartoon figures on the screen interact with them accordingly. OpenPTrack is an open-source technology platform, and its researchers hope to find “new uses of this open source system by others and provoke dialogue on the role of body-based interactive content in the future of informal learning spaces within museums and other cultural institutions.”

The technology that supports the project is computer vision, another form of artificial intelligence, which Microsoft defines as the effort “to make computers efficiently perceive, process, and understand visual data such as images and videos.” According to the developers, computer vision makes real-time person tracking possible through improved multi-modal image and point cloud processing, and it delivers the stability and responsiveness “necessary to drive interactive applications” at low cost.
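
OpenPTrack’s multi-camera, point-cloud pipeline is beyond a short example, but the underlying idea of vision-based person detection can be sketched with OpenCV’s built-in HOG people detector, assuming a webcam is available:

```python
# Minimal sketch of vision-based person detection with OpenCV's built-in
# HOG + SVM people detector. OpenPTrack itself uses a more sophisticated
# multi-camera, point-cloud pipeline; this only illustrates the idea.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Detect people; each box is (x, y, width, height) in pixels.
    boxes, _ = hog.detectMultiScale(frame, winStride=(8, 8))
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("people", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```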

The data gathered from cameras and infrared detectors can also be combined with CAD drawings to visualize museum traffic, allowing museum managers to understand what attracts visitors most (see this video for a demonstration).
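
As a sketch of how such a visualization might be built, the snippet below bins tracked (x, y) positions into a heatmap and overlays it on a floor-plan image exported from CAD. It assumes NumPy and Matplotlib; the file names and gallery dimensions are hypothetical placeholders.

```python
# Sketch: turn tracked (x, y) positions into a traffic heatmap over a
# floor plan. File names and coordinate bounds are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

positions = np.loadtxt("tracked_positions.csv", delimiter=",")  # rows of x, y in meters
floor_plan = plt.imread("floor_plan.png")  # exported from the CAD drawing

# Bin positions over a hypothetical 40 m x 25 m gallery footprint.
heat, xedges, yedges = np.histogram2d(
    positions[:, 0], positions[:, 1],
    bins=(80, 50), range=[[0, 40], [0, 25]])

plt.imshow(floor_plan, extent=[0, 40, 0, 25])
plt.imshow(heat.T, extent=[0, 40, 0, 25], origin="lower",
           cmap="hot", alpha=0.5)  # overlay dwell density on the plan
plt.colorbar(label="visits per cell")
plt.show()
```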

Deep Learning: Filling in Filmic Gaps

In 2015, Google’s DeepStereo project turned existing and synthesized images into smoothly running film of the Borghese Gallery and Museum, the Acropolis Museum, and other Street View scenes. The point is not simply stringing still images into a film, but making the film run smoothly with only a limited number of images. Google used deep learning to learn from the two images actually available and synthesize a third, in-between view, so that the three together play back as smooth film. In other words, deep learning fills in the frames that are missing when a film is assembled from still images.
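
DeepStereo’s actual architecture (plane-sweep volumes with separate color and selection towers) is far more involved, but the idea of learning to predict an in-between frame from its two neighbors can be sketched with a small convolutional network, assuming TensorFlow 2.x:

```python
# Toy sketch of frame interpolation: a small convolutional network that
# maps two neighboring frames to the missing in-between frame. This only
# illustrates the idea; it is not DeepStereo's architecture.
import tensorflow as tf
from tensorflow.keras import layers

def build_interpolator(h=128, w=128):
    # Input: two RGB frames stacked along the channel axis (6 channels).
    inputs = tf.keras.Input(shape=(h, w, 6))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
    # Output: the predicted middle frame (3 channels, pixel values in [0, 1]).
    outputs = layers.Conv2D(3, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, outputs)

model = build_interpolator()
model.compile(optimizer="adam", loss="mae")
# Training would pair stacked (frame_t, frame_t+2) inputs with frame_t+1
# targets: model.fit(stacked_pairs, middle_frames, ...)
```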

Deep learning is a form of machine learning, “loosely modeled on the way neurons and synapses in the brain change as they are exposed to new input.” Recently, Google developed a related capability that makes its Street View products even more fascinating: determining the location of almost any image.

Museums, as significant public spaces with advanced facilities and heavy visitor traffic, are desirable laboratories for researchers around the world, and museum collections contribute to the data pool that many data analysts are eager to explore. The technology world keeps a close eye on museums. As IBM’s researchers work to make the machine-learning computer Watson a knowledgeable tour guide in India, how will museums embrace the future with artificial intelligence?