News that Italy is pointing AI-enhanced cameras at museum visitors has caused a flurry of despair on surveillance ethics Twitter. Here’s what’s happening: The Italian National Agency for New Technologies, Energy, and Sustainable Economic Development (ENEA) is placing cameras next to certain pieces of art at the Istituzione Bologna Musei. ENEA wants to know how long visitors spend looking at the art and where they focus their attention. Researchers are looking for “attraction value” that might help museum curators optimize gallery layouts, lighting, and exhibit schedules.
At long last, while you are studying a piece of art, the museum can study you.
Ok, but, wait. In fairness, studying the behavior of museum visitors is not new. Visitor studies is an established profession, complete with its very own academic journal. But until now, humans have been the ones observing (and often interviewing) museum visitors. My question today is not whether studying museum visitors is ethical, because I think it is. Organizations across industries use perfectly ethical tools (surveys, interviews, secret shoppers, etc.) to better understand their customers’ needs. Instead, I am curious about whether using AI cameras is ethically different from having humans do the same task.
Surveillance Ethics, and the Myth of Privacy
Whenever someone points a camera at you, you lose your privacy. You lose your ability to exclude yourself from the view of others. “Meh,” you may say. “By that definition, I lose my privacy whenever I go out in public.” That’s true! What has changed is the scale at which cameras have compromised our privacy. From cellphones to security cameras to satellites, cameras are everywhere. Where we were once only visible to people in our immediate social circles, we are now potentially visible to billions. And as many a viral video victim can attest, all this recording can happen without our knowledge or consent.
And yet, privacy is still a big part of the “big data” debate in tech. People give up a lot of personal data in order to get a free service in return. Yet, we also want some anonymity out of the deal. ENEA claims its system uses “simple data elaboration” to turn a visitor’s likeness and behavior into a graphic. This, they say, protects visitors’ privacy. This may be enough for a lot of people. But if you paid for your ticket with a credit card, the museum knows you were there. And the security cameras in all of the exhibition rooms are not de-identifying you. Whether human or AI, researchers know who you are, or can find out with minimal effort. Privacy in this context operates on the honor system.
We Are All Different
ENEA is also collecting data on gender, age, and mood. Even if the researchers never see your face, they are still documenting fairly specific information about you. This, of course, would be equally true if human beings were the ones collecting the data rather than a camera. In either case, the information is likely wrong.
We now know that the expression on your face has little to do with your emotional state. And in museums that draw large international audiences, like those in Italy, how can researchers be sure they are drawing accurate conclusions when not all cultures express themselves in the same way?
Research Surveillance as the New Price of Entry
I also wonder whether visitors have a choice about being a research subject. Let’s say the museum limits its camera traps to a single room. It has also put up plenty of signs alerting visitors to the study. This seems like a good compromise. If you don’t want to be a research subject, you just don’t go into that room. But what if the artwork you really, really want to see is in the surveillance room? You have traveled far and paid the museum entrance fee, only to discover you must now submit to being studied.
If the researchers were doing this work in person, a visitor could decline to participate and still have full access to the museum. Will agreeing to being a research subject be the new price of entry to see art?
Engagement ≠ Quality
Just because people are engaging with something does not mean that thing is of high quality. This may be less of an issue in an art gallery, where every piece is curated by experts. Yet, art appreciation is subjective. What I love (impressionism) may be very different from what you love (realism, say). If this system identifies a piece of art that a lot of visitors look at, do we elevate that piece at the expense of masterpieces with less “attraction value”?
On the other hand, what if this research does result in a better experience for visitors? What if knowing where visitors look – or more importantly where they don’t look – can help a museum elevate overlooked artists? Curators can still make the popular pieces available, while also ensuring visitors notice other worthy pieces. The system has already identified that moving a statue of Apollo of Veii may result in more people appreciating it. The statue had been placed at the end of a long exhibit. By the time visitors got to it, they were too fatigued to bother looking. I will posit that human beings could have discovered the same thing without the cameras.
Surveillance Ethics and the ShareArt System
Technology only does what it is designed to do. The ShareArt cameras are not designed for art appreciation. They are designed to record what happens in front of them. This technology could easily make its way into supermarkets or clothing stores to help companies optimize their sales floors. Or in cars so that vehicle manufacturers can study driving behavior. At what point is the monitoring invasive or harmful to your wellbeing?
In the end, the ethics of surveillance changes with every new use. In this case, we have two kinds of cameras (security cameras and ShareArt cameras) pointed at visitors for two different reasons.
Let’s ask ourselves three things:
- Are the cameras necessary? The argument in favor of security cameras is that they help keep the art and the museum patrons safe. If someone vandalizes or steals art, it is better for all of us if we can identify who did it. Additionally, security cameras are watching everyone in the museum. The ShareArt cameras are watching specific people in specific contexts. Neither the safety of the patrons nor of the art is at stake.
- Is there a feasible alternative? The feasible alternative in both cases is using humans. Museums do often double up on humans and cameras for safety. After all, a camera cannot tell a visitor not to touch the art. It is possible to do the same for the research study, but it is not immediately clear to me why that would be advantageous.
- Can I refuse the surveillance and still do the thing I am here to do? In both cases, no.
Aiming cameras at visitors for the purpose of studying their behavior is indeed ethically different. I encourage ENEA to reconsider the use of the ShareArt cameras in museums. They should rely instead on the wealth of expertise in the visitor studies profession. We should limit the monitoring to security purposes and allow visitors to appreciate art on their own terms.