Safe Products

What do you think of when you are told that a new product is “safe”? 

I suspect that most of us go right to physical safety. Has the product been tested to make sure it won’t cause physical harm? 

For example, autonomous driving is in its infancy. And it is not yet as safe as it needs to be.


 
Screenshot of journalist Kara Swisher’s Tesla crash post on Threads. The post includes two photos of a Tesla crashed into a store, with text that reads: “In honor of today's robotaxi rollout by Tesla, one of its cars - which the driver said was on autopilot - drove into my neighborhood store in SF this week.”

On October 10th, tech-focused journalist Kara Swisher posted photos of a Tesla in self-driving mode that crashed into a store down the street from her house. Incidents like these, along with the technical issues that bedevil autonomous taxis, have kept public opinion of self-driving cars generally low. 

Another recent rollout is Meta’s Smart Glasses, made in collaboration with Ray-Ban. These glasses are “made smarter with Meta AI” and offer users the ability to ask questions of AI, take photos and videos, make calls, listen to music, and livestream.  

When you consider the safety of tech-enabled glasses, you might wonder: Will it hurt my eyes? Will the electromagnetic radiation be bad for my health? Will using the augmented glasses make it hard for me to walk or bike safely? 

But what Meta and Ray-Ban do not seem to have planned for is additional functionality that gives Smart Glasses users dangerous access to personal information about the people they are looking at. 

Two Harvard students, AnhPhu Nguyen and Caine Ardayfio, combined facial recognition software with Meta’s Smart Glasses. Using the glasses to look at strangers allowed the software to pull up their names, addresses, and other kinds of personal information. In real time. 


 

Still from AnhPhu Nguyen’s facial recognition demonstration video on X

A video posted by Nguyen on September 30 shows the modified glasses in action. Fellow riders on the subway are identified by name, workplace, publications, and social media posts. Nguyen and Ardayfio walk up to them and claim to already know them, based on the information they’ve just gotten from the glasses. And they are believed. Why wouldn’t they be? 

On the one hand, this isn’t standard functionality for the Meta Smart Glasses. 

On the other hand, it took less than a year from the release of these glasses to the development of this doxing technology.   


When designing technology for the general public, it is important to take into account the needs and experiences of the different kinds of people who make up society. This includes accessibility needs, everyday use needs, and safety needs. 

The principles of inclusive language are also principles of inclusive design — and that’s because inclusive language is designed like a product, anticipating obstacles and variables and bypassing danger zones.  

Most relevant to Smart Glasses are Principle 4, Incorporate other perspectives, and Principle 6, Recognize pain points.  

And always relevant is Principle 1, Reflect reality. 


The reality is that the world is a lot less safe for some people than others. Some people can walk through the world without really fearing for their physical safety. Hotels, parking garages, nighttime streets — they are just places, not places where you’re on high alert. 

Other people face considerably more danger. For example, girls and women are always in danger of sexual assault and physical violence. And here in the US, public spaces are also less safe for Black people and perceptibly Muslim people and perceptibly LGBTQ+ people and people with stalkers and people trying to hide from violent domestic partners and many, many other kinds of people. 

The developers of Smart Glasses did not incorporate the perspectives of these people. And the developers of Smart Glasses did not recognize their pain points, did not take into consideration how painful it is to be in danger of assault — and worse, how painful it is to actually be assaulted. For some people, painful to the point of death.  

By not building in protections against doxing, the developers of the Smart Glasses have created a product that is not safe. Not in the sense of “will these glasses cause physical harm to my body when I use them?” But in the sense of “will other people be harmed by their use?” 

Tech is famous for not taking into consideration the needs and user experiences of many members of the general public. For example, social media platforms like Instagram and ex-Twitter are famously unsafe for teens, female people, people of color, LGBTQ+ people, and more.  

Being clear and precise about what it means for a product to be “safe” is an important component of responsible and conscientious product development. 



