Google has adopted, in Brazil, a policy to combat harassment and gender violence in its virtual personal assistant, Google Assistant. As in other countries, whenever the user asks aggressive questions or uses offensive terms, the device responds accordingly. Depending on the degree of offensiveness, the response may be humorous, instructive, or peremptory. “Respect is fundamental in all relationships, including ours” and “Don’t talk to me like that” are two examples users may now hear more often. The phrase used until now, “Sorry, I didn’t understand what you said,” will be reserved for other contexts.
According to Google, about 2% of the personal questions asked of Google Assistant contain abusive or inappropriate terms. One in six insults is directed at women, whether through expressions of misogyny or sexual harassment. The red voice, which has a female tone, receives more comments and questions about physical appearance. The orange, male-toned voice receives a large number of homophobic comments: nearly one in ten recorded offenses.
In Brazil, the robotic voice won’t say “please” when delivering phrases like “Don’t talk to me like that”. According to Maia Mau, an executive in Google’s marketing department, the decision rests on the premise that treating others with respect is not a favor. “That’s why we wanted to reinforce this aspect,” she said during the presentation of the initiative at the company’s headquarters in São Paulo.
In Brazil, updating the responses involved a process of review and adaptation to assess the meanings certain words or expressions can convey; the word “dog” (“cachorro”), for example, is also used as an insult in Portuguese. Another challenge was differentiating terms whose meaning depends on context. If a person uses the word “faggot” instead of “gay” or “homosexual”, the assistant will warn that the term may be offensive.
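Google has not disclosed how this works internally, but the behavior the article describes can be pictured as a tiered lookup: terms are mapped to an offensiveness level, ambiguous words trigger a context check, and the tier selects the reply. The sketch below is purely illustrative; the term lists, tiers, helper function, and wording are hypothetical stand-ins, not Google’s actual data or code.

```python
# Illustrative sketch only: a tiered response picker for abusive terms.
# All term lists, tiers, and replies are hypothetical, not Google's.

AMBIGUOUS = {"dog"}   # offensive only in some contexts (cf. "cachorro" in pt-BR)
MILD = {"stupid"}     # draws an instructive reply
SEVERE = {"faggot"}   # draws a firm reply plus a warning about the term

RESPONSES = {
    "mild": "Respect is fundamental in all relationships, including ours.",
    "severe": "Don't talk to me like that.",
    "warn": "That word can be offensive to some people.",
}

def directed_at_assistant(utterance: str) -> bool:
    # Stand-in for a real context model; here, a crude second-person check.
    return "you" in utterance.lower().split()

def classify(utterance: str) -> str:
    """Return a response tier for the given user utterance."""
    words = set(utterance.lower().split())
    if words & SEVERE:
        return "severe"
    if words & MILD:
        return "mild"
    if words & AMBIGUOUS and directed_at_assistant(utterance):
        # An ambiguous word counts as abuse only when aimed at the assistant.
        return "mild"
    return "neutral"

def respond(utterance: str) -> str:
    tier = classify(utterance)
    if tier == "severe":
        return RESPONSES["severe"] + " " + RESPONSES["warn"]
    if tier == "mild":
        return RESPONSES["mild"]
    return "Sorry, I didn't understand what you said."

if __name__ == "__main__":
    print(respond("you are a dog"))    # respectful pushback
    print(respond("what time is it"))  # normal fallback
```

A rule-based table like this matches the “manual” process described below; the AI step Google mentions would presumably replace the fixed term sets with a learned classifier.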
Brazil was the first country in Latin America to go through this process; other regions will soon adopt the same policies. “We are already working on surveying the expressions most heard by the assistants in each of the markets,” Maia told VEJA. For now, the process is “manual”, but the expectation is to apply artificial intelligence to identify additional offensive terms and target specific responses to them.
The construction of this new Google Assistant policy began in 2019, inspired by the report I’d Blush if I Could, produced by the United Nations Educational, Scientific and Cultural Organization (Unesco). The document warned that the use of female voices and the submissive behavior of virtual assistants help perpetuate gender bias. The first phase of the project was implemented last year in the United States and prioritized creating responses to the most frequently reported abuses, in this case offenses and inappropriate terms aimed at women. Responses to racist and homophobic abuse were launched next.
Attending the event remotely, Arpita Kumar, content strategist for Google Assistant, said that over the course of testing, positive responses grew by 6%: people who, after receiving more incisive replies to offenses, began to apologize or ask why. “The positive responses were also a big sign that people wanted to better understand why Assistant was pushing certain types of conversations away. The sequences of these conversations became gateways to delving into topics like consent,” she said.