This article was originally posted on trust.guidestar.org and is reprinted here with permission.
Imagine your best friend Holly is in a horrible car accident and tragically dies. As grief overwhelms you and tears run down your cheeks, you pull out your smartphone. You send Holly a text telling her how much you miss her. Within seconds she responds with a funny cat meme—just like she always did. You start to laugh. After many more back-and-forth texts, it feels like Holly is still alive. Or is she?
Chatbots are computer programs designed to simulate conversation with people and minimize human intervention in delivering information. The “Age of Automation” will impact every area of our lives, including civil society. As of April 2017, 100,000 Facebook Messenger bots had reached more than 2 billion users.
Opportunities to converse with chatbots are growing, whether through our smartphones, tablets, home appliances, virtual personal assistants, or our cars. In fact, by 2020, according to Gartner analysts, “The average person will have more conversations with bots than with their spouse.”
We will have nonstop interactions with both functional bots designed to provide information and others designed to offer companionship. In other words, that long-awaited science fiction moment has arrived—the bots are here.
Replika is a chatbot that collects data about your moods, preferences, and patterns of speech until it starts to feel like you are texting with your alter ego or a “replica” of yourself. Eugenia Kuyda built Replika to memorialize her friend who died in an accident in 2015. The chatbot synthesized thousands of messaging conversations until eventually it could reply in a way that sounded convincingly like Kuyda’s companion.
The Replika bot can be therapeutic, help you keep a journal, and even be entertaining. However, it isn’t even a dutiful virtual assistant like Julie, who can schedule your meetings. Replika doesn’t really have a purpose. It’s just a virtual friend, one step up from a pet rock because it can chat with you. And it gives us a glimpse of what our future relationships with AI tools may become.
Not surprisingly, enthusiastic techtopians are proclaiming all the wonderful ways that robots are going to help us. However, a fuller examination of the darker side of the bots is also required, because they are going to change our relationship with society and with one another. And anything that affects our humanity and our relationships has profound implications for the nonprofit sector.
Some leading-edge organizations are already putting the bots to work. For instance, the San Francisco Museum of Modern Art has 34,678 items in its collection. A patron would have to walk 121 miles to see them all! To share its entire collection with art lovers, the museum created the “Send Me” bot, which allows anyone to send a simple text message and receive a picture of a piece of art matching the idea, words, or phrase texted. The bot unlocks all of the artwork of the museum for virtual viewing by anyone, anywhere, at any time.
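The core mechanic of a “Send Me”-style bot is simple: take an incoming text message, match it against tagged collection records, and reply with the best match. Here is a minimal sketch of that idea; the collection entries, tags, and URLs are invented for illustration and are not the museum’s actual data or code.

```python
# Minimal sketch of a "Send Me"-style bot: match an incoming text message
# to a tagged artwork and reply with its image link.
# All collection data below is hypothetical, for illustration only.

COLLECTION = [
    {"title": "Ocean Park #54", "tags": {"ocean", "abstract", "blue"},
     "image_url": "https://example.org/ocean-park-54.jpg"},
    {"title": "Woman with a Hat", "tags": {"portrait", "hat", "color"},
     "image_url": "https://example.org/woman-with-a-hat.jpg"},
]

def handle_message(text: str) -> str:
    """Return a reply containing the artwork that best matches the text."""
    words = set(text.lower().split())
    # Score each artwork by how many of its tags appear in the message.
    best = max(COLLECTION, key=lambda art: len(words & art["tags"]), default=None)
    if best is None or not (words & best["tags"]):
        return "Sorry, nothing matched. Try another word, like 'ocean'."
    return f"{best['title']}: {best['image_url']}"
```

In a real deployment, the message would arrive via an SMS or messaging-platform webhook and the matching would run against the full collection database, but the request-match-reply loop stays the same.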
Humanitarian organizations are actively researching and testing how messaging and bots can help refugees or those directly impacted by a natural disaster, according to a recent report by the International Federation of Red Cross and Red Crescent Societies. The World Food Programme developed and tested the “FoodBot,” a Facebook Messenger bot, to interact with the people it serves by providing information on WFP services, food prices, weather updates, nutrition, and disease prevention. UNICEF created its own bot, U-Report, to engage young people on a variety of issues. The bot, available via Twitter and Facebook Messenger, polls its followers (called “U-Reporters”) on a range of topics and uses the data to help influence public policy. UNICEF’s bot has had some early successes. For example, in Liberia, the bot asked 13,000 young people if teachers at their schools were exchanging grades for sex. Some 86 percent said yes, uncovering a widespread problem and prompting Liberia’s minister of education to work with UNICEF on addressing it.
However, the bots aren’t just technical tools for nonprofits. They also present significant ethical problems. Amazon and Netflix use algorithms to manipulate our choices of books and movies. Facebook manipulates what we see on the site to keep us there longer. Add to this the fact that research has shown that heavy use of mobile phones and social media can make us feel isolated, and the possibility arises that bots programmed with AI may be able to manipulate our emotions in unhealthy ways.
Take Yeshi, a virtual reality image of a young girl in Ethiopia created by charity:water to educate people about the six-hour walk that many girls and women make to get clean water. The conversation with Yeshi is “smart,” meaning that she asks and answers questions with a variety of images, maps, text, and videos. But what if Yeshi were “smart” in another way? What if she were programmed to manipulate people’s emotions to encourage them to donate even more money? Where is the line between cultivation and manipulation—and who determines it?
Another area of ethical grayness is maintaining people-centered practices and policies. For instance, which jobs will be acceptable to outsource to bots, and which ones, like, say, social worker, should never be substituted by even millions of lines of code? This is a real and current concern. Woebot, a therapy chatbot, engages in 2 million conversations a week and has been shown to reduce the symptoms of anxiety and depression. However, some experts have raised privacy concerns, as robot-to-human conversations are not covered by doctor-patient confidentiality laws.
The ramifications of not paying attention to the societal effects of social media are being felt right now. Not enough attention was paid to the possibility of manipulation by foreign agents using platforms like Facebook. Nor did we make it a public policy priority to understand the ramifications of private companies incentivized to sell our data to advertisers and control what we see. These are important problems that require ongoing attention. The impact of the bots on employment in the near future, however, is a much bigger societal problem.
The Age of Automation will be a technological marvel for many people. It will also be an economic and social disaster for those who lose their jobs in retail, driving, legal, finance, and healthcare. Putting the pieces of people’s lives back together is what the nonprofit sector does. Our people and organizations ensure that the most vulnerable people, the homeless, hungry, broken, and sick, are cared for and not forgotten. Our sector is also where nature is preserved, art is made, and education occurs. In other words, we are the soul of society.
Given the enormous challenges ahead, we want to provide specific steps for people and organizations to thrive in the Age of Automation. They include:
Understand the Adoption Trends
You don’t have to be an expert in artificial intelligence or know chatbot programming code, but you do need to understand what a chatbot is and at a high level how it works. More importantly, you need to understand the current usage trends. Luckily, the ICRC, together with The Engine Room and Block Party, has produced a useful report on the current and potential uses of messaging apps such as Facebook Messenger in humanitarian situations.
Get Some Hands-On Experience with Chatbots Developed for Social Good Purposes
Visit the different chatbots referenced in this article or use this curated list that Beth put together of examples from nonprofits and beyond. Try to determine the purpose and intended audience. Does the chatbot use open-ended conversation, or is it closed-ended? Is it complex or simple? Is it a pleasant or frustrating user experience?
Design a Simple Pilot
In our book, The Networked Nonprofit, we wrote that using new technologies is a contact sport, not a spectator sport. It’s time to get in the sandbox and try it out for your own organization. Start by determining a measurable objective. Do you want it to assist with marketing, such as building your email list, or with delivering services? Next, figure out who the intended audience might be. It will be very helpful to come up with one or two user personas and sketch out some potential conversation threads. Also determine the cost. Do you have a budget to hire a chatbot programmer (you’d need this for a more elaborate chatbot that uses AI), or will you use one of the free and low-cost chatbot authoring tools, like Octane AI, ManyChat, or Chatfuel? (You can read more about designing a pilot here.)
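A simple, closed-ended pilot of the kind described above is essentially a small state machine: each scripted message offers a few choices, and each choice moves the conversation to the next scripted message. Here is a minimal sketch of that structure; the script text, states, and menu keywords are all invented for illustration.

```python
# Minimal sketch of a closed-ended pilot bot: a scripted conversation
# modeled as a state machine. Each state maps to (reply text, transitions).
# The script content below is hypothetical, for illustration only.

SCRIPT = {
    "start": ("Hi! Reply NEWS for our newsletter or HELP for services.",
              {"news": "news", "help": "help"}),
    "news":  ("Great! Reply with your email address to subscribe.", {}),
    "help":  ("We offer food, shelter, and counseling. Reply START to begin again.",
              {"start": "start"}),
}

def respond(state: str, message: str) -> tuple[str, str]:
    """Given the user's current state and message, return (next_state, reply)."""
    _, transitions = SCRIPT[state]
    # Unrecognized input keeps the user in the same state and re-prompts.
    next_state = transitions.get(message.strip().lower(), state)
    reply, _ = SCRIPT[next_state]
    return next_state, reply
```

Sketching the conversation as a table like this before writing any code is a cheap way to test your one or two user personas: walk each persona through the script and see whether it gets them to your measurable objective.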
Evaluate and Iterate
Run your pilot for a few months. Gather data against your goals. You could also survey or interview some of the people who interacted with your bot and get their feedback. Based on this initial feedback, how might you improve your bot’s results? Is it ready to scale?
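Gathering data against your goals can be as simple as logging a few key events per conversation and computing rates from them. Here is a minimal sketch under the assumption that the bot logs (user, event) records; the log format, event names, and figures are invented for illustration.

```python
# Minimal sketch of evaluating a pilot against a measurable objective:
# compute completion and sign-up rates from a log of (user, event) records.
# The log format and data below are hypothetical, for illustration only.

from collections import Counter

log = [
    ("u1", "started"), ("u1", "completed"), ("u1", "signed_up"),
    ("u2", "started"),
    ("u3", "started"), ("u3", "completed"),
]

counts = Counter(event for _, event in log)
completion_rate = counts["completed"] / counts["started"]
signup_rate = counts["signed_up"] / counts["started"]
print(f"completion: {completion_rate:.0%}, sign-ups: {signup_rate:.0%}")
```

Comparing these numbers against the objective you set at the start of the pilot, alongside the survey and interview feedback, tells you whether to iterate on the script or start thinking about scale.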
We know from experience that nonprofits too often lean away from new technologies rather than into them. We cannot afford to be passive observers of bots, because we are going to need to learn how to control them—or they are going to control us (or kill us!). In the Age of Automation, the responsibility for lifting up the fallen, appealing to and activating our better angels, and preserving our humanity falls to the nonprofit sector. It’s time to get to work.