Recently, Harvard Business Review published an article by Michael Schrage on the virtues and dangers of “algorithmic autonomy” in corporate settings. I read it because I wanted to learn what “algorithmic autonomy” means and why HBR is paying attention to it.
According to Schrage, the debate over its appropriate use is here and now, front and center, at many well-known organizations. Here’s what he says:
Charismatic CEOs enjoy leading and inspiring people, so they don’t like delegating critical business decisions to smart algorithms. Who wants clever code bossing them around? But that future’s already arrived. At some of the world’s most successful enterprises — Google, Netflix, Amazon, Alibaba, Facebook — autonomous algorithms, not talented managers, increasingly get the last word. Elite MBAs (Management By Algorithm) are the new normal. (1)
The article describes four models for integrating smart algorithms into an organization’s decision-making, ranging from the autonomous/autonomy adviser, patterned after consultancies like McKinsey, Bain, and BCG, to all-in autonomy, exemplified by several (unnamed) investment firms.
The purpose of this post is not to provide a detailed critique of the HBR story, so I won’t delve deeply into all four decision-making models the author explores. Instead, I’d like to use the article’s basic thesis to think about where our product, First Draft, fits into the paradigm of autonomous algorithms.
At first glance, I assumed we were most aligned with the autonomous/autonomy adviser approach, described this way:
Autonomous algorithms are seen and treated as the best strategic advisors you’ll ever have, but they’re ones that’ll never go away. They’re constantly driving data-driven reviews and making recommendations. They both take initiative on what to analyze and brief top management with what they find. But only the human oversight committee approves what gets “autonomized” and how it is implemented. (2)
First Draft matches this description in many ways. Our product is always working on behalf of the fundraiser, not only reporting on what it finds but also giving prescriptive advice on what to do next (“Send this email to this prospect now!”).
However, the last sentence highlights where our approach differs. We don’t believe that algorithms should, at this time, be autonomized. We don’t want our software sending emails directly to prospects and donors, and our clients don’t, either; they want (and need!) a say in what ultimately gets sent because they are the relationship-builders. As our team has said in recent posts, we believe the best form of AI is “embedded” or “invisible” AI. The user (fundraiser or researcher) never consciously thinks, “I’m now using AI.” They’re focused on doing their jobs and are simply glad their software is so helpful and unobtrusive in getting their work done.
Therefore, there is no “oversight committee” deciding “what gets autonomized and how it is implemented” in First Draft; there is just the individual fundraiser working with First Draft to cultivate and solicit more donors for his or her organization. And then there is us at Gravyty, constantly monitoring how our software is performing and discussing its use cases with our clients. We make improvements and modifications based on their experience with the embedded AI so that they never have to worry about whether it is serving their needs as relationship managers.
So, I think it’s useful to propose an alternative category to algorithmic autonomy and describe our AI as “algorithmic symbiosis.” In this model, the fundraiser and the machine work symbiotically, with minimal friction, to achieve the organization’s goals. Incidentally, the Merriam-Webster Online Dictionary defines symbiosis as “the living together in more or less intimate association or close union of two dissimilar organisms.” Artificial and human intelligences are clearly dissimilar entities, and we believe they should remain that way while working in harmony, freeing the fundraiser or researcher to focus on building and using the skills that are uniquely human.
While the HBR article seems to imply that clashes over degrees of algorithmic autonomy are just part of the price of doing business in the age of AI, I argue that they don’t have to be. We have the ability to eliminate that conflict entirely (and already have at Gravyty!) so that users never have to worry about Gravyty’s AI doing the work that the human should do.
See how much more you can achieve as a fundraiser or researcher with “algorithmic symbiosis” and First Draft!