Investing in artificial intelligence has hit its stride. In 2020, $28 billion flowed into AI startups from eager venture capitalists. By 2021, that number had more than doubled to a record $59 billion, and there is little indication that interest in AI innovations will slow in the coming years.
What makes investing in AI so attractive is the recent profitability of backing AI-driven products and brands. From self-driving cars to smart robots, AI innovations in all sectors — including healthcare, manufacturing, and logistics — are changing how the world works and lives. As more people use these innovations, they become increasingly valuable and sought after.
In fintech, for instance, one AI innovation that has long been transformative is the robo-advisor. Many top financial firms offer robo-advisors to clients — including high-net-worth ones — eager for tech-enabled advice on building diversified portfolios based on their risk profiles, appetites, preferences for algorithmic trading, and a host of other inputs. Deloitte estimated that up to $3.7 trillion in assets were managed at least in part by robo-advisors in 2020. By 2025, that figure could exceed $16 trillion.
Another trending example is AI as a driving force in the venture capital industry. It won’t be long before AI innovations not only review companies but also acquire them outright, minimizing the need for traditional human intervention. This has already been tested in real estate investing with Zillow’s home-buying algorithm. The company’s project ultimately failed, but make no mistake: its failure only paves the way for the next, more advanced generation of platforms to take shape.
The news about AI is exciting from a 35,000-foot view. Upon closer inspection, however, it is riddled with dilemmas, the biggest being that AI models frequently reflect the opinions of the humans who build them. As we all know, humans are biased. Thus, AI innovations can and do reveal the biases of their creators.
Amazon’s AI hiring debacle in 2018 is a prime illustration of how unintended bias can play out in the real world. The algorithm that Amazon’s system relied upon was trained on historical hiring data, which seemed reasonable at face value. No one realized that the data was skewed in favor of men because the company had predominantly hired men in the past. To its credit, Amazon acknowledged the problem and stopped using the system. Nevertheless, many of the women who applied and might have been considered for employment never got their chance.
AI simply isn’t capable of spotting its own mistakes; it runs the algorithms it has been programmed to run. But the increasing scale of adoption makes it imperative for the public to get a stronger handle on the ethical issues surrounding artificial intelligence now, before AI becomes even more widespread and embedded in the fabric of modern life.
Taking a Deeper Dive Into AI’s Ethical Dimensions
A natural way to start talking about AI and bias is to lean into the topic of ethics as it relates to AI in general. Unfortunately, when most people discuss the ethics of artificial intelligence, the conversation gravitates toward talk of robots taking over the world.
Although it’s true that up to 30% of the hours worked globally could be automated by 2030, robot mutiny isn’t possible while we remain in control of AI’s goals. Therefore, that isn’t the main ethical consideration we need to discuss as AI creators, investors, and users.
Instead, we need to ask how to maintain this control, who bears responsibility for an AI innovation’s decisions, how we can ensure AI doesn’t perpetuate society’s worst beliefs and biases, and what role accessibility plays in AI ethics. In other words, we need to look at AI from both macro and micro perspectives. Only through a multidimensional lens will we be able to home in on the problems and make meaningful changes.
We can’t treat AI as theoretical, either. The time is past for a “head in the sand” approach like that of the farriers who feared Henry Ford’s Model T. AI is here, it is ubiquitous, and it is evolving. To foster and influence AI’s best attributes, we must be proactive about ethics in artificial intelligence in as many ways as we can.
Where should we begin? A good starting point is to tackle the AI-bias conversation head-on. The rate at which AI’s capabilities are expanding is startling, and where that trajectory leads could be the greatest or the worst thing to happen to humanity. If we miss this moment to control and contain AI’s potentially adverse effects, we could find ourselves facing outcomes we never intended — and repercussions that could have been avoided.
Spotting and Stopping Bias in AI
As mentioned above, bias in AI occurs for one reason: The AI product or innovation has been built in a way that produces biased results. Rarely is this intentional; more often, it happens because an AI model hasn’t been fully tested or vetted. Regardless of intent, however, a biased AI innovation produces negative results that can have significant repercussions on real people’s lives, as the Amazon example reveals.
What strategies can overlay AI with ethics and reduce the chance of bias appearing? Successfully mitigating bias in AI models requires a multifaceted approach built on the following practices:
1. Ensure human oversight in AI innovation development.
For AI to be more empowering to all humans, various types of human involvement must be folded into AI innovations, from development to deployment and beyond. For instance, human-in-the-loop approaches require humans to oversee models and their outcomes, applying the critical common sense that machines lack. This way, AI isn’t left to compound harmful outcomes unchecked.
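As a concrete illustration, here is a minimal human-in-the-loop sketch in Python: predictions the model is confident about are applied automatically, while uncertain ones are routed to a human reviewer. The confidence threshold and the review queue are illustrative assumptions, not part of any particular production system.

```python
# Minimal human-in-the-loop sketch: low-confidence predictions are
# escalated to a human reviewer instead of being acted on automatically.
# The 0.90 threshold and the ReviewQueue are illustrative assumptions.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReviewQueue:
    """Holds items awaiting a human decision."""
    pending: List[Tuple[str, str, float]] = field(default_factory=list)

    def submit(self, item_id: str, label: str, confidence: float) -> None:
        self.pending.append((item_id, label, confidence))

CONFIDENCE_THRESHOLD = 0.90  # assumed cutoff; tune per application

def route_decision(item_id: str, label: str, confidence: float,
                   queue: ReviewQueue) -> str:
    """Auto-apply confident predictions; escalate the rest to a human."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return f"auto-applied: {label}"
    queue.submit(item_id, label, confidence)
    return "escalated to human review"

queue = ReviewQueue()
print(route_decision("resume-001", "advance", 0.97, queue))
print(route_decision("resume-002", "reject", 0.62, queue))  # a human decides
```

The design choice here is the key point: the machine never gets the final word on borderline cases, so a human has the chance to catch a skewed outcome before it affects anyone.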
Venture capitalists are uniquely positioned to ensure that the AI models and innovations they fund involve human contributions, oversight, and guidance. By insisting on human-in-the-loop or similar oversight structures, venture capitalists can have a direct effect on the AI that enters the marketplace.
2. Make certain that AI product creation follows stringent, consistent security measures.
AI products need to be reliable and trustworthy, not beholden to bias. Those embedded with advanced security measures, such as multifactor authentication and code signing, will be safer and harder to manipulate into biased or corrupted conclusions than those that are not.
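To make the integrity side of this concrete, here is a minimal sketch, assuming a checksum-gated loading step: a model artifact is verified against a known-good digest before it is used, so a tampered file is rejected. The file name and expected digest are hypothetical, and real code signing relies on cryptographic signatures rather than bare hashes, but the gating pattern is similar.

```python
# Sketch of an integrity gate: refuse to load a model artifact unless its
# SHA-256 digest matches a known-good value. The path and digest below are
# hypothetical placeholders for demonstration only.

import hashlib
from pathlib import Path

EXPECTED_SHA256 = "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"  # hypothetical

def verify_artifact(path: Path, expected_digest: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the expected value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_digest

model_path = Path("model.bin")  # hypothetical artifact
if model_path.exists() and verify_artifact(model_path, EXPECTED_SHA256):
    print("Artifact verified; safe to load.")
else:
    print("Verification failed or file missing; refusing to load.")
```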
Similarly, AI systems should legitimize and protect the data they amass and use. This can happen by promoting and paying strict attention to privacy regulations. Take the California Consumer Privacy Act (CCPA), for example, which was enacted in 2018 and applies to companies doing business in California.
AI products must be compliant out of the box, or able to become compliant, for users conducting California-based business operations. Products that cannot easily be modified to meet the CCPA or similar requirements put all stakeholders at risk, from inventors to consumers.
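As an illustration of honoring one CCPA provision, the consumer’s right to opt out of the sale of personal data, here is a hedged sketch; the record shape and the do_not_sell flag are assumptions for demonstration, not a compliance implementation.

```python
# Illustrative CCPA-style opt-out filter: records flagged by the consumer
# are excluded before any data sharing. Record fields are assumed for demo.

from typing import Dict, List

def sellable_records(records: List[Dict]) -> List[Dict]:
    """Keep only records whose owners have not opted out of data sale."""
    return [r for r in records if not r.get("do_not_sell", False)]

records = [
    {"user_id": 1, "do_not_sell": False},
    {"user_id": 2, "do_not_sell": True},  # consumer exercised the opt-out
    {"user_id": 3},                       # no flag recorded; handle per policy
]

for record in sellable_records(records):
    print(f"eligible for sharing: user {record['user_id']}")
```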
3. Deploy diverse teams to evaluate AI models for bias.
The goal is for all AI creations to be as neutral as possible, but it can be nearly impossible for AI model designers to see the bias in their own innovations. As psychologist and Nobel laureate Daniel Kahneman notes in his book “Noise,” finding and eliminating bias isn’t easy because it can be so hard to spot.
But as frameworks and blueprints that can effectively pinpoint bias are constructed, AI builders will be able to systematically reduce bias incidents in their products. Scientists are already developing frameworks to make bias discovery less arduous, and a simple version of one such check is sketched below.
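As one hedged example of what such a framework might check, the sketch below computes a demographic parity gap, the difference in positive-outcome rates across groups; the sample data and the 0.10 tolerance are illustrative assumptions, not a standard.

```python
# Hedged sketch of one common bias check: demographic parity difference,
# the gap in positive-outcome rates between groups. Data and the 0.10
# tolerance are illustrative assumptions.

from collections import defaultdict
from typing import Dict, List, Tuple

def positive_rates(decisions: List[Tuple[str, int]]) -> Dict[str, float]:
    """decisions: (group, outcome) pairs, where outcome is 1 for a positive decision."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

decisions = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
rates = positive_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap = {gap:.2f}")
if gap > 0.10:  # assumed tolerance
    print("Warning: outcome rates diverge across groups; audit the model.")
```

A check like this doesn’t prove a model is fair, but it gives a diverse review team a concrete number to interrogate rather than a hunch.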
The Relationship Between AI, Bias, and Inclusion
It would be remiss to discuss AI, ethics, and bias without addressing the role inclusion plays in artificial intelligence. Regrettably, exclusion and bias frequently go hand in hand: If an AI product is biased, it may exclude members of certain groups, and members of those groups may in turn be unfairly left out of crucial AI discussions.
For instance, AI modeling has been out of reach for many would-be founders because of the racial wealth gap. Black households hold roughly 12 cents in wealth for every $1 held by white households; for Latinx households, the figure is 21 cents. These are huge differences that can make it nearly impossible for Black and Latinx entrepreneurs to self-fund their inventions through to fruition. Unlike founders from white households, who are more likely to be able to tap into generational wealth, founders from underrepresented backgrounds often need to find funding elsewhere.
Venture capitalists and investors are in a strong position to help close the wealth gap by making AI development more inclusive, simply by virtue of deciding which new products and innovations get funded. They might just need to adjust the way they evaluate potential projects for their portfolios.
Venture capital, like AI, is a tool that can be leveraged for positive, negative, and neutral purposes. Chipping away at this generational wealth gap by conscientiously correcting the resource imbalance for underrepresented founders is one way venture capitalists can ameliorate economic inequality and promote inclusivity.
AI for the Betterment of Society
AI has revolutionized nearly every sector in some way. It has shown itself to be useful and efficient, yet it is not as ethical or unbiased as it needs to be. Only through oversight and care can we ensure AI does right by the people it serves. AI alone cannot judge right from wrong, so the responsibility falls to humans to create supportive, sustainable, and responsible AI.
Looking toward the future, perhaps AI will end up forcing humanity to face up to its most deeply held and rarely discussed cognitive biases. That might be exactly what we need, particularly in a time of social upheaval and political unrest. If AI is a mirror held up to the human psyche, it should reflect an image we are proud to recognize, not ashamed to accept.
Thus far, we’ve done well when it comes to building AI innovations. We just need to bake more awareness and ethics into the recipe. Maybe by tackling bias and the ethical issues of artificial intelligence, we’ll end up improving the state of humanity for all the people AI serves.
Dan Conner is the general partner at Ascend Venture Capital, a micro-VC in St. Louis that provides financial and operational support to startup founders looking to scale. Conner specializes in data-centric technologies that enable the future states of industries. Before founding Ascend Venture Capital, Conner worked on the operations side of high-growth startups, leading teams to build scalable operational and financial infrastructure.
Image by DeepMind