Artificial intelligence is increasingly making headlines and garnering funding, drawing attention to a wide spectrum of questions, from the blurred lines between human and AI-generated art to the current state of AI and accessibility. The public and private sectors are both quickly learning that AI can accomplish amazing things when fed the right data set. At this crucial time of rapid innovation, it’s critical that we not overlook inclusion in AI.
We’re already seeing two main reactions to AI’s growing role in our daily lives: fear and excitement. The fears around AI (such as its potential sentience or the myth that it will replace us) are no different from those raised by computers, the internet, cars, and every other innovative technology. The excitement, however, must be matched by attention to AI ethics and bias.
AI, like any technology, can favor certain demographics. Research has shown that statistical, human, and systemic biases all play roles in the outputs the technology produces. Everything from Google’s and Bing’s search algorithms to Amazon’s hiring AI has repeatedly been shown to fall short of neutrality, so ethical issues with AI should be a top priority for all leaders.
What Is the Current State of AI Inclusion?
It’s not always easy to see biases, but they’re laid plain in AI-generated art programs such as DALL-E 2 and Midjourney. Because these programs generate images based on the historical consensus embedded in their training data, they often reproduce stereotypes of race, gender, and style.
For instance, ask these programs to generate an image of a pilot, and you’re most likely to be presented with a picture of a white man flying a plane. By contrast, ask for an image of a flight attendant, and you’re most likely to get a picture of a white woman walking the aisle. The people who historically had the means to create imagery (in museums, in Hollywood, and on the internet) are the ones most likely to be portrayed. The same bias appears in AI writing, facial recognition, financial analysis, recruiting, and more.
Further, many AI technologies are expensive to access, so the more means you have, the more likely you are to be aware of them. And unfortunately, the wealth gap is widening: For every dollar the median white family holds in wealth, the median Black family holds 12 cents, and the median Latinx family holds 21 cents. The top 1% of people own 31.8% of the world’s wealth, and the majority of wealth (77.1%) is owned by the top 10%. The effects reverberate sharply across technology accessibility.
Additionally, major companies such as Meta, Alphabet, and Netflix became global juggernauts by deploying the right combination of innovative technology, business models, and venture capital funding. VC funding was a key ingredient in these businesses, but studies have shown that investors spend 18% more time on pitches from all-male teams, and that funding gap bars diverse groups from building transformative companies.
But creating diversity and inclusion in AI is not easy, and we don’t always recognize when we’re falling short. Even when we do everything we can to ensure DEI, subtler forms of discrimination can appear, such as Amazon’s aforementioned hiring algorithm favoring aggressive words and phrasings used almost exclusively by men.
To that end, here are three crucial steps we can take to raise awareness of these gaps, improve inclusion in AI, and achieve accessibility for all:
1. Transparency in AI
The most important step toward increasing accessibility in AI development is creating transparency. For instance, when AI technology is used on public data, its biases (such as racial discrimination in facial recognition platforms used by law enforcement) should be clearly acknowledged and proactively addressed.
It’s vital to help people better understand the models they’re working with. Raising awareness will ease many of the fears surrounding them, such as the misplaced beliefs that these systems are sentient or possess superhuman linguistic knowledge.
2. Security and Privacy
AI can be exploited, breached, and leveraged in ways developers do not intend. There are myriad security and privacy concerns stemming from AI databases, especially those that scrape public data. Developers need to provide secure and private platforms that instill trust in the technology.
3. Making AI Accessible
Finally, we need to make AI more accessible to people of every functional ability, socioeconomic background, gender, and age. The more people who actively contribute their perspectives to the data sets, the more diverse the data the AI can pull from, and the more accurate its outputs will ultimately be.
AI is a revolutionary technology already making waves as specialized machine learning algorithms are trained on specific tasks and workflows. However, it’s vital that we never lose the threads of diversity and inclusion in AI development; biased data sets, models, and metrics can easily skew outcomes, leading to real-world consequences. So long as we keep our ethical underpinnings, this technology will improve rapidly and help us overcome remarkable obstacles in our everyday lives.