
Bias Becoming Latest Trend In AI


“The nature versus nurture debate involves the extent to which particular aspects of behavior are a product of either inherited (i.e., genetic) or acquired (i.e., learned) influences. Nature is what we think of as pre-wiring and is influenced by genetic inheritance and other biological factors. Nurture is generally taken as the influence of external factors after conception, e.g., the product of exposure, life experiences and learning on an individual.”

Read More: Common Phrases to Avoid When Dealing with Everyday Customers and Potential Clients

AI-generated algorithms are like puppies. Just like a newborn pit bull, an algorithm inherits influences. And, also like a newborn pit bull, it acquires and learns influences over time that dictate its behavior. A puppy, much like a new piece of machine-learning software (a chatbot, for example), can be either an enjoyable thing to have or a liability, depending on how it's nurtured.

Sarah Wysocki case

Spending on AI systems is forecast to reach $77.6 billion by 2022 – roughly a 200 percent increase over the $24 billion projected for 2018. In an article I wrote last month on how companies should decide whether they need a chatbot, I noted that "it takes transcripts from that volume of interactions to generate data needed to properly train the A.I. that powers the virtual agent. Yes, bots relying on cognitive, machine-learning technology can evolve on their own, but they still require massive amounts of existing data to get started." In other words, machine-learning algorithms are only as good as the quantity and quality of the data fed into them (in a chatbot, for instance), and as the system that aggregates that data to help the model quite literally "learn" over time.

Sarah Wysocki was a fifth-grade teacher in the D.C. school district, trained in emotional and ethical development. In 2011 she was fired despite being revered by parents, students, and administrators, after a new machine-learning system the district used to score teacher performance judged hers subpar. Exactly why it did so is unclear, because the system was too complex to be understood even by the people who fired her. The district's verdict? That she should have spoken up sooner and that nothing could be done now. She no longer works there.

Amazon recruiting tool

The story of Sarah Wysocki is a troubling one, but it shines a light on a topic that is becoming increasingly relevant in the multibillion-dollar machine-learning industry. According to HBR, "Amazon was forced to scrap an AI-powered recruiting tool because they could not remove gender bias from the results. They were unfairly favoring men because the training data they used taught the system that most of the previously hired employees of the firm that were viewed as successful were male. Even when they eliminated any specific mention of gender, certain words which appeared more often in male resumes than female resumes were identified by the system as proxies for gender."
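To see how a proxy can smuggle gender back in, here is a minimal, hypothetical Python sketch. The data, feature names, and numbers are all invented for illustration; this is not Amazon's model or data, just the general failure mode HBR describes.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Simulated history: past hires skewed heavily toward men.
is_male = rng.random(n) < 0.8
# A word like "women's" (as in "women's chess club captain") that
# appears almost only on female resumes -- a proxy, not a skill.
mentions_womens = (~is_male) & (rng.random(n) < 0.9)
years_experience = rng.normal(5, 2, n)

# "Successful hire" labels that reflect the historical skew.
hired = (is_male | (rng.random(n) < 0.3)).astype(int)

# Train WITHOUT any explicit gender column.
X = np.column_stack([mentions_womens.astype(float), years_experience])
model = LogisticRegression().fit(X, hired)

print("weight on the proxy word:", model.coef_[0][0])

Even though the model never sees gender directly, the proxy word ends up with a strong negative weight, and resumes containing it are penalized.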

Read More: Is Surveillance Technology Emerging as a New Customer Service Trend?

Automation is a lucrative industry that many businesses rely on and profit from, and that reliance will only grow. But faulty AI can learn precisely the wrong lessons: false performance metrics (as seen in the Sarah Wysocki case), gender bias (as seen in Amazon's recruiting tool), and, most recently, manipulative search bias.

Google confirmation bias

IBM estimates that 90% of the world's online data has been created in the past two years. Much of that content is explored through the world's most powerful search engine and the reigning champion of manipulative data, Google. Last December, Google CEO Sundar Pichai appeared on Capitol Hill for the first time to testify before Congress about a number of controversies surrounding the search giant over the past year, from allegedly illegal data collection to political bias.

Commenting on the allegations, Gabriel Weinberg, CEO and founder of DuckDuckGo, told Yahoo Finance: "Google is manipulating search and news results to bias them towards what it thinks it knows about people, based on the troves of personal data it has on them. This filtering and censoring of search and news results is putting users in a bubble of information that mirrors and exacerbates ideological divides."

While this shouldn't be breaking news to anyone, the extent to which our search results (or worse, someone else's) are shaped by user data and reinforced by confirmation bias from previous history is a huge problem for consumers. In the words of Warren Buffett: "What the human being is best at doing is interpreting all new information so that their prior conclusions remain intact."
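As a toy illustration of that bubble effect, consider the hypothetical re-ranker sketched below in Python. The topics, scores, and personalization weight are all invented; a real search engine is vastly more complex, but the basic mechanism of boosting whatever a profile says you already like is the same.

from collections import Counter

def personalized_rank(results, click_history):
    # Re-rank so topics the user clicked before rise to the top.
    prior = Counter(click_history)
    return sorted(
        results,
        key=lambda r: r["base_score"] + 0.5 * prior[r["topic"]],
        reverse=True,
    )

results = [
    {"title": "Story from outlet A", "topic": "topic_a", "base_score": 1.0},
    {"title": "Story from outlet B", "topic": "topic_b", "base_score": 1.2},
]

# The same query, two different users:
print(personalized_rank(results, ["topic_a"] * 10))  # outlet A ranks first
print(personalized_rank(results, ["topic_b"] * 10))  # outlet B ranks first

Two users issuing the identical query see opposite orderings, each one confirming what they already clicked – the mirroring and exacerbating of divides that Weinberg warns about.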

Tune in to our live event, The Contact Center of 2025, on December 3rd and 4th to get ahead of competitors with analysts' predictions and cutting-edge technology trends at our complimentary online summit. Everyone knows the right questions to ask about customer centricity; few know the answers as well as CCW's world-class experts and contributors.

