Diversity & Ethics In AI: A Reflection Of Its Designer

      Posted by Chandana Madaka on Jul 4, 2019 in Thought Leadership

      Corporations have revamped their take on diversity and championed initiatives to draw in talent, but as Dr. Kallepitis, a senior data scientist at Strands, points out, the outlook in fields such as Artificial Intelligence (A.I.) remains troubling. The consequences of this diversity problem extend far, affecting the rising FinTech space and challenging the ethics of unconsciously biased technologies.

      What’s the buzzword companies love? Diversity

Many aspects of society, and even history, are riddled with cases of conscious bias that later embedded themselves so deeply in daily life that they became unconscious. Our brains are built to make unconscious decisions; we would otherwise be overwhelmed by the choices we face daily. It is difficult to identify our personal unconscious biases and be honest about which stereotypes influence our behavior. In the world of FinTech, unconscious biases can bleed into decisions at the product design level, impacting real lives. When these biases eventually propagate into the generated data, explains Dr. Kallepitis, we need to be wary of the imprint this leaves on A.I. algorithms.

Whether it be facial or speech recognition, convenient robot vacuums, or an algorithm evaluating loan-seekers, A.I. is now a permanent part of the terrain. Small businesses can check cash flow, categorize customer spending, and gain insights from invoice tracking, all from an integrated solution. However, the nature of A.I. also brings a host of ethical concerns. Data scientists define the questions and analyze the data, but the core problem of A.I. is that it compounds the bias in its training data; reproducing patterns is all it knows how to do. Even the most carefully constructed algorithms have only real-world data to pull from, rendering them dependent on the whims and imperfections of the world.

      Human bias in Artificial Intelligence - how do we fight back?

A meaningful way to fight this flaw is by ensuring developer teams are well-rounded and diverse, and not just in the most obvious sense of the word. Companies tend to think of "diversity" in terms of gender, nationality, and age. Unpacked further, though, diversity is multi-faceted, with identifiers such as heritage, religion, and culture that many typically overlook. When serving large populations with technology, companies must take all of these characteristics into consideration. The consequences of not capturing the full scope of diversity are, to say the least, undesirable.

Continuing this strand of thought, people of color face bias from facial recognition algorithms, as the technology is often unable to detect facial features that are underrepresented in its training data. Up and coming in the FinTech space, chat bots are the next A.I. step that banks and FinTech providers will take to assist customers. Improving customer conversations is a top priority for the industry, yet chat bot programs have already prompted worries with responses that appear to condone verbal sexual harassment of women. Take, for example, Apple Siri's response to verbally abusive phrases: “I’d blush if I could.” In a publication titled with that exact phrase, UNESCO denounced the flirty and submissive responses in Apple Siri’s program for reinforcing the image of women as complicit, bringing ethics questions to the forefront.

Currently, women make up only 12% of A.I. researchers, which points to the importance of encouraging more women to enter STEM fields. As overwhelmingly white, male engineering teams build these A.I. systems, they inadvertently code their biases into them. The solution researchers advocate is to build more diverse teams that represent as many cultural norms and backgrounds as possible. Strands strives to prioritize diverse practices company-wide, from the executive level down. Our Machine Learning (Nous) team reflects this commitment, comprising five nationalities and 40% women. This is definite progress, but it is simply not enough.

Diversity training programs exist in workplaces to keep our personal biases out of the work we produce, including the algorithms we design. As the tide of innovation continues, A.I. increasingly creates competitive advantages in financial services, with a remarkable impact on the global economy.

Homing in on the FinTech space, much of our work has a substantial impact on individual lives. Dr. Kallepitis points to the evaluation of loan borrowers as an example of A.I. reinforcing societal biases. The algorithm builds each applicant's profile from data on who has successfully kept up with their loan over the last five years. However, that data was generated by a society in which certain social groups historically benefitted, so the algorithm concludes that those groups will be the more loyal borrowers. In today’s more inclusive and multicultural society, this is not an acceptable outcome. An algorithm can only learn from the data it is given; as the saying goes, garbage in, garbage out. The only way to fight this bias is to fix the data.
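The loan example above can be sketched in a few lines. This is a hypothetical toy, not Strands' actual model or any real lender's system: a naive scorer that learns repayment rates per social group from a historically skewed record will simply reproduce that skew in its scores.

```python
from collections import defaultdict

# Toy historical records of (social_group, repaid_loan). Group "A" was
# historically favoured with better lending terms, so the record shows
# a higher repayment rate for it, regardless of individual merit.
history = ([("A", True)] * 90 + [("A", False)] * 10
           + [("B", True)] * 60 + [("B", False)] * 40)

def train_group_scorer(records):
    """Learn P(repaid | group) straight from the historical record."""
    totals, repaid = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        repaid[group] += ok  # True counts as 1, False as 0
    return {g: repaid[g] / totals[g] for g in totals}

scores = train_group_scorer(history)
# Two otherwise identical applicants now receive different scores
# purely because of group membership: the data's bias became the
# model's bias.
print(scores["A"], scores["B"])  # 0.9 0.6
```

Nothing in the code is malicious; the skew comes entirely from the input data, which is exactly the point of "garbage in, garbage out."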

      “A.I. is made for humans, by humans. It’s up to us to make sure the way A.I. affects our lives is ethical and human-centric. The burden falls to A.I. developers, and it is going to take a change at a cultural level to achieve this.” - Charis Kallepitis

Fixing the data is no easy task, especially in the corporate world. If you start by building more diverse teams, you risk pitting diversity and merit against one another in recruitment processes. Alternatively, companies can choose the human intervention route, which pinpoints the lack of inclusion in the data and acts upon it. In the absence of balanced, comprehensive data sets, human expertise comes into play, with complementary tools from economics, sociology, and other sciences that can counteract the bias, says Kallepitis.
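One concrete example of such a data-side intervention (my own illustration; the article does not name a specific technique) is reweighing: assigning each record a weight inversely proportional to its group's frequency, so that an over-represented group no longer dominates training.

```python
from collections import Counter

# Hypothetical skewed sample: group "A" appears three times as often
# as group "B" in the historical data.
records = ["A"] * 150 + ["B"] * 50

def reweigh(groups):
    """Give each record a weight of n / (k * count(group)), so every
    group ends up with the same total weight in the training set."""
    counts = Counter(groups)
    n, k = len(groups), len(counts)
    return [n / (k * counts[g]) for g in groups]

weights = reweigh(records)
# Each "A" record weighs 2/3 and each "B" record weighs 2, so both
# groups now contribute a total weight of 100 out of 200.
```

A weighted learner trained on these records treats both groups as equally informative, partially counteracting the imbalance in the raw data without discarding any of it.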

      Ultimately, the society that shaped these unconscious biases must step in to prevent systemic inequities, making steady strides towards the future.




      Chandana Madaka

      Chandana has experience in marketing strategies, company processes, and data analytics. She is currently enrolled at the University of Texas pursuing a BBA in Business Honors, a BBA in Finance, and a minor in Management Information Systems.
