With AI becoming all pervasive in Technology, How Robust is your Chief Bias Officer?

In a first-of-its-kind study conducted with artificial intelligence algorithms, the Royal Historical Society revealed a surprising shocker: a massive gender bias plaguing the U.K. workforce in 2019.
The AI-powered system scanned the U.K. web, examining the current breakdown of positions held by men and women. It assessed more than 108 different economic sectors and found that nearly 87 percent of these fields showed a preferential bias towards men, meaning men accounted for a disproportionate share of senior leadership positions.
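To make the arithmetic behind that 87 percent figure concrete, here is a minimal sketch of the kind of tally such a system might perform. The sector names, counts and comparison rule below are illustrative assumptions, not the study's actual methodology.

```python
# Hypothetical sketch: estimate the share of economic sectors whose senior
# leadership skews towards men. Sector names and counts are invented.
from typing import Dict, Tuple

# sector -> (men_in_senior_roles, women_in_senior_roles); illustrative numbers only
sectors: Dict[str, Tuple[int, int]] = {
    "finance": (820, 180),
    "health_care": (400, 600),
    "construction": (950, 50),
    "education": (450, 550),
}

def share_of_male_skewed_sectors(data: Dict[str, Tuple[int, int]]) -> float:
    """Fraction of sectors where men hold more senior roles than women."""
    skewed = sum(1 for men, women in data.values() if men > women)
    return skewed / len(data)

print(f"{share_of_male_skewed_sectors(sectors):.0%} of sectors skew male")
```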
The pioneering study underscores AI's immense potential to help illuminate multi-faceted challenges in our world; its most powerful capability lies in analyzing vast amounts of data and finding patterns with ease. But while AI can help uncover biases and inequities, it can just as easily entrench them and make the journey worse if it is built on flawed foundations.

Here is an anecdote that illustrates the problem. When Ghanaian-American computer scientist Joy Buolamwini, a Rhodes Scholar and researcher at the MIT Media Lab, was a graduate student, she found that an AI-powered facial recognition system she was using could not identify her facial features and recognized her presence only once she donned a white mask.
In Joy Buolamwini’s case, the facial recognition system failed to recognize her because the data used to train the software drew on a very limited pool of mostly fair-skinned faces and, most importantly, lacked sufficient diversity of facial patterns.
In other words, the problem began with the data itself: as the artificial intelligence got “smarter and smarter,” the more it learned from this flawed data set, the more it reinforced its blind spots, because the patterns it had been trained on were so limited.
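To ground that point, an engineer or auditor could run a simple representation check on the training labels before any model is trained. The sketch below is a hypothetical Python example; the demographic labels, counts and the 10 percent floor are assumptions chosen purely for illustration.

```python
# Hypothetical sketch: audit a face-image training set for demographic balance
# before training. Labels and the 10% floor are illustrative assumptions.
from collections import Counter

def representation_report(labels, min_share=0.10):
    """Warn about any demographic group falling below min_share of the data."""
    counts = Counter(labels)
    total = sum(counts.values())
    for group, count in sorted(counts.items()):
        share = count / total
        flag = "  <-- under-represented" if share < min_share else ""
        print(f"{group:>20}: {share:6.1%}{flag}")

# Illustrative label distribution similar to the skew Buolamwini describes
labels = ["lighter_male"] * 700 + ["lighter_female"] * 200 \
       + ["darker_male"] * 70 + ["darker_female"] * 30
representation_report(labels)
```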
Through the Algorithmic Justice League, an initiative Buolamwini launched to highlight AI biases and train organizations in best AI practices, she has become a catalyst for fairer, more ethical AI approaches. In an era of AI-driven decision-making, that mission is only becoming more crucial, especially as we enter an age of affirmation-based marketing.
Companies must re-imagine not only their data sets and algorithmic training but also the processes and personnel surrounding AI, particularly as these systems extend into other domains, from criminal justice to hiring and recruitment.
As an IBM white paper notes, more than 180 human biases have been defined and classified, each clouding judgments and influencing decisions. AI can replicate these biases all too easily, even with a skilled staff. Say a company uses AI to make salary decisions for its employees, based in part on pay history: that is likely to put women at a substantial disadvantage simply because they have been discriminated against in the past. Recommendation engines, for example, learn from users’ preferences and make suggestions accordingly. The key differentiator for a technocrat is to exercise visionary, sound judgment in pursuit of parity and the elimination of bias. As with other problems, the first step to overcoming it is to acknowledge that it exists.
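The pay-history example can be made concrete with a toy sketch: a naive “salary recommendation” anchored on a group’s historical pay simply reproduces the historical gap. All names and figures below are invented for illustration and do not describe any real system.

```python
# Hypothetical sketch: a "salary recommendation" that anchors on historical pay
# reproduces past discrimination. All figures are invented for illustration.
historical_salaries = {
    "group_a": [62000, 64000, 66000],   # historically favoured group
    "group_b": [52000, 54000, 55000],   # historically disadvantaged group
}

def recommend_salary(group: str, raise_pct: float = 0.03) -> float:
    """Naive model: new offer = average historical pay for the group + a raise."""
    history = historical_salaries[group]
    return (sum(history) / len(history)) * (1 + raise_pct)

for group in historical_salaries:
    print(group, round(recommend_salary(group)))
# The gap between the groups persists: the model has "learned" the old bias.
```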
Companies must re-imagine their paradigms for data sets and algorithmic training, and simplify the processes and personnel surrounding the AI dimension, to get the best outcomes. Tackling these questions will be crucial as we move toward 2020, which makes niche roles like the “Chief Bias Officer” relevant and paramount: such officers will not just scrutinize data but also enforce rigorous ethical standards to combat bias as AI tools become ever more omnipresent and pervasive in our work environments.
With AI now ubiquitous, a Chief Bias Officer could, for instance, affect hiring positively by re-engineering teams to push for diversity, and then regularly checking the data inputs the engineers choose to feed the AI algorithms. This is an example of a “Personnel Effectiveness Policy.”
Lastly, it is essential to recognize that AI is not flawless; one pill does not work for all ailments, and bad inputs mean bad outputs. A Chief Bias Officer should therefore up the ante by conducting regular performance reviews of algorithms with controlled tests, examining their outputs and fine-tuning them continuously for the overall betterment of an organization free of bias.
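One way such a controlled test might look in practice is a periodic comparison of a model’s favourable-outcome rates across groups. The sketch below is hypothetical: the group names and data are invented, and the 0.8 threshold follows the common “four-fifths” rule of thumb rather than any standard the article prescribes.

```python
# Hypothetical sketch of a periodic audit a Chief Bias Officer might run:
# compare a model's positive-outcome rates across groups on a controlled test set.
def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def audit(outcomes_by_group, threshold=0.8):
    """Flag groups whose favourable-outcome rate falls below the four-fifths rule."""
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    best = max(rates.values())
    for group, rate in rates.items():
        ratio = rate / best
        status = "OK" if ratio >= threshold else "REVIEW"
        print(f"{group}: rate={rate:.2f} ratio={ratio:.2f} [{status}]")

# 1 = favourable model output (e.g. shortlisted), 0 = not; invented data
audit({
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
})
```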
By handling this holistically, a Chief Bias Officer can draw on a diverse range of backgrounds and perceptions and prevent distortions and blind spots at work. Buolamwini advises organizations to check their algorithms for different sets of biases in the workplace, and a fresh, multi-perspective approach will enhance that effort, resulting in less biased data and algorithms that are better coded for equity and parity.
Repairing algorithmic biases will entail a joint effort by management, HR and engineers, working together to keep conscious and unconscious human prejudices out of systems. We should also not forget the element of human folly, which can never be fully eliminated.
Being cognizant of it, however, can forge fairer, more intelligent AI outcomes that truly make our workplace decisions less artificial and more intelligent for a dynamic workforce.
Want an AI audit or gamification exercise done at your workplace? Get in touch with us today.
