Demystifying Machine Learning for Gender Equity and Social Inclusion (GESI) Advisors

By Eunice Musyoka

Machine learning (ML) has the potential to bring significant benefits to Gender Equity and Social Inclusion (GESI) programmes by enabling data-driven decision making and uncovering hidden patterns and trends. However, machine learning can also perpetuate existing biases and inequalities if not designed and implemented with care. In this blog, we will demystify machine learning for GESI advisors and provide some tips on how to ensure that machine learning models are inclusive and equitable.

What Is Machine Learning?

Machine learning is a subfield of artificial intelligence that involves training computer algorithms to learn from data and make predictions or decisions without being explicitly programmed. Machine learning algorithms can be trained on a variety of data types, including images, text, and numerical data.
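To make this concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of what "learning from data" looks like: a model is fitted to labelled example records and then makes predictions on records it has never seen. The feature names and figures are invented for illustration only.

```python
# A minimal sketch of "learning from data". All data is synthetic and
# the feature names are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical programme records: [household_size, distance_to_service_km]
X = rng.normal(loc=[4, 10], scale=[1.5, 5], size=(200, 2))
# Hypothetical label: 1 = household enrolled in the programme, 0 = did not
y = (X[:, 1] < 10).astype(int)  # a toy rule standing in for real outcomes

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)        # "training"
print("Accuracy on unseen records:", model.score(X_test, y_test))
print("Prediction for a new household:", model.predict([[5, 3]]))
```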

How Can Machine Learning Help GESI Programmes?

Machine learning has the potential to enhance the effectiveness of GESI programmes by:

  1. Identifying patterns and trends in large datasets that may be difficult to discern using traditional statistical methods.
  2. Providing a tool for predicting outcomes and identifying areas where intervention may be most effective (see the sketch after this list).
  3. Automating the analysis of data, allowing for more efficient and timely decision making.
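As an illustration of point 2, the hedged sketch below ranks hypothetical districts by a model's predicted risk of a poor outcome so that follow-up can be prioritised. The district names, indicator columns, and outcome values are placeholders, not real programme data.

```python
# A hedged sketch of using predicted probabilities to rank programme areas
# where intervention may be most effective. All names and figures are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

# Hypothetical district-level indicators and a past outcome of interest
data = pd.DataFrame({
    "district": ["A", "B", "C", "D", "E", "F"],
    "girls_enrolment_rate": [0.52, 0.71, 0.48, 0.90, 0.63, 0.55],
    "female_headed_households": [0.31, 0.22, 0.40, 0.12, 0.28, 0.35],
    "dropout_last_year": [1, 0, 1, 0, 0, 1],   # 1 = high dropout observed
})

features = ["girls_enrolment_rate", "female_headed_households"]
model = RandomForestClassifier(random_state=0)
model.fit(data[features], data["dropout_last_year"])

# Rank districts by predicted risk so advisors can prioritise follow-up
data["predicted_risk"] = model.predict_proba(data[features])[:, 1]
print(data.sort_values("predicted_risk", ascending=False)[["district", "predicted_risk"]])
```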

However, there are also risks associated with using machine learning in GESI programmes. One major concern is the potential for bias in the data or in the algorithms used to analyse it. Other risks include poor problem-solution alignment, excessive time or monetary cost, and unexpected behaviours and unintended consequences.

It is important to identify, assess, and manage common machine learning risks. These include weak or biased data; insufficient or unrepresentative training data; a lack of strategy, experience, and appropriate skill sets; security vulnerabilities; regulatory challenges; and third-party risks. By adopting strong protocols, putting the right talent, strategy, and skills in place, and doing your own due diligence, you can identify machine learning risks and position yourself to take full advantage of the technology.

How Can Machine Learning Perpetuate Bias?

Machine learning algorithms can perpetuate bias in a number of ways:

  1. If the training data used to develop the algorithm is biased, the algorithm may learn to reproduce those biases. For example, if a machine learning algorithm is trained on historical data that contains gender biases, it may learn to perpetuate those biases in its predictions (see the sketch after this list).
  2. If the features used to train the algorithm are themselves biased, the algorithm may learn to reproduce those biases (Pratt n.d.).
  3. If the algorithm is not tested for bias, it may reproduce biased results. For example, if a machine learning algorithm is used to screen job applicants, it may inadvertently discriminate against certain groups.
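The toy simulation below illustrates the first point: a model trained on hypothetical hiring records in which one group was historically favoured simply reproduces that favouritism in its recommendations. The group labels, features, and the degree of bias are invented and exaggerated for clarity.

```python
# A toy illustration: if the historical labels are biased, the model
# reproduces that bias. Everything here is synthetic and exaggerated.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000

gender = rng.integers(0, 2, n)                # 0 = women, 1 = men (hypothetical)
skill = rng.normal(0, 1, n)                   # equally distributed in both groups

# Historical hiring decisions: skill matters, but men were favoured outright
hired = ((skill + 1.0 * gender + rng.normal(0, 0.5, n)) > 0.8).astype(int)

# Train on the biased history, *including* gender as a feature
X = np.column_stack([skill, gender])
model = LogisticRegression().fit(X, hired)
pred = model.predict(X)

for g, name in [(0, "women"), (1, "men")]:
    rate = pred[gender == g].mean()
    print(f"Predicted hiring rate for {name}: {rate:.2f}")
# The model recommends men at a much higher rate, mirroring the biased labels.
```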

Therefore, it is important to test algorithms for bias: if there are inherent biases in the data used to feed a machine learning algorithm, the result could be systems that are untrustworthy and potentially harmful. Biased AI systems can cause problems, especially when used in automated decision-making, autonomous operation, or facial recognition software that makes predictions or renders judgements on individuals (World Economic Forum 2021).

How to Address Bias 

Some of the ways to address bias in ML are:

  1. Identify potential sources of bias, and set guidelines, rules, and procedures for eliminating it (DeBrusk 2018).
  2. Identify accurate, representative data, and document and share how data is selected and cleansed.
  3. Evaluate candidate models and select the least-biased one alongside performance (see the sketch after this list).
  4. Monitor and review models in operation.
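As a rough sketch of step 3, the example below compares two candidate models on both accuracy and a simple bias measure (the gap in positive-prediction rates between two groups), so that the least-biased of the well-performing models can be chosen. The synthetic data and the choice of metric are assumptions made for illustration.

```python
# Comparing candidate models on accuracy and a simple group-gap metric.
# Data, groups, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 3000
group = rng.integers(0, 2, n)                          # hypothetical demographic group
x1 = rng.normal(0, 1, n)
x2 = rng.normal(0, 1, n) + 0.6 * group                 # a feature correlated with group
y = ((x1 + 0.4 * group + rng.normal(0, 0.7, n)) > 0.5).astype(int)

X = np.column_stack([x1, x2])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(X, y, group, random_state=1)

def disparity(preds, groups):
    """Absolute gap in positive-prediction rates between the two groups."""
    return abs(preds[groups == 0].mean() - preds[groups == 1].mean())

candidates = {
    "logistic_regression": LogisticRegression(),
    "decision_tree": DecisionTreeClassifier(max_depth=4, random_state=1),
}

for name, model in candidates.items():
    preds = model.fit(X_tr, y_tr).predict(X_te)
    acc = (preds == y_te).mean()
    gap = disparity(preds, g_te)
    print(f"{name}: accuracy={acc:.3f}, group gap={gap:.3f}")
# Among models with acceptable accuracy, prefer the one with the smaller
# group gap, and document that trade-off.
```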

How Can GESI Advisors Ensure That Machine Learning Models are Inclusive and Equitable?

To ensure that machine learning models are inclusive and equitable, GESI advisors can take the following steps:

  1. Ensure that the training data used to develop the algorithm is diverse and representative. This may involve collecting additional data to ensure that underrepresented groups are adequately represented in the training data (Stoltzfus 2023).
  2. Collect more diverse and representative training data. This is often touted as a remedy for the disparate performance of machine learning predictors across subpopulations, although a precise framework for understanding how dataset properties like diversity affect learning outcomes is largely lacking (CIPD 2019). Diverse perspectives will be crucial to support ethical and transparent AI, to make sure the data used to train AI is diverse, and to ensure we have a handle on how these tools will change our lives (AWS Public Sector Blog Team 2019).
  3. Test the algorithm for bias using techniques such as fairness testing and sensitivity analysis. Fairness testing involves evaluating the algorithm’s performance across different groups to ensure that it is not systematically biased against any particular group (see the sketch after this list). Sensitivity analysis involves evaluating the algorithm’s performance under different assumptions and scenarios to ensure that it is robust to changes in the data or the model (Chen et al. 2022).
  4. Monitor the algorithm’s performance over time and make adjustments as needed to ensure that it remains inclusive and equitable. Machine learning algorithms may need to be retrained periodically to ensure that they continue to produce accurate and unbiased results.
  5. Understand the context in which the machine learning model will be used. This includes understanding the social, cultural, and economic factors that may impact the model’s performance.
  6. Identify potential biases in the data used to train the machine learning model. This includes identifying any gaps or biases in the data that may impact the model’s performance.
  7. Evaluate model performance. GESI advisors should evaluate the performance of the machine learning model to ensure that it is inclusive and equitable. This includes evaluating the model’s accuracy across different demographic groups.
  8. Mitigate biases. GESI advisors should work to mitigate any biases identified in the data or model performance. This may include adjusting the data used to train the model or adjusting the model itself.
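The sketch below illustrates the kind of fairness testing described in steps 3 and 7: a trained model's accuracy and positive-prediction rate are reported separately for each demographic group. The groups, features, and outcome are synthetic placeholders, not real programme data.

```python
# Disaggregated evaluation: report accuracy and positive-prediction rate
# per group. Groups, features, and labels are hypothetical.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 4000
group = rng.choice(["group_a", "group_b"], size=n)
x = rng.normal(0, 1, (n, 3))
y = ((x[:, 0] + 0.5 * (group == "group_a") + rng.normal(0, 0.8, n)) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(x, y, group, random_state=7)
model = LogisticRegression().fit(X_tr, y_tr)
preds = model.predict(X_te)

results = pd.DataFrame({"group": g_te, "true": y_te, "pred": preds})
for name, d in results.groupby("group"):
    acc = (d["true"] == d["pred"]).mean()
    rate = d["pred"].mean()
    print(f"{name}: accuracy={acc:.3f}, positive_rate={rate:.3f}, n={len(d)}")
# Large gaps in accuracy or positive rate between groups are a signal to
# revisit the data, the features, or the model before deployment.
```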

Machine learning has the potential to bring significant benefits to GESI programmes by enabling data-driven decision making and uncovering hidden patterns and trends. However, it is important to ensure that machine learning models are inclusive and equitable by taking steps such as ensuring diverse and representative training data, testing for bias, and monitoring performance over time. By doing so, GESI advisors can harness the power of machine learning to promote gender equity and social inclusion.

Additionally, GESI advisors can work to promote inclusion and fairness by using algorithmic techniques such as transfer learning and learning-to-learn (meta-learning). These techniques can help lower the barrier to entry by enabling organisations to build custom models with smaller datasets than would typically be required.
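The rough sketch below conveys the transfer-learning idea in a simplified, feature-based form: a model trained on a large, related "source" dataset supplies a learned score that helps a second model fit a much smaller "target" dataset. Real transfer learning typically reuses layers of a pretrained neural network; the synthetic data here is purely illustrative.

```python
# A conceptual, feature-based sketch of transfer learning.
# All data is synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Large source dataset (e.g. a related programme with plentiful records)
X_src = rng.normal(0, 1, (5000, 4))
y_src = (X_src @ np.array([1.0, -0.5, 0.8, 0.0]) + rng.normal(0, 0.5, 5000) > 0).astype(int)
source_model = LogisticRegression().fit(X_src, y_src)

# Small target dataset (e.g. a new context with few labelled examples)
X_tgt = rng.normal(0, 1, (60, 4))
y_tgt = (X_tgt @ np.array([0.9, -0.4, 0.7, 0.2]) + rng.normal(0, 0.5, 60) > 0).astype(int)

# Reuse the source model's score as an extra input feature for the target model
transfer_feature = source_model.decision_function(X_tgt).reshape(-1, 1)
X_tgt_aug = np.hstack([X_tgt, transfer_feature])
target_model = LogisticRegression().fit(X_tgt_aug, y_tgt)
print("Target-task training accuracy:", target_model.score(X_tgt_aug, y_tgt))
```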

It is important to note that fairness in machine learning is an exciting and vibrant area of research and discussion among academics, practitioners, and the broader public. The goal is to understand and prevent unjust or prejudicial treatment of people on the basis of race, income, sexual orientation, religion, gender, and other characteristics of historically marginalised groups when it manifests in algorithmic systems or algorithmically aided decision-making.
