AI systems are supposed to facilitate inclusion, diversity and equal treatment.
Addressing gender bias in artificial intelligence (AI) applications is crucial to the goal of bringing about positive social change through technology. With AI becoming ever more widespread, gender diversity in platform, data and AI governance can offer solutions to gender inequities, protect and empower communities facing gender-related violence, and support diversity in the technology industry.
Some crucial issues for gender equality in AI that we need to look at are:
The right to the internet:
Women have a very low share of advanced technology jobs, which involve the non-routine, cognitive tasks in demand in the digital economy. This is partly the result of a lack of access to technology, a major problem in India. Data suggests that only 46% of Indian women between the ages of 15 and 65 own a mobile phone, compared to 56% of Indian men.
One way for Indian law to further the rights of women in India is by addressing the principle of the Right to the Internet. In 2020, the Kerala High Court recognized that mobile phones, and internet access through them, are part and parcel of day-to-day life and an essential part of the infrastructure of freedom of speech and expression.
Role of labour:
Women are at a significantly higher risk of displacement than men due to job automation: over 57% of the jobs set to be displaced by digital automation between now and 2026 belong to women, especially the mid-level, routine, cognitive jobs in which women dominate. For instance, of women working in the gaming industry in the UK, 45% felt that their gender was a limiting factor in their career progression and 33% said they had faced harassment or bullying because of their gender. This calls for an urgent relook at practices for including women in technology, from hiring at the executive level to the hiring of non-male programmers.
This emphasises the need for higher participation of women and gender experts in formulating principles at the foundational level. There also needs to be an improvement in the representation of women in technical roles globally and in tech companies’ boardrooms. Companies thus need to create robust gender-inclusive artificial intelligence principles, guidelines and codes of ethics to enable the same.
Some common principles companies include in their policies are terms such as transparency, fairness and responsibility. Taking the example of the term ‘fairness’, a study reveals that to date there is no unified definition of algorithmic fairness. The inclusion of more feminist principles, such as access to the internet, languages, information to make informed decisions, and privacy, is still not common.
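The point that there is no unified definition of algorithmic fairness can be made concrete. The sketch below uses entirely hypothetical toy data (invented loan-approval outcomes, not drawn from any study) to compute two common fairness metrics, demographic parity and equal opportunity, on the same set of predictions, and shows that a model can satisfy one definition while failing the other.

```python
# Two common (and often conflicting) notions of algorithmic fairness,
# computed on hypothetical toy loan-approval data. All values below
# are invented for illustration.

def demographic_parity_gap(groups, predictions):
    """Gap in positive-prediction rate between groups."""
    rates = {}
    for g in set(groups):
        preds = [p for grp, p in zip(groups, predictions) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return max(rates.values()) - min(rates.values())

def equal_opportunity_gap(groups, predictions, labels):
    """Gap in true-positive rate (among truly qualified applicants)."""
    rates = {}
    for g in set(groups):
        tp = sum(1 for grp, p, y in zip(groups, predictions, labels)
                 if grp == g and p == 1 and y == 1)
        pos = sum(1 for grp, y in zip(groups, labels) if grp == g and y == 1)
        rates[g] = tp / pos
    return max(rates.values()) - min(rates.values())

groups      = ["F", "F", "F", "F", "M", "M", "M", "M"]
labels      = [1, 1, 0, 0, 1, 1, 1, 0]   # creditworthy in hindsight
predictions = [1, 1, 0, 0, 1, 1, 1, 0]   # model's approval decisions

print(demographic_parity_gap(groups, predictions))         # 0.25: approval rates differ
print(equal_opportunity_gap(groups, predictions, labels))  # 0.0: qualified applicants treated alike
```

Here every qualified applicant is approved regardless of gender (equal opportunity holds), yet approval rates still differ between groups because more men in the toy data happened to be qualified, so demographic parity fails. Which outcome counts as "fair" depends on which definition a company's policy adopts.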
Data sets, the starting point:
Data sets may not keep gender equality in mind, even when they are “apparently” women-centric. A data set is a collection of data that is treated as a single unit by the computer that processes it; separate pieces of data within it are used to train an algorithm to predict patterns across the whole set. Data sets are the first step in creating any AI model and hence crucial to ensuring the model is without bias.
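To see how a data set's skew passes straight into a model's predictions, consider a minimal sketch. The "corpus" below is entirely invented toy data, and the "model" is just frequency counting, not any real system, but the mechanism is the same one the paragraph above describes: the model can only reproduce the patterns its training data contains.

```python
from collections import Counter, defaultdict

# Toy training data: (occupation, pronoun) pairs with a deliberate
# gender skew, standing in for the imbalances found in real corpora.
corpus = [
    ("doctor", "he"), ("doctor", "he"), ("doctor", "he"), ("doctor", "she"),
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
]

# "Training": count how often each pronoun co-occurs with each occupation.
counts = defaultdict(Counter)
for occupation, pronoun in corpus:
    counts[occupation][pronoun] += 1

def predict_pronoun(occupation):
    # The model simply returns the most frequent pairing, so it
    # reproduces whatever imbalance the data set contains.
    return counts[occupation].most_common(1)[0][0]

print(predict_pronoun("doctor"))  # "he"  — the 3:1 skew wins
print(predict_pronoun("nurse"))   # "she" — the mirror-image skew
```

No one wrote a biased rule here; the bias lives entirely in the data, which is why auditing data sets before training is the first line of defence.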
Women are a multifaceted and heterogeneous group and face diverse realities. These include women living in rural and remote areas, indigenous women, women from ethnic or religious minorities, women living with disabilities, HIV/AIDS, etc. The gender equity of AI-generated information depends on patterns, predictions and recommendations that reflect the accuracy, universality and reliability of the data used, as well as the inherent assumptions and biases of the developers of the algorithms that use this data.
How does gender bias creep in?
For instance, “hers” is not recognized as a pronoun by some of the most widely used technologies for Natural Language Processing (NLP), including Amazon Comprehend, the Google Natural Language API and the Stanford Parser. Another shocking example came to light in 2019, when an Apple application was found to offer smaller credit lines to women than to men with similar credit scores. The company stated its algorithm was gender-blind but admitted that the algorithms used to set limits were inherently biased against women.
Preventing such gender biases in AI software applications calls for better corporate governance. This includes diversity in hiring and retention practices, and enabling a work culture where gender equality principles are explicit and accountability is prioritised.
A computer is only as good as the people behind it. That is a fundamental aspect that needs to be kept in mind while training and implementing AI solutions for better gender equality. Looking inward, during the pandemic, India failed to address the needs of women or the issue of their access to the internet and digital services.