Ever since AI barged onto the scene, we human beings have made insane demands of it. We want it to draw the things we envision, and we want it to write all manner of text with perfect grammar, perfect punctuation and absolutely no mistakes. But AI is still only a child learning from us. We feed it all sorts of information from the world around us, and it understands only that as the ‘correct’ information.
For example, if we fed it data showing cats with three legs as normal, or women staying at home as normal, it would understand only that as the correct image of a cat or a woman. AI cannot look around and see that cats do have four legs, and it cannot understand that women can choose to work or stay at home as the situation demands. There is no logical reasoning in AI as yet. It only regurgitates the information fed to it, drawing or writing whatever that information suggests. It is not yet able to think before it draws or writes. And therein creeps the problem of ‘fairness’.
Google recently found itself embroiled in several controversies over the image generation feature of its AI platform ‘Gemini’. In one controversy, it depicted German Second World War soldiers as people of color. Google’s CEO apologized for the biased and offensive images.
In yet another controversy, it generated images of the Pope and the founding fathers of the US with different ethnicities.
As a project that is still undergoing changes, AI is prone to gender bias, racial bias and many other discriminatory problems.
Google apologized for the errors that caused the bias and immediately took the image generation feature offline to make further changes and tune it. You can read Google’s entire apology post here.
It is understandable that image generation and content generation by AI are still works in progress. But how do we ensure AI fairness in the real world? Here are some approaches that models are adopting:
- AI fairness is probably very difficult to achieve. What is offensive to one audience may not be offensive to another. Hence, fairness goals can be set, and the AI model can work towards them.
- Next, AI fairness can be improved by training models on data drawn from diverse populations, collected without bias and with good ethics.
- We should also evaluate the data regularly, work on detecting any inequalities, and remove or correct them early on.
- We can work with multiple stakeholders and people from different professions to detect unfairness and remove it. Multiple pairs of eyes on a data set give us more insight and help us remove any irregularities in the data.
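To make the idea of "detecting inequalities in the data" a little more concrete, here is a minimal sketch of one common audit: comparing how often each group in a dataset receives a positive outcome (sometimes called a demographic-parity check). The data, group names and threshold below are all hypothetical, purely for illustration.

```python
# Hypothetical audit: does a dataset over-represent one group's
# positive outcomes? A simple demographic-parity style check.

def selection_rates(records):
    """Rate of positive outcomes per group, from (group, outcome) pairs."""
    totals, positives = {}, {}
    for group, outcome in records:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if outcome else 0)
    return {g: positives[g] / totals[g] for g in totals}

# Toy data: (group, received_positive_label)
data = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

rates = selection_rates(data)                 # {'A': 0.75, 'B': 0.25}
gap = max(rates.values()) - min(rates.values())
print(rates, gap)                             # a gap of 0.5 flags this data for review
```

A real fairness review would of course look at many more metrics and involve the diverse stakeholders mentioned above, but even a tiny check like this can surface skewed data early.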
We looked at ‘AI fairness’ and how it can be achieved in this post… let us move on to more exciting AI concepts next…
This post is for BlogchatterA2Z2024!