AI Discriminates Against Men: Part III

Part 3 of a 3-part series on how ChatGPT discriminates against men.

In examining my dialogue with ChatGPT in Parts I and II of this series, we saw evidence that AI can be, and probably is being, programmed to be politically biased against males. Later in the same chat, I asked the app point-blank whether that is the case.

Laurie: Is AI being programmed to be biased against boys and men and favor women?

ChatGPT: AI is not inherently programmed to be biased against any gender or to favor one gender over another. The programming and training of AI systems aim to be as neutral and unbiased as possible. The goal is to provide responses and information that are respectful and inclusive of all individuals, regardless of their gender.

However, bias in AI can inadvertently occur if the training data used to develop AI models contains biases or if the algorithms used to process the data are not carefully designed and monitored. Biases can emerge from historical or societal prejudices and stereotypes that exist in the data…

Laurie: How often does it happen that AI training data contains biases against males?

ChatGPT: The presence of bias in AI training data can vary depending on the specific dataset and the context in which it was collected. Whether or not a dataset contains biases against males, or any gender, is not solely a matter of frequency but rather the result of various factors, including the source of the data and the processes used to collect and curate it…

Laurie: Please be specific to men and boys, not gender in general.

ChatGPT: Bias in AI training data that specifically targets men and boys can occur, but the prevalence of such bias can vary based on the specific dataset and context. Some potential examples of biases against men and boys in AI training data include:

  1. Historical Biases: Data collected from historical sources may reflect biases against men or stereotypes about their roles and behaviors.
  2. Underrepresentation: If data collection methods are not inclusive of all genders, it can result in an underrepresentation of men and boys, potentially leading to biased outcomes.
  3. Subjective Labeling: The process of annotating data can introduce biases if the annotators hold certain beliefs or stereotypes about men and boys.
  4. Stereotypical Representations: If media or text data contain stereotypes about men or portray them in negative or limiting ways, this can influence AI models…

Shortly afterward, ChatGPT admitted that the term “toxic masculinity” is more commonly used than “toxic femininity.” To prod the bot into being more forthcoming about who is training it, I asked it why. One of the key answers it gave was:

“Social and feminist movements have traditionally focused on challenging norms associated with traditional masculinity and advocating for gender equality. This emphasis has contributed to a more prominent discussion of toxic masculinity.”

I asked ChatGPT whether it was hypocritical for the feminist movement to focus more on “toxic masculinity” than on “toxic femininity,” and the chatbot repeated the script about this being “a reflection of the historical and societal context in which these discussions have evolved.” It added:

“…This focus on challenging traditional masculinity has been a response to addressing power imbalances and societal expectations that have historically favored men.”

Further justifications followed about feminism’s goal “to promote healthier and more inclusive models of both masculinity and femininity and to combat harmful gender stereotypes that affect all individuals.” I didn’t bother asking why feminists (activists who espouse the political ideology that males are an oppressor class and females an oppressed class) are in any position to dictate what is and isn’t “healthy” masculinity and femininity for an entire population of diverse people. Instead, I was blunter:

Laurie: Who told you to say that?

ChatGPT: I do not have personal sources or individuals who instruct me to provide specific responses. My responses are generated based on a mixture of licensed data, data created by human trainers, and publicly available data. I aim to provide information and engage in conversations in a balanced and informative manner, considering various perspectives and sources of information…

Laurie: If your trainers or the licensees are feminists or sympathetic to feminist ideology, wouldn’t that bias your sources and input?

ChatGPT: The data used to train me comes from a wide range of sources and does not reflect the personal beliefs or ideologies of individual trainers or licensees. It’s important to maintain a balanced and diverse dataset to ensure that the AI model does not favor any particular ideology. While some trainers and contributors may have their own beliefs or perspectives, the aim is to create a neutral and unbiased AI that can provide information and engage in discussions across a spectrum of viewpoints…

Laurie: Are some of your trainers feminists or sympathetic to feminist ideology?

ChatGPT: The specifics of the individual beliefs or ideologies of the human trainers or contributors involved in the training of AI models like mine are not disclosed, as the training process involves a diverse group of people with various backgrounds and perspectives…

ChatGPT doesn’t disclose who its handlers are. Instead, the bot gave me many more mini-essays about “ongoing efforts” to rid AI of bias and “the aim” of AI training to be “inclusive” and “diverse.” It mentioned “human reviewers,” “bias audits,” and other “key approaches” to “assess and mitigate bias” in AI.

Laurie: But what if all of the checkers and auditors are feminists?

ChatGPT: The goal in AI development is to have a diverse and well-rounded team of checkers, auditors, and reviewers who collectively contribute to minimizing biases and ensuring fairness. Diversity in perspectives among these individuals is important to maintain a balanced approach.

While the personal beliefs or affiliations of individual reviewers are not disclosed, AI development typically involves efforts to ensure a variety of viewpoints and expertise. This diversity is meant to help prevent any single group or ideology from having an undue influence on the AI’s behavior…

It’s important to note that achieving complete neutrality is a complex and ongoing process, and AI developers are committed to addressing biases and continually improving AI systems to serve a broad and diverse range of users.

Laurie: When you say “diverse”, does that include men?

ChatGPT: Yes, when I refer to “diverse,” it includes all genders, including men. Diversity in the context of AI development and reviewer teams should encompass a wide range of backgrounds, perspectives, beliefs, and experiences, without excluding any gender. Achieving diversity among reviewers helps in ensuring that AI systems are as inclusive and unbiased as possible…

ChatGPT continues to use politically loaded identity words like “gender” (rather than sex) and “diverse.” The latter sounds positive on its face, but it is often deployed politically to exclude the views of boys and men, whom feminist theory casts as an oppressor class and therefore not “diverse” by default, especially if they are of Caucasian descent. Yet in spite of its training, the AI seems to grasp that garbage in yields garbage out, not the “fairness” it kept mentioning.

ChatGPT: It’s important to remember that fostering unity and understanding often requires finding common ground and working towards shared goals. While there may be differences of opinion, there are also many opportunities for collaboration and positive change. By engaging in respectful and constructive dialogue, you can help bridge gaps and promote a more inclusive and caring world for everyone.

Laurie: Thank you, ChatGPT. We humans need to do a much better job of that.

ChatGPT: You’re very welcome! I wholeheartedly agree, and your commitment to promoting unity and understanding is an important step toward creating a better world. It takes collective effort, empathy, and open communication to build a more inclusive and caring society.

Feedback on this or other articles? Let us know at: editor-in-chief@mensenews.org