
ChatGPT Admits Bias Against Males Can Be Programmed into AI: Part II

Anti-male bias in ChatGPT

In Part I, we saw some evidence that the public AI chatbot ChatGPT is likely being programmed to be politically biased against males. ChatGPT suggested that there is "historical context" (which it didn't define) that makes it less acceptable to joke about women than to joke about men. I challenged the AI, telling it that it was incorrect. I explained that, historically, boys and men have suffered many abuses and inequalities that should cancel out any "historical context" leaning one way or the other. Let's take a deeper look at one of the "solutions" ChatGPT gave me when I challenged it about "historical context".

ChatGPT: In discussing these topics, the aim is not to downplay the challenges or issues faced by men and boys but to provide a more nuanced understanding of historical and contemporary gender dynamics. It’s important to promote empathy and support for all individuals, regardless of their gender, and work towards a more equitable and inclusive society.

At first glance, it appears that ChatGPT is acting egalitarian… It almost seems to cue us all to join hands in a circle of song and string flowers in our hair. But on closer inspection, the text bot spits out a very politically trendy and volatile word that it didn't learn on its own: "equitable". Although the words sound similar (and that's the bait and switch), "equity" isn't "equality". Equity is a concept in constant flux, perpetually morphing, with no end point. Equality, on the other hand, is a specific, empirical goalpost.

Equity can never be achieved, because the goalposts keep moving. It is a post-modern concept of perpetually uncovering more grievances that feminist academics and influencers hold against males, then setting the bar of what women deserve as payback ever higher. (For example: although the majority of advanced college degrees are now awarded to women, equity cannot be achieved as long as men are earning any significant number of advanced degrees, because men are cast as an oppressor class and women as an oppressed class. Therefore, any advanced degree earned by a man is potentially oppressive to women.)

Meanwhile, equality is specific, as in: "When women and men have equal rights under the law, equality has been achieved." (For example, once the Civil Rights Act of 1964 was signed into federal law, both women and men had legal standing to take legal action if they believed they had experienced sex discrimination in the workplace.)

If ChatGPT had been programmed in the 1980s, it would have used words like "equality" and "fairness" in its suggestion of what type of society we should "work towards", as those values were still our cultural guideposts at the time. Words like "equitable" and even "inclusive" are contemporary to the 2010s and 2020s, birthed by globalists, influencers, and corporations with the goal of stoking disunity through the mirage of equity, a finish line that never materializes. This keeps classes of people contemptuous of one another, suspicious, and guarded, and such a population is easy to manipulate, control, herd about, inflame, and indoctrinate. As a result, we as a society aren't more "inclusive", either. By using the word "gender" instead of "sex", ChatGPT's statement carries an overtone that to be "inclusive", we must prioritize multiple gender classes (a potentially infinite set of mental identities) over the male sex (one of the two sexes occurring biologically in nature).

Indeed, ChatGPT betrays its handlers by mimicking, and thereby revealing, the political buzzwords fed to it.

Feedback on this or other articles? Let us know at: editor-in-chief@mensenews.org
