I was going to write about the latest court decision, in which a Reagan-appointed judge ruled that the Trump administration's anti-DEI orders are racist and homophobic. True story.
On Instagram I wrote a post to a woman who had given an analysis of a musician. I asked her to give an analysis of The Pointer Sisters because I didn't think the Black community fully appreciated their contribution. I was told my post would not be sent because I used the words "Black community," and that this was or could be considered offensive.
AI large language models are probabilistic, and they incorrectly "correct" the words human beings choose to express their experiences, feelings, and thoughts. We have all heard AI-generated voices mispronounce written words in ways that change their meaning, but our brains patch over these failures and move on without notice, either because we aren't following along with the text or because we can't be bothered to record the failure. Think about how poorly these models handle grammar. I've been told that a post was inappropriate. I took the post and rewrote it without changing its intent, and it passed the model's criteria. I had previously chosen words that, when in proximity to other words, were deemed probabilistically problematic. A little shift here, a synonym there, a change from passive to active voice, and suddenly the AI isn't aware of my problematic speech. They've trained the AI algorithm on specific writings and authors. Which proves your point: it is censorship.
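To make the "proximity" idea concrete: a minimal toy sketch of how word-pair flagging might behave, and why a synonym swap slips past it. This is not any platform's actual moderation system; the flagged pair, the window size, and the example sentences are all invented for illustration.

```python
# Toy sketch of proximity-based flagging. The flagged word pair and
# window size are hypothetical, chosen only to illustrate the idea.
FLAGGED_PAIRS = {("black", "community")}  # invented example pair
WINDOW = 3  # words within this distance count as "in proximity"

def flagged(text: str) -> bool:
    """Return True if any flagged word pair occurs within WINDOW words."""
    words = [w.strip(".,!?").lower() for w in text.split()]
    for i, a in enumerate(words):
        for b in words[i + 1 : i + 1 + WINDOW]:
            if (a, b) in FLAGGED_PAIRS or (b, a) in FLAGGED_PAIRS:
                return True
    return False

original = "I don't think the Black community fully appreciated them."
rewritten = "I don't think Black listeners fully appreciated them."

print(flagged(original))   # True: the pair occurs within the window
print(flagged(rewritten))  # False: a synonym swap evades the filter
```

The point of the sketch is that nothing about the *intent* of the sentence changed, only the surface tokens, which is exactly the rewrite trick described above.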
Wow. Well, that's a little horrifying.
I see you big brother AI (Or do I?).