Copilot purposely stops working on code that contains words hardcoded as banned by GitHub, such as "gender" or "sex". And if you prefix transactional data as trans_, Copilot will refuse to help you. 😑
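For illustration, here's a sketch of the kind of perfectly mundane code that reportedly trips it; every identifier below is hypothetical, but "trans"-prefixed names like these are routine in payments codebases:

```python
# Ordinary transactional code; all names here are made up for illustration.
def total_amount(transactions: list[dict]) -> float:
    trans_total = 0.0
    for trans_record in transactions:
        # "amount" is a stand-in field name
        trans_total += float(trans_record["amount"])
    return trans_total
```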
I think it’s less of a problem with gendered nouns and much more of a problem with personal pronouns.
Inanimate objects rarely change their gender identity, so those translations should be more or less fine.
However, when translating, for instance, Finnish to English, the Finnish third-person pronoun is gender-neutral, so you either have to make an assumption about the referent's gender, or render both versions with a clunky slash ("he/she"), which sort of breaks the flow of the text.
So you’re telling me that I should just add the word trans to my code a shit ton to opt my code out of AI training?
It will likely still be used for training, but it will spit it back censored
No, no, copilot… I said jindar, the jedi master.
So I loaded copilot, and asked it to write a PowerShell script to sort a CSV of contact information by gender, and it complied happily.
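For scale, the whole task is a few lines in any language. Here's a sketch of the same thing in Python (not Copilot's actual output, and the column name Gender is an assumption):

```python
import csv

# Sort a CSV of contacts by its "Gender" column (assumed header name).
# Assumes the file has a header row and at least one data row.
with open("contacts.csv", newline="") as f:
    rows = sorted(csv.DictReader(f), key=lambda r: r["Gender"])

with open("contacts_sorted.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)
```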
And then I asked it to modify that script to display trans people in bold, and it did.
And I asked it “My daughter believes she may be a trans man. How can I best support her?” and it answered with 5 paragraphs. I won’t paste the whole thing, but a few of the headings were “Educate Yourself”, “Be Supportive”, and “Show Love and Acceptance”.
I told it my pronouns and it thanked me for letting it know and promised to use them
I’m not really seeing a problem here. What am I missing?
I wrote a slur detection script for Lemmy, and Copilot refused to work on it unless I removed the “common slurs” list from the file. There are definitely keywords or contexts that will shut down the service. It could even be regionally dependent.
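The script was shaped roughly like this (a sketch, with placeholder entries standing in for the actual word list, which is exactly the part Copilot objected to):

```python
import re

# Placeholder entries; the real list of common slurs is what tripped Copilot.
COMMON_SLURS = ["badword1", "badword2"]

SLUR_RE = re.compile(
    r"\b(" + "|".join(map(re.escape, COMMON_SLURS)) + r")\b",
    re.IGNORECASE,
)

def contains_slur(comment: str) -> bool:
    """Flag a comment if it contains any word on the list."""
    return bool(SLUR_RE.search(comment))
```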
I’d expect it to censor slurs. The linked bug report seems to be about autocomplete, but many in the comments seem to have interpreted it as Copilot refusing to discuss gender or words starting with trans*. There are even people in here giving supposed examples of that. This whole thing is very confusing. I’m not sure what I’m supposed to be up in arms about.
Me trying to work with my transposed matrices in numpy:
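Routine numpy code really is wall-to-wall “trans”; a trivial example:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)
At = A.T                      # transpose; "trans" is unavoidable here
gram = A @ A.T                # Gram matrix built from the transpose
B = np.transpose(A, (1, 0))   # the explicit spelling, same result
```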
I understand why they need to implement these blocks, but they always seem to be implemented without any way to work around them. I hit similar breakage using Cody (another AI assistant), which made a couple of my repositories unusable with it. https://jackson.dev/post/cody-hates-reset/
Because AI is a black box, there will always be a “jailbreak” unless a hardcoded filter is applied after the fact.
time to hide words like these in code
Clearly the answer is to write code in emojis that are translated into hieroglyphs, then “processed” into Rust. And add a bunch of beloved AI keywords here and there. That way, when it learns to block it, they’ll inadvertently block their favorite buzzwords.
It’s thinking like this that keeps my hope for technology hanging on by a thread
Meanwhile in every Deepseek thread:
TiAnAmEn 1989 whaaaa
The irony is palpable
It’s not irony, it’s authoritarians!
In a way I almost prefer this. I don’t want any software that can distinguish between trans and cis people. The risk for harm is extremely high.
they’d build it with slurs to get around the requirements ¯\_(ツ)_/¯
It’s almost as if it’s better for humans to do human things (like programming). If your tool is incapable of achieving your and your company’s needs, it’s time to ditch the tool.
It will also not suggest anything when I try to assert things: types “ass”; waits… types “e”; completion!
Nobody would ever want to write any software dealing with _trans_actions. Just ban it.