By challenging AI chatbots to judge thousands of moral dilemmas posted in a popular Reddit forum, UC Berkeley researchers revealed that each platform follows its own set of ethics.
Right? Why the hell would anyone think this? There are a lot of articles lately like “is AI alive?” Please, it’s 2025 and it can hardly do autocomplete correctly.
No, they do what they’ve been programmed to do because they’re inanimate
A better headline would be that they analyzed the embedded morals in the training data… but that would be far less clickbait…
They’ve created a dilemma for themselves cos I won’t click on anything with a clickbait title