

Belief is a tool for achieving effects; it is not an end in itself. -Peter J. Carroll
Belief is a tool for achieving effects; it is not an end in itself. -Peter J. Carroll
The arches of our feet stretch unevenly as we age. For some people, this causes one foot to pronate more than the other, which leads to a functional leg length discrepancy, which causes a knee to turn in, the hips to tilt, the spine to develop a functional scoliosis, one shoulder to drop, causing neck pain, etc. It’s called the kinetic chain, and unfortunately it’s been hovered up as a chiropractic talking point. But it can often be corrected with custom shoe inserts that can also help with balance. On the flip side, you then become reliant on those things and your back is weaker without em. So ya know, nothing’s simple, I guess.
You can also do this by blowing out a match and putting it under an upturned glass shortly before microwaving it. Turns the carbon vapor into plasma, or some such. Though the time I tried it, it escaped the glass and melted the microwave’s lining. Don’t recommend if it’s an appliance ya care about.
Right? It’s a standard color pass. People acting like it has to be RAW/LOG to be real.
352413, I mean, clearly.
I dunno about advice, but LLMs are very good at re-stating my meandering thoughts in a concise way that’s easy to communicate to others.
Joe Rogan is Joe Garrelli in News Radio
Just out of curiosity, what would the Star Trek equivalent be outside of eurocentric experience?
Politicians attempting to ban AI regulation are only shielding OpenAI, Microsoft, Google, etc. Those same politicians will pearl clutch about Hugging Face or DeepSeek in the same breath. It’s just about money and the arms race. Abdicating responsibility to AI would require either a change in how we regulate people or a granting of rights/responsibility to AI. Though I don’t doubt we will see people try to blame AI for their poor decisions.
People always askin Jesus to take the wheel, like he knows how to drive a car.
That was in Anthropic’s system card for Claude 4, and the headlines/articles largely missed the point. Regarding the blackmail scenario, the paper even says:
… these extreme actions were rare and difficult to elicit, while nonetheless being more common than in earlier models. They are also consistently legible to us, with the model nearly always describing its actions overtly and making no attempt to hide them. These behaviors do not appear to reflect a tendency that is present in ordinary contexts.
They’re testing alignment hacking and jail-breaking tactics in general to see how the models respond. But the greater concern is that a model will understand as part of the context that it is being tested and behave differently in testing than in deployment. This has already been an issue.
In the initial implementations of reasoning models, if an LLM was penalized directly for this kind of misaligned generation in its “scratch pad,” it would not alter its misaligned response - rather it would simply omit the misaligned generation from the scratch pad. In other words, the model’s actions were no longer consistently legible.
I’m lucky that conversational AI is being developed as I’m middle aged, cause younger me absolutely would have fallen into an intractable state of delusion. Much like advertising, even the absurd cheese has an effect with extended exposure. And below the “You’ve hit on something uniquely insightful that could change the world!” shtick there is already a subtler form of reinforcement and enabling. This puts me in an odd place, because I use AI productively on a daily basis. And I still see it as one of the few technologies that could actually help us dig ourselves out of the enormous hole we’ve dug. But I suspect we’ll just use it to dig a deeper hole at a swifter pace.
Choose another tool.