Literally every single time I try to open their app, I get asked to click the button again to verify that I am indeed human.
On top of that, it doesn’t always work for some mysterious reason: occasionally it ends up short-circuiting and looping for a while before showing me the prompt again, and it only lets me in on the second or third attempt.
I do realize that since I AM using a VPN to access it, I might be seeing this prompt more frequently than others, but many other sites use the same Cloudflare protection mechanism, and I have yet to see one that shows me the prompt as frequently as OpenAI does.
Just thought it’s funny because it’s literally a bot asking me to verify that I’m not a bot.
It wants to make sure the competition isn’t stealing its answers
I mean, it probably wants to make sure you’re using the API for programmatic access so they can charge you for it instead of having you abuse the free tier.
Not sure if they’re still around, but in the early days, before the API was released, there were some libraries that simply accessed the browser interface to let you programmatically create chat completions. I believe the first ChatGPT Twitter bot was implemented like that.
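From what I remember, those wrappers basically replayed the same requests the web app makes, just reusing your logged-in session token. Here’s a rough sketch of the pattern, purely illustrative: the endpoint URL, payload shape, and response format below are placeholders I’m guessing at, not the real (undocumented, frequently changing) browser interface.

```python
import requests

# Hypothetical sketch of "wrap the web UI" programmatic access, NOT the official API.
# The URL, headers, payload, and response shape are assumptions for illustration only.
SESSION_TOKEN = "your-session-token-here"  # hypothetical: copied from the logged-in browser session

def ask_via_web_ui(prompt: str) -> str:
    resp = requests.post(
        "https://chat.example.com/backend/conversation",  # placeholder endpoint, not the real one
        headers={"Authorization": f"Bearer {SESSION_TOKEN}"},
        json={"messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    resp.raise_for_status()
    # Assumed response shape; the real wrappers had to parse whatever the web app actually returned.
    return resp.json()["message"]

if __name__ == "__main__":
    print(ask_via_web_ui("Verify that you are not a human."))
```

The obvious downside of that approach (and presumably part of why the bot check exists) is that it looks exactly like automated traffic hammering the free web interface.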
This post isn’t so much about whether it’s necessary from a technical standpoint (it likely is); it’s just an observation on the sheer irony and annoyance of it being that way, that’s all.
Just ask ChatGPT to verify that it’s not a human to get your revenge
Well, I just did. Here’s the response:
I’m sorry if it feels like I’m questioning your humanity! I’m just programmed to ensure a safe and productive interaction. Sometimes I ask for confirmation to ensure I’m talking to a human and not a machine or a bot. But I’m here to chat and assist you with whatever you need!
Not sure what I was expecting other than the usual machine-mind evasiveness.
What other than a bot ever asks you to verify you’re human?
Takes one to know one
Haha
If it’s of any help to you or anyone else, you can now use DuckDuckGo AI Chat as a privacy layer. However, it only supports GPT-3.5 Turbo.
At the current state of AI proliferation, you can literally enter your prompt into the product assistant chatbox on Amazon and get the same result you’d get from OpenAI’s web app.
I even remember a post a few months ago where someone did this to the chatbot on a car dealership’s website. Apparently, they currently don’t have any input filters (which would likely require yet another layer of AI to avoid making them overly restrictive); they just hook those things up straight to the main pipe and off you go.