If you’ve watched any Olympics coverage this week, you’ve likely been confronted with an ad for Google’s Gemini AI called “Dear Sydney.” In it, a proud father seeks help writing a letter on behalf of his daughter, who is an aspiring runner and superfan of world-record-holding hurdler Sydney McLaughlin-Levrone.
“I’m pretty good with words, but this has to be just right,” the father intones before asking Gemini to “Help my daughter write a letter telling Sydney how inspiring she is…” Gemini dutifully responds with a draft letter in which the LLM tells the runner, on behalf of the daughter, that she wants to be “just like you.”
I think the most offensive thing about the ad is what it implies about the kinds of human tasks Google sees AI replacing. Rather than using LLMs to automate tedious busywork or difficult research questions, “Dear Sydney” presents a world where Gemini can help us offload a heartwarming shared moment of connection with our children.
Inserting Gemini into a child’s heartfelt request for parental help makes it seem like the parent in question is offloading their responsibilities to a computer in the coldest, most sterile way possible. More than that, it comes across as an attempt to avoid an opportunity to bond with a child over a shared interest in a creative way.
Reminds me of the movie Her, where all kinds of heartfelt letters were outsourced to professional agencies.
The obvious missing element is another AI on Sydney’s end to summarize all the fan mail into a one-number sentiment score. At that point we can eliminate both the AIs and the mental effort, and just send each other single numbers via an ad-sponsored Google service.
Hey, my buddy’s work is already doing that! Management no longer has any idea what the company does, but they know how often you click. It boils down to a decimal number, which is what they really need. Higher numbers are better.
It would’ve been cooler if they’d used it to write a PDF page of info and stats on Sydney McLaughlin-Levrone
Or finding/buying plane tickets at the best price by searching all the sites
But that would imply that it can be relied upon for accuracy.
This and the Nike ad have been the worst ads during the Olympics.
This is one of the weirdest of several weird things about the people who are marketing AI right now
I went to ChatGPT just now and one of the auto prompts it offers is “Message to comfort a friend”
If I was in some sort of distress and someone sent me a comforting message and I later found out they had ChatGPT write the message for them I think I would abandon the friendship as a pointless endeavor
What world do these people live in where they’re like “I wish AI would write meaningful messages to my friends for me, so I didn’t have to”
If I was in some sort of distress and someone sent me a comforting message and I later found out they had ChatGPT write the message for them I think I would abandon the friendship as a pointless endeavor
My initial response is the same as yours, but I wonder… If the intent was to comfort you and the effect was to comfort you, wasn’t the message effective? How is it different from using a cell phone to get a reminder about a friend’s birthday rather than memorizing when the birthday is?
One problem that both the AI message and the birthday reminder have is that they don’t require much effort. People apparently appreciate having effort expended on their behalf even if it doesn’t create any useful result. This is why I’m currently making a two-hour round trip to bring a birthday cake to my friend instead of simply telling her to pick the one she wants, have it delivered, and bill me. (She has covid so we can’t celebrate together.) I did make the mistake of telling my friend that I had a reminder in my phone for this, so now she knows I didn’t expend the effort to memorize the date.
Another problem that only the AI message has is that it doesn’t contain information that the receiver wants to know, which is the specific mental state of the sender rather than just the presence of an intent to comfort. Presumably if the receiver wanted a message from an AI, she would have asked the AI for it herself.
Anyway, those are my Asperger’s musings. The next time a friend needs comforting, I will tell her “I wish you well. Ask an AI for inspirational messages appropriate for these circumstances.”
Ask an AI for inspirational messages appropriate for these circumstances.
Don’t need to ask an AI when every website is AI-generated blogspam these days
Another problem that only the AI message has is that it doesn’t contain information that the receiver wants to know, which is the specific mental state of the sender rather than just the presence of an intent to comfort.
I don’t think the recipient wants to know the specific mental state of the sender. Presumably, the person is already dealing with a lot, and it’s unlikely they’re spending much time wondering what friends not going through it are thinking about. Grief and stress tend to be kind of self-centering that way.
The intent to comfort is the important part. That’s why the suggestion of “I don’t know what to say, but I’m here for you” can actually be an effective thing to say in these situations.
These seem like people who treat relationships like a game or an obligation instead of really wanting to know the person.
The article makes a mention of the early part of the movie Her, where he’s writing a heartfelt, personal card that turns out to be his job, writing from one stranger to another. That reference was exactly on target: I think most of us thought outsourcing such a thing was a completely bizarre idea, and it is. It’s maybe even worse if you’re not even outsourcing to someone with emotions but to an AI.
deleted by creator
Honestly they could have avoided this by asking the daughter to input some legitimate sentiments and had AI help her express them.
Instead they offload the task entirely, removing any thought or sense of legitimacy.
Well said.
So in the spring I got a letter from a student telling me how much they appreciate me as a teacher. At the time I was going through some s***. Still am, frankly. So it meant a lot to me. That was such a nice letter.
I read it again the next day and realized it was too perfect. Some of the phrasing just didn’t make sense for a high school student. Some of the punctuation.
I have no doubt the student was sincere in their appreciation for me, but once I realized what they had done, it cheapened those happy feelings. Blah.
… But why did it cheapen it when they’re the one that sent it to you? Because someone helped them write it, somehow the meaning is meaningless?
That seems positively callous in the worst possible way.
It’s needless fear mongering: it doesn’t count, for some arbitrary reason, because it’s not how we used to do things in the good old days.
No encyclopedia references… no using the internet… no using Wikipedia… no quoting. As if language and experience weren’t shared and built on the shoulders of previous generations, with LLMs being the equivalent of a literal human reference dictionary for things people want to say but can’t recall themselves, or simply want to save time on, in a world where time is more precious than almost anything lol.
The only reason anyone shouldn’t like AI is the power draw. And nearly every AI company is investing more in renewables than almost anyone else, while people pretend data centers are the bane of existence as they write on Lemmy, watch YouTube, and play an online game lol.
David Joyner in his article On Artificial Intelligence and Authenticity gives an excellent example on how AI can cheapen the meaning of the gift: the thought and effort that goes into it.
In the opening synchronous meeting for one such class this semester, I was asked about this policy: if the work itself is the same, what does it matter whether it came from AI or not? I explained my thoughts with an analogy: imagine you have an assistant, whether that is an executive assistant at work or a family assistant at home or anyone else whose professional role is helping you with your role. Then, imagine your child’s (or spouse’s, I actually can’t remember which example I used in class) birthday is coming up. You could go out and shop for a present yourself, but you’re busy, so you ask this assistant to go pick out something. If your child found out that your assistant picked out the gift instead of you, would we consider it reasonable for them to be disappointed, even if the gift itself is identical to the one you would have purchased?
My class (those that spoke up, at least) generally agreed yes, it would be reasonable to expect the child to be disappointed: the gift is intended to represent more than just its inherent usefulness and value, but also the thought and effort that went into obtaining it. I continued the analogy by asking: now imagine if the gift was instead a prize selected for an employee-of-the-month sort of program. Would it be as disappointing for the assistant to buy it in that case? Likely not: in that situation, the gift’s value is more direct.
The assistant parallel is an interesting one, and I think that comes out in how I use LLMs as well. I’d never ask an assistant to both choose and get a present for someone; but I could see myself asking them to buy a gift I’d chosen. Or maybe even do some research on a particular kind of gift (as an example, looking through my gift ideas list I have “lightweight step stool” for a family member. I’d love to outsource the research to come up with a few examples of what’s on the market, then choose from those.). The idea is mine, the ultimate decision would be mine, but some of the busy work to get there was outsourced.
Last year I also wrote thank you letters to everyone on my team for Associate Appreciation Day with the help of an LLM. I’m obsessive about my writing, and I know if I’d done that activity from scratch, it would have easily taken me 4 hours. I cut it down to about 1.5hrs by starting with a prompt like, “Write an appreciation note in first person to an associate who…” then provided a bulleted list of accomplishments of theirs. It provided a first draft and I modified greatly from there, bouncing things off the LLM for support.
One associate was underperforming, and I had the LLM help me be “less effusive” and to “praise her effort” more than her results so I wasn’t sending a message that conflicted with her recent review. I would have spent hours finding the right ways of doing that on my own, but it got me there in a couple exchanges. It also helped me find synonyms.
In the end, the note was so heavily edited by me that it was in my voice. And as I said, it still took me ~1.5 hours to do for just the three people who reported to me at the time. So, like in the gift-giving example, the idea was mine, the choice was mine, but I outsourced some of the drafting and editing busy work.
IMO, LLMs are best when used to simplify or support you doing a task, not to replace you doing them.
This is exactly how I view LLMs and have used them before.
These people in these scenarios aren’t going ‘Amazon buy my gf a gift she likes.’
They’re going, please write a letter to my professor thanking them for their help and all they’ve done for me in biology.
I don’t know of anyone who trusts AI enough to just carte blanche fire off emails immediately after getting prompts back either.
The fear and cheapening of AI is the same fear and cheapening as every other advancement in technology.
-
It’s not a real conversation unless you talk face to face like a man / say it in a group / write it on parchment and ink / pen and paper / typewriter / telegram / phone call / text message / fax / email.
-
It’s not a real paper if it’s a meta-analysis.
-
It’s not it’s not it’s not.
All for arbitrary reasons that people have used to offset mundane, garden-variety tedium, or that are just outright ableist in some circumstances.
People also seriously overestimate their ability to detect AI writing, or even AI pictures. That dude may very well have gotten a sincere letter written without AI, but they’ve already decided the student used AI, as if they know this student so well from ten written assignments they probably didn’t care about, versus one potentially sincere statement addressed to them.
If people like that think it cheapens the value, that’s on them. People go on and on about removing pointless platitudes and dumb culturally ingrained shit but then clutch their pearls the moment one person toes outside the in-group.
It just feels so silly to me.
IT’S NOT ART UNLESS IT’S OIL ON CANVAS levels of dumb.
It’s not altruistic/good-natured unless you don’t benefit from it in any way and feel no emotion by doing it! You can’t help the homeless unless you follow the rules! You can’t give them money if you record it.
In the end, they still got that money. But somehow it devalues it because instead of raising two people up higher, you only raised one? It’s foolishness.
People also seriously overestimate others’ abilities and cheapen what their time is worth all the damn time.
-
I’m curious, if they had gone to their parent, gave them the same info, and come to the same message… would it have been less cheap feeling?
And do you know that isn’t the case? “Hey mom, I’m trying to write something nice to my teacher, this is what I have but it feels weird can you make a suggestion?” Is a perfectly reasonable thing to have happened.
I think there’s a different amount of effort involved in the two scenarios and that does matter. In your example, the kid has already drafted the letter and adding in a parent will make it take longer and involve more effort. I think the assumption is they didn’t go to AI with a draft letter but had it spit one out with a much easier to create prompt.
You should’ve asked Gemini what to feel about it and how to respond…
That’s the problem with how they are doing it, everyone seems to want AI to do everything, everywhere.
It’s now getting on my nerves, because more and more customers want AI integrated into their websites somehow, even when they have no use for it.
We created a society of antisocial people who are maximized as efficient working machines, to the point of drugging the ones who struggle with it.
Of course they want AI to do it for them and end human interactions. It’s simpler that way.
It’s 2027, the AI killer app never came, but LLMification has produced an unimaginable glut of mediocre media and the most popular AI application is to use it to find human sourced material.
The stock market is like a ship on fire, but you can buy video cards for pennies on the dollar.
Thank you! The ads from everywhere this Olympics have been so fucking weird. I even started a thread on mastodon and this ad was on it. https://hachyderm.io/@ch00f/112861965493613935
They were always weird but it is getting to the point where even normies are taking notice.
All the sex trafficking that occurs for their event alone makes it an abomination.
My best friend, the Uber driver, whom I’d prefer to stay quiet all the way home. But hey, what are friends for? He keeps me hydrated!
This! I was appalled when this ad played. Suggesting that ANYONE comes out of that fictional scenario pleased is ridiculous. No one wants to receive a crappy AI-written email, ESPECIALLY when the primary topic is emotional. Using an LLM to write a message for a loved one tells everyone that you don’t actually care enough to write it yourself. And Google is putting their big check mark of approval on the whole scenario, saying, “This is what we want you to use Gemini for.” Absolutely abysmal.
The ONLY version of this ad that makes any sense is if the parent writing the email is illiterate or has a medical issue where they can’t type. But I’d rather see them use AI to make dictation better and more powerful instead.
We’re all switching to Kagi Search and moving our email to ProtonMail or the like right? I don’t need this kind of crap in my digital tool kit.
Hate to say it, but Kagi is not great. Both in results and in stewardship.
Proton recently introduced an AI “writing assistant” for emails called Scribe and, sadly, a bitcoin wallet.
This ad is deliberate, meant to make us believe that using AI like this is the most normal thing. It’s a kind of brainwashing, so they can sell it to us.
“This message really needs to be passionate and demonstrate my emotional investment, I’d better have a text generation algorithm do it for me”
I’ve been watching quite a lot of Olympics coverage on TV, but never seen any ads. Is there an official Olympics TV channel with these ads?
Glad to see others have also keyed in on just how lame this ad was.
My immediate thought was: if you (the guy doing the voiceover as the father) are so mentally deficient that you can’t even put together a four-sentence paragraph of your own original thoughts for fan mail, then what hope do you have of doing anything else as a functioning adult?
Worse yet, what does this teach the kid?
It teaches the kid to rely more and more on AI for everything, just like Google wants.
They’re already ‘thanking’ Siri and Alexa, this will be a very dangerous development.
Thanking a personified character doesn’t strike me as a bad thing.
Surely there’s a more positive perspective, where people are just naturally polite in their words and would struggle to communicate differently with a language bot.
It’s pretty frustrating how the Venn diagram of ‘people who treat people like things’ and ‘people who treat things like people’ is nearly a circle.
Roko’s basilisk is kind of bullshit but the meme is funny.
Roko’s basilisk is the stupidest thing and I hate it so much. It’s so obviously just plain wrong. It’s just wrong. It’s not even an interpretation thing. The most stupid, insane, and useless idea ever.
edit: I’m still mad at that one YouTuber that did a video about Roko’s basilisk pretending it made even a little bit of sense.
It’s creepypasta for edgelord nerds
It should be a core memory for the kid to do this with her dad. It’s like having an LLM play catch or host tea parties with her.
You… you joke, but I know a few parents who would absolutely fail at something like this. Hell, they fail at basic math, and are barely literate.
I’m not saying this is a great idea for everyone, or that the ad is good. But the idea that “no one needs this” is extremely short-sighted. For God’s sake, the literacy rate in America alone isn’t even 95%, and over 50% of Americans aren’t proficient in English.
Again. This ad sucks for lots of reasons. But don’t pretend idiots can’t make it through adulthood, never mind become parents. The idiots are usually the ones with the most kids.
AI systems are still a useful tool.
Wow, this is an unfair take and very judgemental. I can think of a dozen reasons why an adult might have trouble writing a letter aside from being “mentally deficient.” Dyslexia, anxiety, poor education, not being a native speaker, ADHD, etc.
Trust me, I thought the ad was lame and a bleak use case for AI, but you don’t have to crucify a parent for doing their best to help their kid.
lists mental deficiencies
Dyslexia, anxiety, poor education, not being a native speaker, ADHD, etc.
That “etc.” certainly includes living in an anti-intellectual society full of emotionally stunted people who learned that men shouldn’t care about feelings and that reading is for dorks.
I think AI is great, but not for this. It’s much better suited for, say, stuff like AI dungeon, or other entertainment (DougDoug on twitch/YouTube is the perfect example).