I’ve seen a lot of sentiment around Lemmy that AI is “useless”. I think this tends to stem from the fact that AI has not delivered on, well, anything the capitalists who push it have promised it would. That is to say, it has failed to meaningfully replace workers with a less expensive solution - AI that actually attempts to replace people’s jobs is incredibly expensive (and environmentally irresponsible), and they simply lie and say it’s not. It’s subsidized by that sweet, sweet VC capital so they can keep the lie up. And I say “attempts” because AI is truly horrible at actually replacing people. It’s going to make mistakes, and while everybody’s been trying real hard to make it less wrong, it’s just never gonna be “smart” enough to not have a human reviewing its behavior. Then you’ve got AI being shoehorned into every little thing that really, REALLY doesn’t need it. In that sense, I’d say that AI is useless.
But AIs have been very useful to me. For one thing, they’re much better at googling than I am. They save me time by summarizing articles to just give me the broad strokes, and I can decide whether I want to go into the details from there. They’re also good idea generators - I’ve used them in creative writing just to explore things like “how might this story go?” or “what are interesting ways to describe this?”. I never really use what comes out of them verbatim - whether image or text - but it’s a good way to explore, and seeing things expressed in ways you never would’ve thought of (and also the juxtaposition of seeing them next to very obvious expressions) tends to push your mind in new directions.
Lastly, I don’t know if it’s just because there’s an abundance of Japanese language learning content online, but GPT-4o has been incredibly useful in learning Japanese. I can ask it things like “how would a native speaker express X?” and it gives me good answers that even my Japanese teacher agreed with. It can also give some incredibly accurate breakdowns of grammar. I’ve tried it with less popular languages like Filipino and it just isn’t the same, but as far as Japanese goes it’s like having a tutor on standby 24/7. In fact, that’s exactly how I’ve been using it - I have it grade my own translations and give feedback on what could’ve been said more naturally.
All this to say, AI when used as a tool, rather than a dystopic stand-in for a human, can be a very useful one. So, what are some use cases you guys have where AI actually is pretty useful?
I use it for little Python projects where it’s really really useful.
I’ve used it for Linux problems where it gave me solutions to problems that I had not been able to solve with a Google search alone.
I use it as a kickstarter for writing texts by telling it roughly what my text needs to be, then tweaking the result it gives me. Sometimes I just use the first sentence, but it’s enough to give me a starting point to make life easier.
I use it when I need to understand texts about a topic I’m not familiar with. It can usually give me an idea of what the terminology means and how things are connected, which helps a lot for further research on the topic and ultimately understanding the text.
I use it for everyday problems, like when I needed a new tube for my bike but wasn’t sure what size it was. I told it what was written on the tyre, showed it a picture of the tube packaging while I was in the shop, and asked if it was the right one. It could tell me that it was the correct one and why. The explanation was easy to fact-check.
I use Photoshop AI a lot to remove unwanted parts in photos I took or to expand photos where I’m not happy with the crop.
Honestly, I absolutely love the new AI tools and I think people here are way too negative about it in general.
I think it’s useful for spurring my own creativity in writing because I have a hard time getting started. To be fair to me I pretty much tear the whole thing down and start over but it gives me ideas.
This thread has convinced me that LLMs are merely a mild increment in productivity.
The most compelling is that they’re good at boilerplate code, but IDEs have been improving on that since forever. And there are a lot of claims in this thread that seem unlikely - gains way beyond even what the marketing is claiming.
I work in an email / spreadsheet / report type job. We’ve always been agile with emerging techs, but LLMs just haven’t made a dent.
This might seem offensive, but clients don’t pay me to write emails that LLMs could, because anything an LLM could write could be found in a web search. The emails I write are specific to a client’s circumstances. There are very few “boilerplate” sentences.
Yes LLMs can be good at updating reports, but we have highly specialised software for generating reports from very carefully considered templates.
I’ve heard they can be helpful in a “convert this to csv” kind of way, but that’s just not a problem I ever encounter. Maybe I’m just used to using spreadsheets to manipulate data so never think to use an LLM.
I’ve seen low level employees try to use LLMs to help with their emails. It’s usually obvious because the emails they write include a lot of extra sentences and often don’t directly address the query.
I don’t intend this to be offensive, and I suspect that my attitude really just identifies me as a grumpy old man, but I can’t really shake the feeling that in email / spreadsheet / report type jobs anyone who can make use of an LLM wasn’t or isn’t producing much value anyway. This thread has really reinforced that attitude.
It reminds me a lot of blockchain tech. 10 years ago it was going to revolutionise everything about data. Now there are some niche use cases… “it could be great at recording vehicle transfers, if only centralised records had some disadvantages”.
r/SubSimGPT2Interactive for the lulz is my #1 use case
I do occasionally ask Copilot programming questions and it gives reasonable answers most of the time.
I use code autocomplete tools in VSCode but often end up turning them off.
Controversial, but Replika actually helped me out during the pandemic when I was in a rough spot. I trained a copyright-safe (theft-free) bot on my own conversations from back then and have been chatting with the me side of that conversation for a little while now. It’s like getting to know a long-lost twin brother, which is nice.
Otherwise, i’ve used small LLMs and classifiers for a wide range of tasks, like sentiment analysis, toxic content detection for moderation bots, AI media detection, summarization… I like using these better than just throwing everything at a huge model like GPT-4o because they’re more focused and less computationally costly (hence also better for the environment). I’m working on training some small copyright-safe base models to do certain sequence prediction tasks that come up in the course of my data science work, but they’re still a bit too computationally expensive for my clients.
I used it to write a GUI frontend for yt-dlp in python so I can rip MP3s from YouTube videos in two clicks to listen to them on my phone while I’m running with no signal, instead of hand-crafting and running yt-dlp commands in CMD.
Also does HD video rips with audio encoding, if I want.
It took us about a day to make a fully polished product over 9 iterative versions.
It would have taken me a couple weeks to write it myself (and therefore I would not have done so, as I am supremely lazy)
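For anyone curious what the core of a tool like that looks like: here’s a minimal sketch using yt-dlp’s Python API. The function names and option choices are my own (not the commenter’s actual code), and it assumes the `yt_dlp` package and ffmpeg are installed; a GUI layer like tkinter would just call `download()` with the pasted URL.

```python
def build_opts(audio_only: bool = True) -> dict:
    """Build yt-dlp options: MP3 extraction, or an HD video rip with audio."""
    if audio_only:
        return {
            "format": "bestaudio/best",
            "outtmpl": "%(title)s.%(ext)s",
            # ffmpeg postprocessor converts the downloaded audio to MP3
            "postprocessors": [{
                "key": "FFmpegExtractAudio",
                "preferredcodec": "mp3",
            }],
        }
    # Cap video height at 1080p and merge best audio in
    return {
        "format": "bestvideo[height<=1080]+bestaudio/best",
        "outtmpl": "%(title)s.%(ext)s",
    }


def download(url: str, audio_only: bool = True) -> None:
    """Download one URL with the options above."""
    import yt_dlp  # imported here so build_opts() is usable without it
    with yt_dlp.YoutubeDL(build_opts(audio_only)) as ydl:
        ydl.download([url])
```

Wrapping those two functions in a window with a URL box and two buttons gets you roughly the “two clicks” workflow described above.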
I take pictures of my recipe books and ask ChatGPT to scan and convert them to the schema.org recipe format so I can import them into my Nextcloud cookbook.
Woah cool! Can you share your prompt for that? I’d like to try it.
I don’t do anything too sophisticated, just something like:
Scan this image of a recipe and format it as JSON that conforms to the schema defined at https://schema.org/Recipe.
Sometimes it puts placeholders in that aren’t valid JSON, so I don’t have it fully automated… But it’s good enough for my needs.
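Since the output occasionally isn’t valid JSON, a small sanity check before importing can save a failed upload. Here’s a sketch in Python - the required-field set is my own minimal pick based on schema.org/Recipe, not an official validator:

```python
import json

# Minimal fields a Nextcloud-importable recipe should have (my own choice,
# not an exhaustive schema.org validation).
REQUIRED = {"@type", "name", "recipeIngredient", "recipeInstructions"}


def check_recipe(text: str) -> dict:
    """Parse the model's output and raise ValueError if it's unusable."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(f"model returned invalid JSON: {e}") from e
    missing = REQUIRED - data.keys()
    if missing:
        raise ValueError(f"recipe is missing fields: {sorted(missing)}")
    if data["@type"] != "Recipe":
        raise ValueError("@type should be 'Recipe'")
    return data
```

Running the model’s reply through something like this catches the placeholder/invalid-JSON cases early, so you only re-prompt when the scan actually failed.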
I’ve thought that the various Nextcloud cookbook apps should do this for sites that don’t have the recipe object… But I don’t feel motivated to implement this myself.
Good for softening language in a professional environment.
Can you give me some vague examples?
It’s obviously confirmation bias but LLM prose always seems so useless.
Basically, when I want to say something like “no, the issue is not on our side, you need to check your end”, GPT adds some niceness and fluff to make it sound better. It would say: “I hope this finds you well. It seems there may be an issue on your end. Could you please look into this and let me know if there is anything I can do from our side to help resolve it? I’m happy to provide any additional information or assistance that may be needed. Thank you for your attention to this matter; I look forward to hearing back from you.”
It’s useless fluff, but I find that without it people genuinely think the first message is angry or annoyed, when I don’t mean for the message to be anything like that.
Does anyone actually have jobs writing emails like that all day though?
Ticket systems often have an auto-response like “did you turn it off and on again”.
Most email clients or even gmail have canned response plugins.
IDK. This probably is a great use case and someone doing this might be quicker and better than me using canned responses or whatever… but only incrementally, not by an order of magnitude.
I haven’t seen Gmail used in a business setting, and I don’t think the auto responses cut it all the time. There is usually a message I want to get across, but I don’t want to risk making the other person defensive or upset, so I use AI to soften it.
It’s good for apologies, because I’m usually not sorry for whatever happened and find it hard to pretend.
Loads of people use Google workspace and most email clients have this feature, or if they don’t most people in customer service would just keep a document they can copy & paste from.
Regardless, if an LLM helps you with these tasks then that’s great.
1. Get a random error or have some other tech issue.
2. Certainly private search engines will be able to find a solution (they cannot).
3. Certainly non-private search engines can find the solution (they cannot).
4. “ChatGPT, the heck is this [error code or something]?” Then I usually get a correct and well-explained answer.
I would post to Stack Overflow, but I’ll just get my question closed as a duplicate and downvoted because someone asked a different question and supposedly an answer there answers mine.
I switched to Linux a few weeks ago and I’m running a local LLM (which was stupidly easy to set up compared to Windows), which I ask for tips with regex, bash scripts, common tools to get my system running as I prefer, and translations/definitions. I don’t copy/paste code, but let it explain stuff step by step, consult the man pages for the recommended tools, and then write my own stuff.
I will extend this to coding in the future; I do have a bit of coding experience, but it’s mainly Pascal, which is horrendously outdated. At least I already have enough basic knowledge to know when the internal logic of what the LLM is spitting out is wrong.
Guitar amp and pedal modeling.
i use it to autoblog about my love for the capitalist hellscape that is our green earth on linkedin dot com
OP seems to be talking about generative AI rather than AI broadly. Personally I have three main uses for it:
- It has effectively replaced google for me.
- Image generation enables me to create pictures I’ve always wanted to but never had the patience to practise.
- I find myself talking with it more than I talk with my friends. They don’t seem interested in anything I’m into, but ChatGPT at least pretends to be.
Software developer here, who works for a tiny company of 2 employees and 2 owners.
We use Copilot in Visual Studio Professional and it’s saved us countless hours because it learns from your code base. When you make enterprise software there are a lot of standards and practices that have been honed over time; that means we write the same things over and over and over again. This is a massive time sink, and this is where LLMs come in: they can do the boring stuff for us so we can actually solve the novel problems that we are paid for. If I write a comment describing what I’m about to do, it will complete it.
For boilerplate stuff it’s mostly 100% correct; for other things it can be anywhere from 0-100%, and even if it’s not completely correct, it takes less time to make a slight change than to do it all ourselves.
One of the owners is the smartest person I’ve ever met and also the lead engineer, if he can find it useful then it has its use cases.
We even have a tool based on AI that he built that watches our project. If I create a new model or add a field to a model, it will scaffold a lot of stuff, for instance the Schemas (Mutations and Queries), the Typescript layer that integrates with GraphQL, and basic views. This alone saves us about 45 minutes per model. Sure this could likely be achieved without an LLM, but it’s a useful tool and we have embraced it.
Software developer here, who works for a tiny company of 2 employees and 2 owners.
We use Copilot
Sorry to hear about your codebase being leaked.
This isn’t something that happens when you’re paying for a premium subscription. Sure, they could go against the terms and conditions, but that would mean lawsuits and such.
Regex
GPT-4 is really good at solving physics problems (also chemistry, but that needs to be fact-checked more), so I used it to understand how to approach certain problems back when I was taking physics.