• bloodfart@lemmy.ml · +81/-1 · 4 months ago

    Do not use ai for plant identification if it actually matters what the plant is.

    Just so ppl see this:

    DO NOT EVER USE AI FOR PLANT IDENTIFICATION IN CASES WHERE THERE ARE CONSEQUENCES TO FAILURE.

    For walking along and seeing what something is, that’s fine. No big deal if it tells you something’s a turkey oak when it’s actually a pin oak.

    If you’re gonna eat it or think it might be toxic or poisonous to you, if you want to find out what your pet or livestock ate, if you in any way could suffer consequences from misidentification: do not rely on ai.

    • merc@sh.itjust.works · +27/-1 · 4 months ago

      You could say the same about a plant identification book.

      It’s not so much that AI for plant identification is bad, it’s that the higher the stakes, the more confident you need to be. Personally, I’m not going foraging for mushrooms with either an AI-based plant app or a book. Destroying Angel mushrooms look pretty similar to common edible mushrooms, and the key differences can disappear depending on the circumstances. If you accidentally eat a destroying angel mushroom, the symptoms might not appear for 5 to 24 hours, and by then it’s too late. Your liver and kidneys are already destroyed.

      But, I think you could design an app to be at least as good as a book. I don’t know if normal apps do this, but if I made a plant identification app, I’d have it identify the plant and then present a checklist the user works through to confirm it for themselves. If you did that, it would be just like having a friend suggest checking out a certain page in a plant identification book.
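
      Something like this, roughly (the species labels and checklist items are made up just to show the flow, not real botanical guidance):

      ```python
      # Sketch of "classify first, then make the user verify each feature themselves".
      # The species names and features below are placeholders, not real guidance.

      CHECKLISTS = {
          "turkey oak": [
              "Leaves have bristle-tipped lobes",
              "Acorn cap covers about half the nut",
          ],
          "pin oak": [
              "Lower branches angle downward",
              "Leaf sinuses are cut deeply, almost to the midrib",
          ],
      }

      def confirm_identification(predicted_species: str) -> bool:
          """Ask the user to check each diagnostic feature on the actual plant."""
          checklist = CHECKLISTS.get(predicted_species.lower())
          if checklist is None:
              print(f"No checklist for {predicted_species!r}; treat the guess as unreliable.")
              return False
          print(f"Model suggests: {predicted_species}. Verify on the plant in front of you:")
          confirmed = True
          for feature in checklist:
              answer = input(f"  - {feature}? [y/n] ").strip().lower()
              confirmed = confirmed and answer == "y"
          return confirmed

      if __name__ == "__main__":
          if confirm_identification("pin oak"):
              print("All features confirmed by the user.")
          else:
              print("Not confirmed; treat the identification as unknown.")
      ```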

      • medgremlin@midwest.social · +13 · 4 months ago

        The problem with AI is that it’s garbage in, garbage out. There are some AI-generated books on Amazon now for mushroom identification, and they contain some pretty serious errors. If you find a book written by an actual mycologist that has been well curated and referenced, that’s going to be an actually reliable resource.

        • Sconrad122@lemmy.world · +5 · 4 months ago

          Are you assuming that AI in this case is some form of generative AI? I would not ask ChatGPT if a mushroom is poisonous, but I would consider using convolutional neural net based plant identification software. At that point you are depending on the quality of the training data set for the CNN and the rigor put into validating the trained model, which is at least somewhat comparable to depending on a plant identification book to be sufficiently accurate and thorough. Depending on the accuracy of a story that genAI makes up based on Reddit threads is a much less advisable venture.
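
          For illustration, the classifier side could look roughly like this. I’m using a generic ImageNet-pretrained ResNet from torchvision as a stand-in; a real plant-ID app would run a model trained and validated on a curated plant dataset, but the mechanics (softmax scores, top-k, abstaining when unsure) are the same:

          ```python
          # Rough sketch of CNN-style image classification with a confidence check.
          # An ImageNet-pretrained ResNet stands in for a dedicated plant classifier.

          import torch
          from PIL import Image
          from torchvision.models import resnet50, ResNet50_Weights

          weights = ResNet50_Weights.DEFAULT
          model = resnet50(weights=weights)
          model.eval()
          preprocess = weights.transforms()      # the preprocessing the model was trained with
          labels = weights.meta["categories"]    # class names matching the model outputs

          def classify(image_path: str, top_k: int = 5, min_confidence: float = 0.8):
              """Return the top-k (label, probability) pairs, or None if the model is unsure."""
              image = Image.open(image_path).convert("RGB")
              batch = preprocess(image).unsqueeze(0)
              with torch.no_grad():
                  probs = torch.softmax(model(batch)[0], dim=0)
              top = torch.topk(probs, top_k)
              results = [(labels[int(i)], float(p)) for p, i in zip(top.values, top.indices)]
              if results[0][1] < min_confidence:
                  return None  # abstain instead of handing back a shaky guess
              return results
          ```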

          • medgremlin@midwest.social · +1 · 4 months ago

            The books on Amazon are vomited out of ChatGPT. If there’s a university-curated and trained image recognition AI, that’s more likely to be reliable, provided the input has been properly vetted and sanitized.

      • Classy@sh.itjust.works · +7/-1 · 4 months ago

        If you’re using the book correctly, you couldn’t say the same thing. Using a flora book to identify a plant requires learning about morphology, and that knowledge alone already puts you significantly closer to accurately identifying most things. If a dichotomous key tells you that the terminating leaflet is sessile vs. not sessile, and you’re actually looking for that on the physical plant, your quality of observation is so much better than just photographing a plant and throwing it up on iNaturalist.

        • Bytemeister@lemmy.world · +5 · 4 months ago

          Not to mention, the book is probably going to list look-alike plants, and mention if they are toxic. AI is just going to go “It’s this thing”.

        • Iceman@lemmy.world · +2 · 4 months ago

          You can easily say the same thing. Use the image identification to get a name for the plant, then google it to read about what to check, like whether the leaflet is sessile or not.

      • bloodfart@lemmy.ml · +5/-1 · 4 months ago

        The difference between the AI and a reference guide written and edited by experts in the field for the purpose of helping a person understand the plants around them is that one was expressly and intentionally created with that goal in mind and, at multiple points, had knowledgeable, skilled people looking over its answers, while the other is complex mad libs.

        I get that it’s bad to gamble with your life when the stakes are high, but we’re talking about the difference between putting it on red and putting it on 36.

        One has a much, much higher potential for catastrophe.

    • Fizz@lemmy.nz · +3 · 4 months ago

      Forgo identification and eat the plant based on vibes like our ancestors.

    • masterspace@lemmy.ca · +2/-2 · 4 months ago

      Like I get what you’re saying but this is also hysterical to the point that people are going to ignore you.

      Don’t use AI ever if there are consequences? Like I can’t use an AI image search to get rough ideas of what the plant might be as a jumping off point into more thorough research? Don’t rely solely on AI, sure, but it can be part of the process.

  • Count Regal Inkwell@pawb.social · +50/-1 · 4 months ago

    The blanket term “AI” has set us back quite a lot I think.

    The plant thing and the deepfakes/search engines/chatbots are two entirely different types of machine learning algorithm. One focussed on distinguishing between things, the other focussed on generating stuff.

    But “AI” is the marketable term, and the only one most people know. And so here we are.

      • Count Regal Inkwell@pawb.social · +12 · 4 months ago

        I particularly “Love” that a bunch of like, procedural generation and search things that have existed for years are now calling themselves “AI” (without having changed in any way) because marketing.

        • Focal@pawb.social · +12 · 4 months ago

          Reminds me of how everything on a computer used to be a “program”, but now they’re all just “apps”

        • smokebuddy [he/him]@lemmy.today · +4 · 4 months ago (edited)

          I read a story on CBC the other day that was all about how an AI voice was taking over from hosts during off-hours at some local radio station. Then, deeper in the article, it was revealed that everything the “AI” reads is written by a human. So the whole time it was really about someone using text-to-speech technology that has been around since at least the 70s. Hardly newsworthy in any way except for “IT’S AI!”

          • Count Regal Inkwell@pawb.social · +2 · 4 months ago

            Mind you there -are- TTS tools that use machine learning (which is what advertisers call “AI” now) for more realistic voices. No idea if the radio was using those at all though.

      • ByteOnBikes@slrpnk.net · +4 · 4 months ago

        Oh man this one drives me up the wall too.

        Someone literally said, with a straight face, how cool it is that Minecraft has AI-generated worlds, and I wanted to flip a table.

    • angrystego@lemmy.world · +9 · 4 months ago

      You’re talking about types of machine learning algorithms. Is that a more precise term that should be used here instead of AI? And would the meme work better if it was used? I’m asking because I really don’t understand these things.

      • Count Regal Inkwell@pawb.social · +2 · 4 months ago

        There are proper words for them, but they are ~technical jargon~. It is sufficient to know that they are different types of algorithm, only really similar in that both use machine learning.

        And would the meme work better if it was used

        No because it is a meme, and if people had learned the proper words for things, we wouldn’t need a meme at all.

        • masterspace@lemmy.ca · +2 · 4 months ago

          Both use machine learning algorithms that are modelled off the behaviour of neurons.

          They are still different algorithms but they’re not that wildly different in the grand scale of the field of machine learning.

      • 31337@sh.itjust.works · +2 · 4 months ago

        Likely transformers now (I think SD3 uses a ViT for text encoding, and ViTs are currently one of the best model architectures for image classification).

    • ricecake@sh.itjust.works · +3 · 4 months ago

      It’s particularly annoying because those are all AI. AI is the blanket term for the entire category of systems that are man-made and exhibit some aspect of intelligence.

      So the marketing term isn’t wrong, but referring to everything by its most general category is error-prone and makes people who know or work with the differences particularly frustrated.
      It’s easier to say “I made a little AI that learned how I like my tea”, but then people think of something that writes full sentences and tells me to put dogs in my tea. “I made a little machine learning based optimization engine that learned how I like my tea” conveys it much less well.

  • drail@fedia.io · +34/-1 · 4 months ago

    I am a physicist. I am good at math, okay at programming, and not the best at using programming to accomplish the math. Using AI to help turn the math in my brain into functional code is a godsend in terms of speed, as it will usually save me a ton of time even if the code it returns isn’t 100% correct on the first attempt. I can usually take it the rest of the way after the basis is created. It is also great when used to check spelling/punctuation/grammar (so using it like the glorified spellcheck it is) and formatting markup languages like LaTeX.

    I just wish everyone would use it to make their lives easier, not make other people’s lives harder, which seems to be the way it is heading.

    • Domi@lemmy.secnd.me · +2 · 4 months ago

      Also works well for the opposite use case.

      I’m a good programmer but bad at math, and I can never remember which algorithms to use, so I just ask it how to solve problem X or calculate Y and it gives me a list of algorithms that would make sense.

    • webhead@lemmy.world · +2 · 4 months ago

      Yeah I’ve been using it to help my novice ass code stuff for my website and it’s been incredible. There’s some stuff I thought yeah I’m probably never gonna get around to this that I rocketed through in an AFTERNOON. That’s what I want AI for. Not shitty customer service.

    • Reyali@lemm.ee · +1 · 4 months ago

      Great examples. The most valuable use for me has been writing SQL queries. SQL is not a part of my job description, but data informs choices I make. I used to have to ask a developer on my team all the questions I had and pull them off their core work to get answers for me, then I had to guess at interpreting the data and inevitably bug them again with all my follow-up questions.

      I convinced the manager to get me read access to the databases. I can now do that stuff myself. I had a very basic understanding of SQL before, enough to navigate the tables and make some sense of reading queries, but writing queries would have taken HOURS of learning.

      As it is, I type in basics about the table structure and ask my questions. It spits out queries, and I run them and tweak as needed. Without AI, I probably would have used my SQL access twice in the past year and been annoyed at how little I was able to get, but as it is I’ve used it dozens of times and been able to make better informed decisions because of it.
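
      Roughly, the loop looks like this. (The table and column names here are invented for illustration, and I’m assuming SQLite just for the sketch; the point is the describe-schema, get-a-query, read-it-over, run-it-read-only workflow.)

      ```python
      # Sketch of the workflow: describe the schema, get a query back, read it over, run it.
      # Table/column names are made up; assumes an existing SQLite file for the example.

      import sqlite3

      # The kind of context pasted into the prompt:
      SCHEMA_HINT = """
      orders(id, customer_id, created_at, total_cents)
      customers(id, name, signup_date)
      Question: how many orders did each customer place in the last 30 days?
      """

      # The kind of query that comes back, which I then sanity-check and tweak:
      GENERATED_SQL = """
      SELECT c.name, COUNT(o.id) AS order_count
      FROM customers AS c
      JOIN orders AS o ON o.customer_id = c.id
      WHERE o.created_at >= date('now', '-30 days')
      GROUP BY c.name
      ORDER BY order_count DESC;
      """

      # Read-only connection, since all I have (and need) is read access.
      with sqlite3.connect("file:analytics.db?mode=ro", uri=True) as conn:
          for row in conn.execute(GENERATED_SQL):
              print(row)
      ```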

  • Gestrid@lemmy.ca · +22 · 4 months ago

    I’ve had to literally perform a Google search to find a customer support phone number before. Because the website of the company just kept redirecting me in circles.

    Their phone support was just as useless, though.

    It was GameStop, by the way.

  • hakunawazo@lemmy.world · +16 · 4 months ago

    We need to strike back with an AI customer that sits through all the automated solutions and alerts us once we can finally talk or chat with a human again.

  • leadore@lemmy.world · +17/-1 · 4 months ago (edited)

    Using it for plant identification is fine as long as it’s an AI designed/trained for plant ID (even then don’t use it to decide if you can eat it). Just don’t use an LLM for plant ID, or for anything else relating to actual reality. LLMs are only for generating plausible-sounding strings of text, not for facts or accurate info.

    • ByteOnBikes@slrpnk.net · +2 · 4 months ago

      Mushroom identification is extra flaky.

      Expert-level mushroom identification is a skill, and I wouldn’t recommend anyone use those apps and assume the identification is correct. (And especially don’t eat a mushroom without absolutely verifying it.)

      Plant identification - not concerned. Most normal people aren’t ingesting a plant in the wild. And I mean if you’re rubbing against a plant and get a reaction, ideally that’s a lesson you only learn once.

  • iAvicenna@lemmy.world · +11/-2 · 4 months ago (edited)

    hmm guess which one also doesn’t suck the energy equivalent of a sizeable town

    • i_love_FFT@lemmy.ml · +4 · 4 months ago

      I’d like all AI services to publish the energy used in training the model and performing inference.

      “Queries use an average of X kWh of energy. A model training run requires X MWh, and the development of this model over the years required X TWh of energy.”

      Then we could judge companies by that metric. Of course, rich people would look for the most power-draining model for the sake of it.

      • FrenchThrowAway@jlai.lu · +5 · 4 months ago (edited)

        That’s already something that Meta is doing for their Llama models:

        Source

        You can extrapolate OpenAI models’ consumption from these, I guess.

        • Skullgrid@lemmy.world · +6 · 4 months ago

          ok, but

          1. Is it still bad if they use renewables? in which case, it’s not horrendous, is it?

          2. what about the rest of their servers?

          3. Fuck facebook

          • iAvicenna@lemmy.world · +3/-1 · 4 months ago (edited)

            If we are abundant in renewable energy, no. But if we are still at a level where available renewable energy could instead be replacing non-renewable sources, then AI tech needs to justify its use cases too.

            And yes, servers: social media related data center energy consumption should be put under heavy scrutiny too, especially considering that some energy-hungry social media platforms like Facebook have lately been causing more harm than good (on other fronts, such as political propaganda and racism). I doubt any of this is going to happen soon, though, since many governments are heavily invested in using social media and LLM chatbots for propaganda and surveillance.

            • masterspace@lemmy.ca · +1 · 4 months ago

              We already have a way for society to decide what is and isn’t worth spending power and effort on and it’s called money.

              Increase carbon taxes to incentivize clean fuel sources and ban predatory advertising and data tracking behaviour because it’s problematic.

              We do not need to set up a separate shadow economy to gatekeep what is and isn’t worth spending electricity on.

          • FrenchThrowAway@jlai.lu · +1/-1 · 4 months ago

            1. Power consumption is still power consumption, so 2,290,000 kg of CO2 is a lot, even if it’s way lower than what it would have been with coal plants
            2. They only talk about power consumption and not server hardware footprint, because power consumption is the easier of the two to offset
            3. Yes

      • howrar@lemmy.ca · +4 · 4 months ago (edited)

        development of this model over the years required X TWh of power

        This part is kind of hard to measure. When do you start counting? From the first work that informed the research direction eventually leading to this model? From the point where the concept of this final model first came about? Do you split the energy usage between multiple models that came from the same work?

    • ReCursing@lemmings.world · +2/-3 · 4 months ago

      That’s something of a red herring. The source of that energy matters more than how much is used (use renewables where possible), so your ire is directed at entirely the wrong place. And how much energy goes into computers and datacentres doing other stuff anyway? If I’m generating pictures I’m not playing games, which uses the same card, and probably more constantly.

      I gotta congratulate you though, that’s an argument that to my knowledge was NOT levelled against photography when that was invented. I mean, like all the other arguments it’s bollocks, but at least it’s new! <pretty much every other argument against AI art was levelled at photography, and many of them at pre-mixed paints before that!>

    • daniskarma@lemmy.dbzer0.com · +6/-1 · 4 months ago (edited)

      Going for a hike, seeing a nice plant and saying: I wonder what this plant is. And most of the time getting a correct answer.

      If people are stupid enough to eat wild things based on any kind of unprofessional identification, it may just be proving that Darwin was onto something.

      • tobogganablaze@lemmus.org · +2/-1 · 4 months ago (edited)

        Right, so when you need an ID but don’t really care about whether it’s correct, AI is great I guess. Not sure what the point is though.

        But if you actually want a proper ID, I’d stay far away from AI and go to a community with experts.

        • daniskarma@lemmy.dbzer0.com · +3 · 4 months ago (edited)

          I have used it a lot while hiking and mostly got correct results for tree or small plant identification, enough to satisfy my curiosity. Good enough for me. I’m not calling the National Center for Botanics or hiring a professional botanist for 2000€/hour just to satisfy my curiosity while hiking.

          It has its use cases. I would trust it about as much as those old plant books for amateurs I used to have. I would also get incorrect identifications out of those due to my lack of expertise, more so than with the AI I use nowadays.

  • Binzy_Boi@piefed.social · +5 · 4 months ago

    Funny seeing this around now.

    Was looking into trying to find an AI to make stories from images, since I have to deal with the unfortunate reality that for a fandom I like, just about all the fanfic is unbelievably badly written to the point that an AI does a better job making interesting stories. I know they exist, just a question of where the ones that work are.

    A simple ask, you’d think, wanting to find shit that generates stories from images. Search engines hardly helped, so it was like, fine, I’ll ask an AI about AI. Surely it’ll help me find the tool I need, right?

    Somehow the results it gave me were worse than the search engine itself.

    • mm_maybe@sh.itjust.works · +3 · 4 months ago

      Isn’t GPT-4o (the multimodal model currently offered by OpenAI) supposed to be able to do things like this?

      Don’t get me wrong, I think you would be better served by taking this as a fun exercise to develop your imagination and writing skills. But since it’s fanfic and presumably for personal, non-commercial purposes I would consider what you want to do to be a fair and generally ethical use of the free version of ChatGPT…

  • Flying Squid@lemmy.world · +6/-2 · 4 months ago

    There is one good use for AI and we have found it on a forum I frequent.

    That use is putting Godzilla in ridiculous situations.

  • Uncle_Abbie@lemmy.today · +4 · 4 months ago

    I got my first AI telemarketer, and that sucked.

    I realized pretty quickly that it wasn’t a real person, but my elderly neighbor didn’t. She told me how bad she felt just hanging up on this “person,” but she just couldn’t get them off the line. (I told her I had the same experience, and I’ll warn her about AI Robots at a different time- I didn’t want to make her feel foolish.)

  • esc27@lemmy.world · +5/-2 · 4 months ago

    I’m still hoping for good customer support AI. If I’m going to be connected to someone who barely speaks English and is required to follow a prewritten script, or worse plays prerecorded messages to fake being fluent, I might as well talk to an AI, especially if it means shorter hold times.

    AI is a bad replacement for good customer service, but it could be an improvement over bad customer service.

    • brbposting@sh.itjust.works · +2 · 4 months ago

      Glad you posted this, b/c I now have a follow up to a previous comment where I shared this from Klarna (amongst other tidbits):

      So Klarna automated L1 support, did a good job at it, and saved money. Apparently they could’ve done it early without LLMs and saved even more money.

      Have you ever wanted L1 support? :)

      Guess even if not, it could still give reps more time to handle your queries, if they’re not telling people to click “forgot my password” when they write in saying “hey, I forgot my password”.

      • pingveno@lemmy.world · +2 · 4 months ago

        I just gave the chatbot that was put in place at the IT department where I work a poke. It answered my question perfectly: “How do I print from my laptop to the library?” And it’s not like the chatbot is the only route for support, but it does divert a lot of routine questions from our help desk so they can focus on questions that require a human touch. That could be people for whom a chatbot is not a good format, or it could be a non-routine question.