edited from talent to job

    • BougieBirdie@lemmy.blahaj.zone · 29 days ago

      CEO is usually my answer as well when people ask

      Like, honestly too. The humans running the show are outrageously expensive, cause huge ecological harm, make their decisions based on vibes with no understanding of their domain, and their purposes are inscrutable to the average worker. They’re honestly the perfect target for AI because they already behave like AI.

      I don’t think I actually want to live in a world where AI is running the show, but I’m not sure it’d be any worse than the current system of letting the most parasitic bloodsucking class of human being call the shots. Maybe we ought to try something else first.

      But make sure to tell the board of directors and shareholders how much more profitable they’d be if they didn’t have to buy golden parachutes

    • JackbyDev@programming.dev · 28 days ago

      My greatest fear is that we’ll get the robots (like the general-purpose robots in Animatrix: Second Renaissance or I, Robot) before we have any sort of progressive change or revolution. That we’ll be one step away from a truly carefree life.

  • spicy pancake@lemmy.zip · 29 days ago

    Perhaps it’s not possible to fully replace all humans in the process, but harmful content filtering seems like something where taking the burden off humans could do more good than harm if implemented correctly (big caveat, I know.)

    Here’s an article detailing a few people’s experiences with the job and just how traumatic it was for them to be exposed to the graphic and disturbing content on Facebook that requires moderator intervention.
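
    A minimal sketch of what such a first-pass filter could look like, assuming the Hugging Face transformers library; the model name, label handling, and thresholds are illustrative assumptions, not anything from the comment or the article:

    ```python
    # Sketch: auto-handle the confident cases so only borderline posts
    # ever reach a human moderator.
    from transformers import pipeline

    # "unitary/toxic-bert" is one publicly available toxicity classifier;
    # any comparable model could be swapped in.
    classifier = pipeline("text-classification", model="unitary/toxic-bert")

    def route(post: str, auto_remove: float = 0.95, auto_allow: float = 0.10) -> str:
        result = classifier(post)[0]  # e.g. {"label": "toxic", "score": 0.97}
        # Label names depend on the model chosen; this is a crude normalization.
        score = result["score"] if result["label"] == "toxic" else 1 - result["score"]
        if score >= auto_remove:
            return "removed"        # a human never has to see it
        if score <= auto_allow:
            return "published"
        return "human_review"       # only ambiguous content reaches a person

    for post in ["have a lovely day", "some borderline post ..."]:
        print(route(post))
    ```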

  • xylogx@lemmy.world · 29 days ago

    The question of which jobs should be replaced by AI depends on societal values, priorities, and the potential impact on workers. Generally, jobs most suited for replacement by AI involve repetitive, high-volume tasks, or those where automation can improve safety, efficiency, or precision. Here are some categories often discussed:

    Repetitive and Routine Tasks

    • Manufacturing and assembly line work: Machines can perform repetitive tasks with greater efficiency and precision.

    • Data entry and processing: AI can automate mundane tasks like updating databases or processing forms.

    • Basic customer service: Chatbots and virtual assistants can handle frequently asked questions and routine inquiries.

    High-Risk Roles

    • Dangerous jobs in mining or construction: Robots can reduce human exposure to hazardous environments.

    • Driving in risky environments: Self-driving vehicles could improve safety for delivery drivers or long-haul truckers in hazardous conditions.

    Analytical and Predictable Roles

    • Basic accounting and bookkeeping: AI can handle invoicing, payroll, and tax calculations with high accuracy.

    • Legal document review: AI can analyze contracts and identify discrepancies more quickly than humans.

    • Radiology and diagnostics: AI is becoming adept at reading medical scans and assisting in diagnoses.

    Jobs With High Inefficiencies

    • Warehouse operations: Inventory sorting and retrieval can be automated for faster fulfillment.

    • Food service (e.g., fast food preparation): Robotic systems can prepare meals consistently and efficiently.

    • Retail checkout: Self-checkout systems and AI-powered kiosks can streamline purchases.

    Considerations for Replacement

    1. Human Impact: Automation should ideally target roles where job transitions can be supported with retraining and upskilling.

    2. Creativity and Emotional Intelligence: Jobs requiring complex human interaction, creativity, or emotional intelligence (e.g., teaching, counseling) are less suitable for AI replacement.

    3. Ethical Concerns: Some jobs, like judges or certain healthcare roles, involve moral decision-making where human judgment is irreplaceable.

    Instead of framing it as total “replacement,” many advocate for AI to augment human workers, enabling them to focus on higher-value tasks while reducing drudgery.

    Generated by ChatGPT

    • Artyom@lemm.ee · 29 days ago

      Replacing politicians with AIs actually sounds really cool. Instead of voting, you write an essay on the things you value. An AI reads all of its voting base’s essays and votes in a way that predominantly aligns with its voters’ ideals. This isn’t direct or indirect democracy; it’s a totally new approach driven by mathematical averages.

      Politicians shouldn’t negotiate to get something passed. If the senate of AIs doesn’t like a bill, it’s unpassable; you just have to write a new one. No tit for tat, no lobbying, no friends protecting friends. The only people in politics are the ones who write bills, and they can check in a few minutes on a server whether their bill would pass (since the voting is reproducible, that check is the actual vote), then decide whether they have to revise it.
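
      A rough sketch of the aggregation step being described, to make the “reproducible vote” point concrete. The alignment() function below is a hypothetical stand-in (here just a toy word-overlap score) for whatever model would actually judge how well a bill matches one voter’s essay; everything downstream of it is deterministic, which is what lets a trial run on a server equal the real vote:

      ```python
      # Sketch of "vote by averaging the voter base's ideals".
      from statistics import mean

      def alignment(essay: str, bill: str) -> float:
          """Hypothetical stand-in for a real model: toy word-overlap score in [0, 1]."""
          essay_words = set(essay.lower().split())
          bill_words = set(bill.lower().split())
          return len(essay_words & bill_words) / max(len(bill_words), 1)

      def ai_senator_vote(essays: list[str], bill: str, threshold: float = 0.5) -> str:
          # Deterministic: the same essays and bill text always give the same result,
          # so a bill's author can run this on a server and see the "actual" vote.
          scores = [alignment(essay, bill) for essay in essays]
          return "yea" if mean(scores) > threshold else "nay"

      essays = ["fund public housing and transit", "lower the cost of housing"]
      print(ai_senator_vote(essays, "fund public housing"))
      ```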

    • Numuruzero@lemmy.dbzer0.com · 28 days ago

      I get what you’re going for but I have a hard time imagining this as a good thing so long as companies are profit driven.

  • leaky_shower_thought@feddit.nl · 29 days ago

    ai as in AI: aircraft auto-landing and pitch levelling. near-boundary ship navigation. train/freight logistics. protein folding. gene mapping.

    ai as in LLM/PISS: hmmm… downlevel legalese to collegiate-, 6th-grade-, or even street-level prose. do funny abridged shorts. imo, training wheels for some shakespearean writing is appreciated.

  • AA5B@lemmy.world · 29 days ago (edited)

    None. The current ones with internet content, reporting, and call centers are already making things worse. Just no.

    It can definitely be a useful tool though, as long as you understand its limitations. My kids’ school had them feed an outline to ChatGPT and correct the result. Excellent.

    • Consultants generate lots of reports that AI can help with
    • I find AI useful for summarizing lower-priority chat threads (rough sketch after this list)
    • A buddy of mine uses it as a first draft to summarize his team’s statuses
    • I’m torn on code solutions. Sometimes it’s really nice, but you can’t forward a link. More importantly, the people who need it most are the least likely to notice where it hallucinates. Boilerplate works a little better
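
    A minimal sketch of the chat-thread summary idea from the list above, using the OpenAI Python client as one example backend; the model name and prompt are arbitrary choices, a locally hosted model could slot in the same way, and it assumes OPENAI_API_KEY is set in the environment:

    ```python
    # Sketch: summarize a low-priority chat thread instead of reading all of it.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def summarize_thread(messages: list[str]) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # example model, not a recommendation
            messages=[
                {"role": "system",
                 "content": "Summarize this chat thread in three bullet points "
                            "and flag anything that needs a reply."},
                {"role": "user", "content": "\n".join(messages)},
            ],
        )
        return response.choices[0].message.content

    print(summarize_thread([
        "alice: deploy slipped to Tuesday",
        "bob: ok, can someone update the ticket?",
    ]))
    ```
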
  • Kaboom@reddthat.com · 29 days ago

    None. Maybe some middle management, but even then, until AI fixes the hallucinations for good, it’s useless.

  • s08nlql9@lemm.ee · 29 days ago

    I think I read some posts, on Hacker News and the like, from people who already use AI as a therapist. I have good conversations with ChatGPT when I ask for personal advice. I haven’t tried talking to a real therapist yet, but I can see AI being used for this purpose. The service may still be provided by big companies, or we could host it ourselves, but it could (hopefully) be cheaper than paying a real person.

    Don’t get me wrong, I’m not against real physicians in this field, but some people just can’t afford mental healthcare when they need it.