• 3 Posts
  • 229 Comments
Joined 1 year ago
Cake day: July 13th, 2023


  • All of big tech is really worried about this.

    • Apple is worried about its own science output, with many of its offices heavily employing data scientists. A lot of people slate Siri, but Apple’s scientists put out a lot of solid research.
    • Amazon is plugging GenAI into practically everything to appease their execs, because it’s the only way to get funding. Moonshot ideas are dead, and all that remains is layoffs, PIP, and pumping AI into shit where it doesn’t belong to make shareholders happy. The innovation died, and AI replaced it.
    • Google has let AI divisions take over both search and big parts of ads. Both are reporting worse experiences for users, but don’t worry, any engineer worth anything was laid off and there are no opportunities in other divisions for you either. If there are, they probably got offshored…
    • Meta is struggling a lot less, probably because they were smart enough to lay off in one go, but they’re still plugging AI shite into places no one asked for, with many divisions now severely down in headcount.

    If the AI boom is a dud, I can see many of these companies reducing their output further. If someone comes along and competes in their primary offering, there’s a real concern that they’ll lose ground in ways that were unthinkable mere years ago. Someone could legitimately challenge Google on search right now, and someone could build a cheap shop that doesn’t sell Chinese tat and uses local suppliers to compete with Amazon. Tech really shat the bed during the last economic downturn.


  • I remember joining the industry and switching our company over to full Continuous Integration and Deployment. Instead of uploading DLLs directly to prod via FTP, we could verify each build, deploy to each environment, and run some service tests to see if pages were loading, all the way up to prod - with rollback. I showed my manager, and he shrugged. He didn’t see the benefit when, in his eyes, all he needed to do was drag and drop, then load the page to make sure everything was fine.

    Unsurprisingly, I found out that this is how he builds websites to this day…
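
    For anyone curious, here’s a rough sketch in Python of the kind of pipeline I mean. It’s illustrative only - the environment names, deploy.sh script, test command, and health-check URLs are all made up, and a real setup would live in a CI system rather than a single script - but it shows the shape: verify the build, deploy environment by environment, smoke-test that pages load, and roll back if anything fails.

    ```python
    # Illustrative pipeline sketch: none of these names come from the original post.
    import subprocess
    import sys
    import urllib.request

    ENVIRONMENTS = ["dev", "staging", "prod"]  # hypothetical environment names
    HEALTH_URLS = {env: f"https://{env}.example.com/health" for env in ENVIRONMENTS}


    def verify_build() -> None:
        # Run the test suite; a non-zero exit code aborts the whole deploy.
        subprocess.run(["dotnet", "test"], check=True)


    def deploy_build(env: str, version: str) -> None:
        # Stand-in for whatever actually ships the artifact (rsync, container push, ...).
        subprocess.run(["./deploy.sh", env, version], check=True)


    def smoke_test(env: str) -> bool:
        # "Service test": just confirm the page loads and returns 200.
        try:
            with urllib.request.urlopen(HEALTH_URLS[env], timeout=10) as resp:
                return resp.status == 200
        except OSError:
            return False


    def main(new_version: str, previous_version: str) -> int:
        verify_build()
        for env in ENVIRONMENTS:
            deploy_build(env, new_version)
            if not smoke_test(env):
                # Roll back the broken environment instead of leaving it down.
                deploy_build(env, previous_version)
                print(f"{env} failed its smoke test, rolled back to {previous_version}")
                return 1
        return 0


    if __name__ == "__main__":
        sys.exit(main(*sys.argv[1:3]))
    ```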


  • I work in AI as a software engineer. Many of my peers have PhDs and have sunk a lot of time into research in their field. I probably know more than the average techie, but in the grand scheme of things I know fuck all. Hell, if you were to ask the scientists I work with whether they “know AI”, they’d probably just say “yeah, a little”.

    Working in AI has exposed me to so much bullshit, whether it’s job offers for obvious scams that’ll never work, or from “visionaries” at consultancies that know as little about AI as the next person but market themselves as AI experts. One guy had the fucking cheek to send me a message on LinkedIn saying “I see you work in AI, I’m hosting a webinar, maybe you’ll learn something”.

    Don’t get me wrong, there’s a lot of cool stuff out there, and some companies are doing legitimately cool work, but the actual use cases where these tools are more than productivity enhancers are few at best. I fully support this guy’s efforts to piledrive people, and will gladly lend him my sword.


  • It’s unpopular, but I don’t think the breed should be banned.

    Instead, strict licensing laws should be established to ensure that the person in charge of the dog’s care meets set requirements, such as:

    • Regular obedience check-ups to ensure that the dog poses no danger.
    • Visits to ensure that the dog is in a suitable household - no kids, people who can take care of it, etc.
    • Confirmation that the owner is aware of the risks and can take legal responsibility for owning an XL Bully.

    I’ve known people with XL Bullies, and those dogs seemed to me to be sweet dogs that loved playing, sleeping, and being a part of their family. IMO, all dogs are dangerous in their own way, because ultimately you have no idea of their mental state, and you should treat pets accordingly instead of projecting human attitudes/emotions onto them.


  • I work in AI.

    We’ve known this about LLMs for many years. One of the reasons they weren’t widely used was hallucination, where they’ll confidently state something incorrect. OpenAI created a great set of tools that showed true utility for LLMs, and people were able to largely accept that even if it’s wrong, it’s good for basic tasks like writing a doc outline or filling in boilerplate in scripts.

    Sadly, grifters have decided that LLMs are the future, and they’ve put them into applications where they have no more benefit than other, compositional models. While they’re great at orchestration, they’re just not suited to search, answering broad questions with limited knowledge, or voice-based search - all areas they’re being launched in. This doesn’t even scratch the surface of an LLM being used for critical subjects that require knowledge of health or the law, because the companies that decided AI will build their software, or run their HR departments, are going to be totally fucked when a big mistake happens.

    It’s an arms race that no one wants, and one that arguably hasn’t created anything worthwhile yet, outside of a wildly expensive tool that will save you some time. What’s even sadder is that I bet you could go to any of these big tech companies, ask ICs if this is a good use of their time, and they’d say no. Tens of thousands of jobs were lost, and many worthwhile projects were scrapped, so some billionaire cunts could enter an AI pissing contest.