Joined 1 year ago
Cake day: September 25th, 2023


  • So… FWIW I post often about having a painless NVIDIA experience, including playing Windows-only games, even VR games.

    I thought “Damn… how did I get so lucky?”, and yesterday while tinkering with partitions (as one does…) I decided to try a “speed run”: going from no system at all to a Windows-only VR game running on Linux.

    I started from the ~600 MB Debian 12 ISO, and ~1h later I was playing.

    I’m not saying everybody will have a perfect experience playing games on Linux with an NVIDIA GPU, but… mine was, again, pretty straightforward.

    I’d argue it’s even easier with Ubuntu by accepting the non-free repository, probably with the same result, ~1h from zero to playing, without even using the command line once.







  • I agree, but I don’t watch TV so I don’t bother. Yet… I still hate product placement, so I might be interested in such a solution. Anyway, here is how I would do it:

    • evaluate what exists, e.g. SponsorBlock, find the closest fit for my needs, try it, and ask in forums or repository issues whether modifications are possible
    • gather videos of the typically problematic content, say a few hours’ worth to start
    • annotate them, adding the timestamps and then the location of the content within the image
    • replace the problematic content with gradually more complex solutions, e.g. black fill, the average color of the area, denoising (quite compute-intensive)
    • honestly evaluate the result
    • consider the biggest remaining problem, e.g. a first pass only handles fixed content, so a machine-learning detector for that type of content could help
    • iterate, sharing my results back with the closest interested community
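The replacement step can be sketched in a few lines. This is a hedged example, assuming frames come in as NumPy RGB arrays and annotations as pixel rectangles; the `mask_region` helper is mine, not part of any existing tool:

```python
import numpy as np

def mask_region(frame, x, y, w, h, mode="average"):
    """Replace a rectangular region of an RGB frame (H, W, 3 uint8 array).

    mode="black" fills with black; mode="average" fills with the region's
    mean color, which is less jarring than a black box.
    """
    region = frame[y:y + h, x:x + w]
    if mode == "black":
        frame[y:y + h, x:x + w] = 0
    else:
        # Mean over height and width gives one color per channel.
        frame[y:y + h, x:x + w] = region.mean(axis=(0, 1)).astype(np.uint8)
    return frame
```

Black fill is the cheapest; the average-color fill is barely more work and far less noticeable, which matches the idea of escalating only when the simple option fails.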

    Honestly it’s a worthwhile endeavor, but be mindful that it’s an arms race. There are a LOT of smart people paid to put ads everywhere… but there are even more people, like you and me, eager to remove them. IMHO the key trick is, like SponsorBlock, to federate the efforts.


  • Right, and I mentioned CUDA earlier as one of the reasons for their success, so it’s definitely something important. Clients might be interested in e.g. Google TPUs, startups like Etched, Tenstorrent, Groq, or Cerebras Systems, or heck, even designing their own, but they are probably limited by their current stack relying on CUDA. I imagine, though, that if the backlog does persist, there will be abstraction libraries, at least for the most popular frameworks, e.g. TensorFlow, JAX, or PyTorch, simply because the cost of waiting is too high.

    Anyway, what I meant isn’t about hardware or software but rather ROI, namely when Goldman Sachs and others issue analyst reports saying that the promise itself isn’t up to par with actual usage by paying customers.
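To illustrate what such an abstraction layer buys, here is a toy sketch: user code calls one op, and a backend registry decides which implementation actually runs. All names are hypothetical, not any real framework’s API:

```python
# Toy sketch of hardware abstraction: frameworks expose one op (matmul)
# and dispatch to whichever accelerator backend is registered/available.
BACKENDS = {}

def register(name):
    """Decorator that records an implementation under a backend name."""
    def wrap(fn):
        BACKENDS[name] = fn
        return fn
    return wrap

@register("cpu")
def matmul_cpu(a, b):
    # Naive pure-Python fallback; a real backend would call BLAS,
    # CUDA, ROCm, or a TPU runtime instead.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul(a, b, backend="cpu"):
    """User-facing op: same call regardless of the hardware underneath."""
    return BACKENDS[backend](a, b)
```

Swapping NVIDIA for another vendor then means registering a new backend, not rewriting every model — which is exactly why such layers would loosen the CUDA lock-in.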




  • Stuff like LLMs or ConvNets (and the likes) can already be used to do some pretty amazing stuff that we could not do a decade ago, there is really no need to shit rainbows and puke glitter all over it.

    I’m shitting rainbows and puking glitter on a daily basis, BUT it’s not against AI as a field, nor against AI research; rather, it’s against:

    • catastrophism and fear, even eschatology, used as a marketing tactic
    • open systems and research that become closed
    • trying to lock down a market with legislation
    • people who use a model, especially a model they don’t even have, e.g. one behind a proprietary API, and claim to be an AI startup
    • C-level decisions that everything now must include AI
    • claims that this or that skill is soon to be replaced by AI, with actually no proof of it
    • meaningless test results with grand claims like “passing the bar exam” used as marketing tactics
    • claims that it scales, that it “just needs more data”, not for a 0.1% improvement but for radical change, e.g. emergent learning
    • for-profits (different from public research) scraping datasets without paying anything back to the actual creators
    • ignoring or lying about non-renewable resource consumption for both training and inference
    • relying on “free” or loss-leader strategies to dominate a market
    • claiming to do the work for the good of humanity, then signing an exclusive partnership with a corporation already fined for monopolistic practices

    I’m sure I’m forgetting a few, but basically none of those criticisms are technical. None of them is about the current progress being made. Rather, they are about business practices.


  • Their valuation is because there’s STILL a lineup a mile long for their flagship GPUs.

    Genuinely curious: how do you know where the valuation, any valuation, comes from?

    This is an interesting story, and it might be factually true, but as far as I know, unless someone has actually asked the biggest investors WHY they bet on a stock, nobody knows why a valuation is what it is. We might have guesses, and they might even be correct, but they also change.

    I mentioned it a few times here before, but my bet is: yes, what you mentioned, BUT also that the same investors do not know where else to put their money yet and thus simply can’t jump ship. They are stuck there, and it might again be because they initially thought the demand was so high that nobody else could fulfill it, but I believe that’s not correct anymore.


  • Unfortunately it’s part of the marketing. Thanks, OpenAI, for the “Oh no… we can’t share GPT2, too dangerous” routine, and then… here it is. Definitely interesting at the time, but not world-shattering. Same for GPT3… but through an exclusive partnership with Microsoft, all closed; rinse and repeat for GPT4. It’s a scare tactic to lock down what was initially open, both directly and by closing the door behind them through regulation, or at least trying to.


  • move on to the next […] eager to see what they come up with next.

    That’s a point I’m making in a lot of conversations lately: IMHO the bubble didn’t pop BECAUSE capital doesn’t know where to go next. Despite reports from big banks that there is a LOT of investment for not a lot of actual returns, people are still waiting to see where to put that money next. Until there is such a place, they believe it’s still more beneficial to keep the bet going.



  • there isn’t a single serious project written exclusively or mostly by an LLM? There isn’t a single library or remotely original application

    IMHO “original” here is the key. Finding yet another clone of a Web framework, ported from one language to another in order to push a basic CMS online slightly faster: I can imagine this. In fact, I’d even bet that LLMs, because they manipulate words in languages, and because code can be safely (even though not cheaply) tested within containers, could be an interesting solution for that.
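That “generate, then test in a sandbox” loop can be sketched like this, with a separate process standing in for a real container; the helper name is mine, not an existing tool:

```python
import os
import subprocess
import sys
import tempfile

def check_generated_code(code: str, test: str, timeout: float = 5.0) -> bool:
    """Run candidate (e.g. LLM-generated) code plus assertion-based tests
    in a separate Python process and report pass/fail.

    A real setup would use a container for isolation; the principle is the
    same: only code that passes the tests is kept.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + "\n\n" + test + "\n")
        path = f.name
    try:
        result = subprocess.run([sys.executable, path],
                                capture_output=True, timeout=timeout)
        return result.returncode == 0  # nonzero means a failed assertion
    except subprocess.TimeoutExpired:
        return False  # hung or runaway code counts as a failure
    finally:
        os.remove(path)
```

This is cheap to automate but, as noted, not free: every candidate costs a full process (or container) run, and the tests themselves still have to come from somewhere.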

    … but that is NOT really creating value for anyone, unless that person is technically very savvy and thus able to understand why a framework in one language rather than another creates new opportunities (say safety, performance, etc.). So… for somebody who is not that savvy, “just” relying on the numerous already existing open-source projects providing exactly the value they expect, there is no incentive to re-invent.

    For anything that is genuinely original, i.e. something that is not a port to another architecture, a translation to another language, or a slight optimization, but rather something that needs even just a bit of reasoning and evaluation against the value created, I’m very skeptical, all the more so with fewer resources poured in, EVEN with a radical drop in costs.