Companies are going all-in on artificial intelligence right now, investing millions or even billions into the area while slapping the AI initialism on their products, even when doing so seems strange and pointless.
Heavy investment and increasingly powerful hardware tend to mean more expensive products. To discover if people would be willing to pay extra for hardware with AI capabilities, the question was asked on the TechPowerUp forums.
The results show that over 22,000 people, a massive 84% of the overall vote, said no, they would not pay more. More than 2,200 participants said they didn’t know, while just under 2,000 voters said yes.
That’s kind of abstract. Like, nobody pays purely for hardware. They pay for the ability to run software.
The real question is, would you pay $N to run software package X?
Like, go back to 2000. If I say “would you pay $N for a parallel matrix math processing card”, most people are going to say “no”. If I say “would you pay $N to play Quake 2 at resolution X and fps Y and with nice smooth textures,” then it’s another story.
Yup, the answer is going to change real fast when the next Oblivion with NPCs you can talk to needs this kind of hardware to run.
I’m still not sold that dynamic text generation is going to be the major near-term application for LLMs, much less in games. Like, don’t get me wrong, it’s impressive what they’ve done. But I’ve also found it to be the least practically useful of the LLM model categories. Like, you can make real, honest-to-God solid usable graphics with Stable Diffusion. You can do pretty impressive speech generation in TortoiseTTS. I imagine that someone will make a locally-runnable music LLM model and software at some point if they haven’t yet; I’m pretty impressed with what the online services do there. I think that there are a lot of neat applications for image recognition; the other day I wanted to identify a tree and seedpod. Nobody has built software to do that yet (that I’m aware of), but I’m sure that they will; the ability to map images back to text is pretty impressive. I’m also amazed by the AI image upscaling that Stable Diffusion can do, and I suspect that there’s still room for a lot of improvement there, as that’s not the main goal of Stable Diffusion. And once someone has done a good job of building a bunch of annotated 3D models, I think that there’s a whole new world of 3D.
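For anyone curious what “mapping images back to text” looks like locally today, here’s a minimal sketch using the Hugging Face transformers image-to-text pipeline with a BLIP captioning checkpoint as one example setup. The model choice and the file name are purely illustrative, and this is generic captioning, not the tree-identification tool wished for above:

```python
# Minimal sketch: local image captioning ("image back to text").
# Assumes the Hugging Face transformers library; the BLIP checkpoint is just
# one example of a small model that runs locally, not a specific recommendation.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-base")

# "seedpod.jpg" is a placeholder path to whatever photo you want described.
result = captioner("seedpod.jpg")
print(result[0]["generated_text"])  # e.g. "a close up of a seed pod on a branch"
```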
I will bet that before we see that becoming the norm in games, we’ll see either pre-generated speech synth or in-game speech synthesis, so that the characters say text that might be procedurally generated (not just static pre-recorded samples) but isn’t necessarily the output of an LLM. Like, it’s not practical to have a human voice actor cover, as static recorded speech, every phrase one might want an in-game character to speak.
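As a rough illustration of that idea, here’s a minimal sketch of speaking a procedurally assembled line through an offline TTS engine; pyttsx3 is just a stand-in for whatever synthesizer a game would actually embed, and the template, names, and items are made up:

```python
# Minimal sketch: speaking procedurally assembled dialogue offline,
# instead of shipping a static recording for every possible phrase.
# pyttsx3 is only a stand-in for whatever speech engine a game would embed.
import pyttsx3

def npc_line(player_name: str, quest_item: str, town: str) -> str:
    # The text can come from templates, game state, or (eventually) a language model.
    return f"Ah, {player_name}. Bring me the {quest_item} from {town} and we have a deal."

engine = pyttsx3.init()
engine.setProperty("rate", 160)          # speaking speed, words per minute
engine.say(npc_line("Dovahkiin", "amber shard", "Riverwood"))
engine.runAndWait()                      # blocks until the line has been spoken
```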
I think it’s coming pretty fast. There’s already a mod for Skyrim that lets you talk to your companion. People are spending hours talking to LLMs and roleplaying; the first triple-A game to incorporate it is going to be a massive hit, imo. I’m actually surprised no one’s come out with visual novels using them, it seems like a perfect use case.
It’s definitely going to be used first for making the content of the game like you said though.
there are some local genai music models, although I don’t know how good they are yet as I haven’t tried any myself (stable audio is one, but I’m sure there are others)
also minor linguistic nitpick, but the LM in LLM stands for ‘language model’ (you could maybe get away with it for PixArt and SD3, as they use T5, which is an LLM, for prompt encoding; I’m sure some audio models with lyrics use them too). The term you’re looking for is probably ‘generative’.
They want you to buy the hardware and pay for the additional energy costs so they can deliver clippy 2.0, the watching-you-wank-edition.
If you unbend him, clippy could be very useful 🍆📎
Well, NPUs are not on par with modern GPUs. A general-purpose GPU has more raw power than most NPUs, but when you look at the electricity cost, you see that NPUs are way more efficient at AI tasks (which are not only chatbots).
I would pay for a power-efficient AI expansion card, so I can self-host AI services easily without needing a 3000€ GPU that consumes 10 times more than the rest of my PC.
I would consider it a reason to upgrade my phone a year earlier than otherwise. I don’t know which AI features will stick as useful, but most likely I’ll use them from my phone, and I want there to be at least a chance of on-device AI rather than “all your data are belong to us” AI.
this goes to show just how far the current grift has gone.
AI enhanced hardware? Jesus Fuck take all my money that’s amazing.
Dedicated LLM chatbot hardware? Die in a fire for even suggesting this is AI.
Remember when the IoT was very new? There were similar grumblings of “Why would I want to talk to my refrigerator?” And now more and more things are just IoT-connected for no reason.
I suspect AI will follow a similar path into the consumer mainstream.
IoT became very valuable, for them at least, as data collecting devices.
Can’t help but think of it as a scheme to steal the consumers’ compute time and offload AI training to their hardware…
People don’t want the hardware if the software sucks.
Why would I need a GPU if the only games that exist to play on it are the equivalent of WildTangent malware games?
If AI matures into something that people actually like, you’ll get a different answer here.
There’s really no point unless you work in specific fields that benefit from AI.
Meanwhile every large corpo tries to shove AI into every possible place they can. They’d introduce ChatGPT to your toilet seat if they could
Someone did a demo recently of AI acceleration for 3D upscaling (think DLSS/AMD’s equivalent) and it showed a nice boost in performance. It could be useful in the future.
I think it’s kind of like ray tracing. We don’t have a real use for it now, but eventually someone will figure out something that it’s actually good for and use it.
AI acceleration for 3d upscaling
Isn’t that not only similar to, but exactly what DLSS already is? A neural network that upscales games?
But instead of relying on the GPU to power it, the dedicated AI chip did the work. Like, it had its own distinct chip on the graphics card that would handle the upscaling.
I forget who demoed it, and searching for anything related to “AI” and “upscaling” gets buried with just what they’re already doing.
That’s already the nvidia approach, upscaling runs on the tensor cores.
And no, it’s not something magical, it’s just matrix math. AI workloads are lots of convolutions on gigantic, low-precision, floating-point matrices. Low precision, because neural networks are robust against random perturbation, and more rounding is exactly that: random perturbation. There’s no point in spending electricity and heat on high precision if it doesn’t make the output any better.
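A quick way to see the “rounding is just a small perturbation” point is to run the same matrix product at two precisions and compare. This numpy sketch is illustrative only, not a benchmark of any particular hardware:

```python
# Quick numerical illustration: the same matrix product in float32 vs. float16
# differs only by a small relative error, which is exactly the kind of
# perturbation a trained network shrugs off.
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal((512, 512), dtype=np.float32)
b = rng.standard_normal((512, 512), dtype=np.float32)

full = a @ b                                                      # float32 reference
half = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

rel_err = np.abs(full - half).max() / np.abs(full).max()
print(f"max relative error from float16 rounding: {rel_err:.4%}")
```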
The kicker? Those tensor cores are less complicated than ordinary GPU cores. For general-purpose hardware, and that also includes consumer-grade GPUs, it’s way more sensible to make sure the ALUs can deal with 8-bit floats and leave everything else the same. That stuff is going to be standard by the next generation of even potatoes: every SoC with an included GPU has enough oomph to sensibly run reasonable inference loads. And by “reasonable” I mean actually quite big; as far as I’m aware, e.g. Firefox’s built-in translation runs on the CPU, the models are small enough.
Nvidia, OTOH, is very much in the market for AI accelerators and figured it could corner the upscaling market and sell another new generation of cards by making its software rely on those cores, even though it could run on the other cores. As AMD demonstrated, their stuff also runs on Nvidia hardware.
What’s actually special sauce in that area are the RT cores, that is, accelerators for casting rays through BVHs (bounding volume hierarchies). That’s indeed specialised hardware, but those things are nowhere near fast enough to compute enough rays for even remotely tolerable output, which is where all that upscaling/denoising comes into play.
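For a sense of the operation that hardware accelerates, here’s the classic slab test for a ray against an axis-aligned bounding box, written out in plain Python purely as an illustration; real RT units perform vast numbers of box and triangle tests like this in fixed-function silicon while walking the acceleration structure, nothing resembling this code:

```python
# The classic "slab" test: does a ray hit an axis-aligned bounding box?
# This is the kind of per-node check done over and over while walking an
# acceleration structure; shown here only to illustrate the operation.
def ray_hits_aabb(origin, direction, box_min, box_max):
    t_near, t_far = float("-inf"), float("inf")
    for o, d, lo, hi in zip(origin, direction, box_min, box_max):
        if d == 0.0:
            if o < lo or o > hi:           # parallel to this slab and outside it
                return False
            continue
        t0, t1 = (lo - o) / d, (hi - o) / d
        if t0 > t1:
            t0, t1 = t1, t0
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far or t_far < 0.0:  # slabs don't overlap, or box is behind us
            return False
    return True

# A ray from the origin pointing into a unit-sized box ahead of it:
print(ray_hits_aabb((0, 0, 0), (0, 0, 1), (-0.5, -0.5, 2.0), (0.5, 0.5, 3.0)))  # True
```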
Nvidia’s tensor cores are inside the GPU; this was outside the GPU, but on the same card (the PCB looked like an abomination). If I remember right, in total it used slightly less power but performed about 30% faster than normal DLSS.
Found it.
I can’t find a picture of the PCB though; that might have been a pre-reveal leak, and now that it’s revealed, good luck finding it.
Having to send full frames off the GPU for extra processing has got to come with some extra latency/problems compared to just doing it on the GPU… and I’d be shocked if they have motion vectors and other engine data that DLSS has, which would require games to be specifically modified for this adaptation. IDK, but I don’t think we have enough details about this to really judge whether it’s useful or not, although I’m leaning on the side of ‘not’ for this particular implementation. They never showed any actual comparisons to DLSS either.
As a side note, I found this other article on the same topic where they obviously didn’t know what they were talking about and mixed up frame rates and power consumption; it’s very entertaining to read:
The NPU was able to lower the frame rate in Cyberpunk from 263.2 to 205.3, saving 22% on power consumption, and probably making fan noise less noticeable. In Final Fantasy, frame rates dropped from 338.6 to 262.9, resulting in a power saving of 22.4% according to PowerColor’s display. Power consumption also dropped considerably, as it shows Final Fantasy consuming 338W without the NPU, and 261W with it enabled.
I’ve been trying to find some better/original sources [1] [2] [3], and from what I can gather it’s even worse. It’s not even an upscaler of any kind; it apparently uses an NPU just to control clocks and fan speeds to reduce power draw, dropping FPS by ~10% in the process.
So yeah, I’m not really sure why they needed an NPU to figure out that running a GPU at its limit has always been wildly inefficient. Outside of getting that investor money of course.
“Shits are frequently classified into three basic types…” and then gives 5 paragraphs of bland guff
With how much scraping of reddit they do, there’s no way it doesn’t try ordering a poop knife off of Amazon for you.
It’s seven types, actually, and it’s called the Bristol scale, after the Bristol Royal Infirmary where it was developed.
Which would be appropriate, because with AI, there’s nothing but shit in it.
One of our helpdesk told me about his amazing idea for our software the other day.
“We should integrate AI into it…”
“Right? And have it do what?”
“Uh, I don’t know”
This from the same man who came up with an idea for orange juice pumped directly into your home, and you pay with crypto.
And the scary thing is, I can imagine these things coming out of the mouths of people in actual positions of power, where laughing at them might actually get people fired…
This fucker seriously proposed brawndo on tap. Except perishable. Jesus fucking Christ.
who came up with an idea for orange juice pumped directly into your home
That may not be as cool, but a pneumatic city-wide mail system would be cool. Too expensive and hard to maintain, not even talking about the pests and bacteria that would live in there, but imagine ordering a milkshake with some fries and in 10 minutes hearing a “thump”, opening that little door in the wall of your apartment and seeing a package there (it’ll be a mess inside, though).
orange juice pumped directly into your home, and you pay with crypto
[Furiously taking notes]
30% of people will believe literally anything. 16% means even half of the deranged people aren’t interested.
Interested or not, more hardware is going to be “AI-enhanced” and believe it or not, it’s going to cost more.
This is our future.
84% said no.
16% punched the person asking them for suggesting such a practice. So they also said no. With their fist.
I agree that we shouldn’t jump immediately to AI-enhancing everything. However, this survey is riddled with problems, from selection bias to threats to external validity. Heck, even internal validity is a problem here! How does the survey account for social desirability bias, the sunk cost fallacy, and anchoring bias? I’m so sorry if this sounds brutal or unfair, but I just hope to see fewer validity threats. I think I’d be less frustrated if the title could be something like “TechPowerUp survey shows 84% of 22,000 respondents don’t want AI-enhanced hardware”.
No, but I would pay good money for a freely programmable FPGA coprocessor.
If the AI chip is implemented as one, and is useful for other things, I’m sold.
I would pay extra to make sure that there is no AI anywhere near my hardware.
I have no clue why anybody thought I would pay more for hardware if it goes along with some stupid trend that will blow up in our faces sooner or later.
I don’t get the AI hype. I see a lot of companies very excited, but I don’t believe it can deliver even 30% of what people seem to think.
So no, definitely not paying extra. If I can, I will buy stuff without AI bullshit. And if I cannot, I will simply not upgrade for a couple of years since my current hardware is fine.
In a couple of years, either the bubble will have burst, or they really will have put in the work to make AI do the things they claim it will.