And once again, we see the real mechanism by which terrorism “wins”. Israel has hurt itself in its confusion.
Basically a deer with a human face. Despite probably being some sort of magical nature spirit, his interests are primarily in technology and politics and science fiction.
Spent many years on Reddit and is now exploring new vistas in social media.
We’ve got LLMs now that can do that. Sorry, you’ve been replaced. Please gather your things into this box and cheer up.
Yeah, so many people are confidently stating “LLMs can’t think like humans do!” When we’re actually still pretty unclear on how humans think.
Sure, an LLM on its own may not be an AGI. But they’re remarkably closer than we would have predicted they could get just a few years ago, and it may well be that we just need to add a bit more “special sauce” (memory, prompting strategies, perhaps a couple of parallel LLMs that specialize in different types of reasoning) to get them over the hump. At this point a lot of the research isn’t going into simply “make it bigger!”, it’s going into “use LLMs smarter.”
The article opens:
When I first started colorizing photos back in 2015, some of the reactions I got were, well, pretty intense. I remember people sending me these long, passionate emails, accusing me of falsifying and manipulating history.
So this is hardly an AI-specific issue. It’s always been something to be on guard for. As others in this thread have pointed out, Stalin was airbrushing political rivals out of photos back in the 30s. Heck, damnatio memoriae goes back as far as history itself does. Ancient Pharaohs would have the names of their predecessors chiseled off of monuments so they could “claim” them as their own work.
you know that a company putting a thing in their terms of service doesn’t make it legally binding, right?
And you know that doesn’t necessarily imply the reverse? Granting a site a license to use the stuff you post there is a pretty basic and reasonable thing to agree to in exchange for them letting you post stuff there in the first place.
hence why they all suddenly felt the need to update their terms of services
As others have been pointing out to you in this thread, that also is not a sign that the previous ToS didn’t cover this. They’re just being clearer about what they can do.
Go ahead and refrain from using their services if you don’t agree to the terms under which they’re offering those services. Nobody’s forcing you.
Here’s an October 2023 survey from the Small Business and Entrepreneurship Council on the subject of AI adoption. It found very extensive usage for a wide range of needs.
If it lets a person doing job X do twice as much work, that’s effectively replacing a person in job X. There are now half as many of those jobs needed.
Rare-earth element is a specific technical term. Lithium is absolutely not among them.
One of the main sources lithium is extracted from is brines. That is, it’s already in the water and we take it out.
If we don’t have individual transportation how are we ever going to catch up to those goalposts?
You mean before or after all the sites updated their ToS so that they were legally in the clear to sell user posts to AI training companies?
The ToSes would generally have a blanket permission in them to license the data to third-party companies and whatnot. I went back through historical Reddit ToS versions a little while back and that was in there from the start.
Also in there was a clause allowing them to update their ToS, so even if the blanket permission wasn’t there then it is now and you agreed to that too.
Learning from things is very obviously a completely different process to feeding data into a server farm.
It is not very obviously different, as evidenced by the fact that it’s still being argued. There are some legal cases before the courts that will clarify this in various jurisdictions but I’m not expecting them to rule against analysis of public data.
The new jobs may come whether they “mean to” or not, though.
All that money that gets saved goes somewhere. Yes, “trickle-down” is a lie, simply feeding more money to already-rich people won’t mean much to the economy. But if AI makes it cheaper to run a company it can also make it cheaper to start and grow a company. It’s not just giant companies that will be making use of these tools.
Yeah, and as a programmer-person I’ve pondered where new programmers will come from once AIs replace all the interns.
There’s a potential solution, though. Have you ever sat down with an AI and used it as a “tutor” while learning new stuff? It no doubt varies from person to person since different people learn in different ways, but I’ve found it downright incredible how easy it is to learn when I’ve got an infinitely-patient AI I can ask to walk me through new stuff. So maybe in future lawyers and programmers and whatnot can just skip the larval stage.
I find that often “movements” end up focused more on just continuing their movement rather than the underlying purpose of why they started moving in the first place.
in the case of ai generated media, companies just decided that they just had the rights to use existing published media, so they harvested it without consent or compensation
Have you read the ToS of your favourite social media site lately?
In any event, it might well be that companies (and you yourself) have the rights to use existing published media to train AIs. Copyright doesn’t cover the analysis of public data. I suspect that people wouldn’t like it if copyright got extended to let IP owners prohibit you from learning from their stuff.
Did you read the article? It actually addresses much of what you talk about. For example:
“The promise of AI is a stake in human judgment and trying to automate some of it so that humans can focus on higher-order tasks that are much more fruitful,” he said.
The point is not to remove humans entirely. It’s to automate the stuff that can be automated so that the humans you do have can focus on the important stuff that can’t be automated. Human employees are expensive so you’ll want to use them wisely, not doing busy-work that a machine can handle.
I’ll be supporting the incoming “Not made with AI” products and businesses so hard from here on out to just take away whatever monetary resources I can from dipshits like this.
If you wish, but you’ll likely end up paying a hefty premium to do so. This is like insisting on only eating hand-churned butter or only wearing hand-stitched clothing - you can probably find niche providers that supply that but you’ve got to be pretty rich to pull that off as a lifestyle.
It’s also possible that you’ve inadvertently wandered into an asshole convention.
The abuse of power is instance-specific, fortunately. The whole point of all this is that there are multiple instances. Just ignore the ones that are run by tankies, those instances are theirs to wallow in if they want.
“Prompt engineering” is simply the skill of knowing how to correctly ask for the thing that you want. Given that this is something that is in rare supply even when interacting with other humans, I don’t see this going away until we’re well past AGI and into ASI.
You have misunderstood me. You said “Apple spent twenty years building the ecosystem Spotify and Epic want to exploit for free.” I’m pointing out that the amount of effort Apple put into building the ecosystem is immaterial to whether they’re doing illegal things with it.
Ah, it’s not even on by default.
So don’t turn it on.