Suicide is more like not turning up to the exam and taking it later.
Probably a good starting place would be to take the three apps you need most, and just search the web for guides to running them on Linux. That’ll give you an indication of how much work you may (or may not) be in for.
e: also, if a guide says “just run this shell script”, there’s an even chance it’s not actually that simple.
For free? Surely there’s some way to get that shit out… Or at least, the panel.
Yeah, that’s why it’s the chaotic neutral solution… We would need to organise food, water, comms and energy supplies almost immediately. We’d have to work together, and that is chaotic.
I think a fun way to protest would be to stop the economy. Just all simultaneously stop working and spending, stay home and spend time with our families.
If they’re games, protondb (.com) will tell you how well you can expect them to run. For other stuff, it’s often a case of searching the web or trying and seeing. Wine takes some getting used to, you’ll probably have to get your hands dirty and do a little learning.
Hopefully the people elon’s persuading to do this work are better aligned.
It really depends on what kind of applications you’re talking about. There are still a number of things it can’t run (or can’t run well without a lot of meddling to get there) in the professional space, like CAD. Hopefully this will change over time.
For a lot of these products there are free alternatives available, but they often don’t cut the mustard and/or aren’t worth retraining for.
Another thing you should consider before choosing Linux is hardware support. This is often lacking in Linux. For example, your fancy tablet might work fine as a tablet, but if you want to configure anything about it you might need windows depending on the device.
The good news is, you can try it without worrying about harming your windows install, by doing it on, say, a usb stick or spare hdd. It’ll only cost you time and effort.
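If you’re curious what a usb-imaging tool (Rufus, balenaEtcher, dd, etc.) is actually doing when it makes that bootable stick, it’s basically a byte-for-byte copy of the ISO onto the raw device. Here’s a rough Python sketch of the idea - the ISO name and /dev/sdX device path are made-up placeholders, and pointing it at the wrong device will wipe the wrong disk, so in practice just use one of those tools:

```python
# Toy sketch of what an ISO-writing tool does: stream the image
# byte-for-byte onto the raw USB device. Paths below are placeholders -
# double-check the device before running anything like this for real.
import shutil

iso_path = "some-distro.iso"   # the installer image you downloaded
device_path = "/dev/sdX"       # the USB stick (needs root, and the right letter!)

with open(iso_path, "rb") as iso, open(device_path, "wb") as dev:
    shutil.copyfileobj(iso, dev, length=4 * 1024 * 1024)  # copy in 4 MiB chunks
```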
Community support is a thing, and it’s not the lack of support that’s to blame here - have you ever used Microsoft support? If anything, Linux support is more accessible.
A lot of the blame here lies with Microsoft’s clever marketing campaign of providing windows to educational institutions - with support - for far below cost, back in the early days when pc adoption was on the rise.
Distribution saturation is a barrier to entry and to focused support, and Linux is sometimes more complicated to install and repair. Sometimes it’s easier to repair, because windows is too busy trying to hide its internals from you.
It’s usually easier to support a remote IT-illiterate person using Linux, by comparison to windows, today.
e: I guess to be fair, if you factored in community support for windows, your options open up quite a lot. I was more thinking about my own interactions with their support. But enterprise support/problems are not the same as personal ones.
Let’s not argue about the potential of “any human-machine interface”, because nobody knows how far that can go. We have an idea, but there’s still way too much we don’t understand.
You’re right, humans never have and never will alone. It’s a long shot, and as I said it’s pretty unlikely, because the models will just get better at compensating. But I imagine if people were interacting with llms regularly - vocally - they would soon get tired of extended conversations to get what they want, and the repeated practice of forming those questions for an llm might in turn be reflected in their human interactions.
I’m going to take the time to illustrate here how I can see LLMs affecting human speech, through existing applications and technologies that are (or could be) made both available and popular enough to achieve this. We’re far enough down the comment chain I can reply to myself now right?
So, we can all agree that people are increasingly using LLMs in the form of chatgpt and the like, to acquire knowledge/information. The same way as they would use a search engine to follow a link to that knowledge.
Speech-to-text has been a thing for at least 3 decades (yeah it was pretty hopeless once, but not so much now). So let’s not argue about speech vs text. People already talk to Google and siri and whoever else to this end, llms included, and have their responses read out via tts.
I remember being blown away watching a blind sysadmin interacting with a Linux shell via tts, back in 1998, at rates where I couldn’t even make out the words. How far we’ve come. I digress, so.
We’ve all experienced trouble getting the information we’re looking for even with all these tools. Because there’s so much information, and it can be very difficult to find the needle in the haystack. So we constantly have to refine our queries either to be more specific, or exclude relationships to other information.
This, in turn, causes us to think more often about the words we’re using to get the results we want, because otherwise we spend too much time re-querying.
In turn, the more we do this, and are trained to do this, the more it will bleed into human communication.
Now look, there is absolutely a lot of hopium smoking going on here, but damn, this could have an everlasting impact on verbal communication. If technology can train people - through inaccurate/incorrect results - to think about the communication going out when they speak, we could drastically reduce the amount of miscommunication between people by that alone.
Imagine:
get me a chair
wheels out an office chair from the study
no I meant a chair for at the kitchen table
Vs
get me a chair for at the kitchen table
You can apply the same thing to human prompted image generation and video generation.
Now… We don’t need llms to do this, or to know this. But we are never going to achieve this without a third party - the “llm”, and whatever it’s plugged into - because the human recipient will usually be more capable of translating these variances, or will draw on other context that isn’t accessible via a single channel like speech or text.
But if machines train us to communicate out better (more accurately, precisely and/or concisely), that is an effect I can’t welcome enough.
Realistically, the machines will learn to deal with us being dumb, before we adapt.
e: formatting.
This is an interesting and thought-provoking discussion, ty.
You’re absolutely right, I was looking for the dead end - plugging LLM into a solution.
I’m more thinking that LLMs used in conjunction with other tech will have these effects on how we communicate. LLMs, or whatever replaces them to do that interpretation, are necessary to facilitate that.
When we come up with something better, something that does the same job better, then of course LLMs will be redundant. If that happens, great.
We are already seeing a boom in the popularity of LLMs outside of professional use. Global ubiquity for anything is never going to happen unless we can fix communication, which we probably can’t. We certainly can’t alone. It’s very much a chicken and egg problem, one that we can only gain from by progressing towards.
Imagining vocalising using programming languages gave me a chuckle. I have been known to do things like use s/x/y/ to correct myself in written chats though.
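For anyone unfamiliar, the s/x/y/ notation is borrowed from sed/vim substitution syntax - “replace x with y in what I just said”. A toy sketch of applying such a correction, with made-up example messages:

```python
import re

previous_message = "I'll push the fix to the stagging server tonight"
correction = "s/stagging/staging/"

# split the s/old/new/ pattern into its parts and apply it to the earlier message
_, old, new, _ = correction.split("/")
print(re.sub(old, new, previous_message))
# -> I'll push the fix to the staging server tonight
```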
Programming languages allow us to talk to and listen to machines. LLMs will hopefully allow machines to listen and talk to/between us.
But to go back to OP’s original question - how will LLMs affect spoken language - they won’t.
That’s a rather closed-minded conclusion. It makes it sound like you don’t think they stand a chance.
LLMs have the potential to pave the way to aligning spoken language, perhaps even evolving human communication to a point where speech is an occasional thing because it’s really inefficient.
So I feel like we agree here. LLMs are a step towards solving a low level human problem, I just don’t see that as a dead end… If we don’t take the steps, we’re still in the oceans. We’re also learning a lot in the process ourselves, and that experience will carry on.
I appreciate your analogy, I am well aware LLMs are just clever recursive conditional queries with big semi self-updating datasets.
Regardless of whether or not something replaces LLMs in the future, the data, and the processing that’s gone into that data, will likely be used along with the lessons we’re learning now. I think they’re a solid investment from any angle.
Do you actually believe this?
LLMs are the opposite of a dead end. More like the opening of a pipe. It’s not that they will burn out, it’s just that they’ll reach a point where they’re just one function of a more complete AI, perhaps.
At the very least they tackle a very difficult problem: communication between human and machine. That is their purpose. We have to tell machines what to do, when to do it, and how to do it, with such precision that there is no room for error. LLMs are not tools to prove truth, or anything like that.
If you ask an LLM a question, and it gives you a response that indicates it has understood your question correctly, and you are able to understand its response that far, then the LLM has done its job, regardless of whether the answer is correct.
Validating the facts of the response is another function again, which would employ LLMs as a translation tool.
It’s not a long leap from there to a language translation tool between humans, where an AI is an interpreter. deepl on roids.
I don’t need steam to install your app on my pc, unless you choose to make it that way.
Capitalists care about their own capital, not future society generations from now.
Can’t we sensationalise the same way they would…
Capitalists care about their own capital, not their children.
How did that work out? We used sles in the past (moved to rhel6). Management of larger environments has been easier with rhel, but we’ve slowly been decoupling from redhat-isms. Satellite is just doing drm - the only thing that gives us grief - and repos now.
Have you tried writing to them? This helped my partner and me. Tell them how you feel, your worries, what you want and why. Give them as much time as they need to process it and respond.