• 0 Posts
  • 620 Comments
Joined 1 year ago
Cake day: June 11th, 2023





  • Disruption!

    It actually just means to undercut an existing industry with venture capital, taking on a loss until the existing competition is priced out of the market. Then, once a monopoly is established, tear down quality of service, hike up prices, shaft customers, and use the money to pay huge bonuses to the executives. If the company is still profitable afterwards, just recreate the same old industry and competitors, but with an iron-grip monopoly. If it is not profitable, just sell the company and distribute the dividends amongst the C-suite. Rinse and repeat.




  • Humans will immediately adopt anything they can carry with them. But humans have a very strong aversion to adopting anything they have to wear, or in general have permanently on them. It is uncomfortable, it is hot, it is annoying, it is visible, it is a wall between them and the world. There are people who don’t wear their corrective glasses because they don’t like having something on their faces. There are people who can’t even tolerate contact lenses. There are deaf people who refuse to use hearing implants. Wristwatches are tolerated because they are more peripheral and easier to remove.

    This is a far more fundamental flaw in the concept of VR than technology, applications, software availability, etc. You could make VR as tiny and practical as contact lenses and people would still refuse to adopt it.


  • There are dozens of amazing games

    …and 99% of them are tech demos.

    Compare that to an industry that publishes over 10 thousand games every year on Steam alone. Then you start to understand how VR is just a niche hobbyist toy, not a mainstream product. Making VR experiences is several times harder while also aiming at a minuscule market. VR today is perhaps on par with where general computing and gaming were in the 70s: neat concept, not enough use cases and product development, still way too cumbersome and expensive.




  • I won’t say exactly where I work, because it is a sensitive topic. But the best part of my job is that I get to facilitate access to help for people after they have suffered some of the worst and most horrible experiences that humans can go through. The worst part of my work, interestingly, is not having to listen to the most disheartening stories and life experiences, the kind that really challenge my faith in humanity. Although that is heavy on the soul and tiring on my emotions, the actual worst part of my job is having to inform a lot of people asking for help, who don’t fit the selection criteria for the help programs, that they will not be receiving help from my organization. I do try to get them in touch with others who can sometimes help them, but in general, more people are always turned down than accepted. There’s too much need in the world and too few people helping, but we are here helping.


  • Turing never said anything of the sort, “this is a test for intelligence”. Intelligence and thinking are not the same. Humans have plenty of unintelligent behaviors, which has no bearing on their ability to think. And plenty of animals display intelligent behavior, but that is not evidence of their ability to think. Really, if you know nothing about epistemology, just shut up. Nobody likes your stupid LLMs, the marketing is tiring already, and the copyright infringement, rampant privacy violations, property theft, and insatiable power hunger are not worth it.





  • The Turing test isn’t actually meant to be a scientific or accurate test. It was proposed as a thought experiment to demonstrate a philosophical argument, mainly support for the machine input-output paradigm and the black-box construct. It wasn’t meant to say anything about humans either. Running this kind of experiment without any sort of self-awareness is just proof that epistemology is a weak topic in computer science academia.

    Especially when, from psychology, we know that there’s so much more complexity riding on such tests. To name one example, we know expectations alter perception. A Turing test suffers from a loaded-question problem. If you prompt a person telling them they’ll talk with a human, telling them they’ll talk with a computer program, or announcing beforehand that they’ll have to decide whether they’re talking with a human or not, and all possible combinations, you’ll get different results each time.

    Also, this is not the first chatbot to pass the Turing test. Technically speaking, if only one human is fooled by a chatbot into thinking they’re talking with a person, then it passed the Turing test. That is the extent to which the argument was originally elaborated. Anything beyond that is an alteration of the central argument, added to serve the claimant’s own interests. But this is OpenAI; they’re all about the marketing and fuck-all about the science.