• 0 Posts
  • 44 Comments
Joined 2 years ago
Cake day: September 27, 2023






  • (because it was trained on real people who write with those quirks)

    Yes and no. Generally speaking, ML models pull towards the average and away from the extremes, while most people have weird quirks when they write. (For example my overuse of “()”, too many “,” instead of “.”, and probably a few other things I’m unaware of.)

    To give a completely different example: if you average the facial features of humans across a large group (size, position, orientation, etc. of everything), you get a conventionally very attractive face. Yet very, very few people are actually close to that ideal, because the average person, meaning a random person, has a few features that stray far from it. Just by the sheer number of features, there’s a high chance some will end up out of bounds.
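    A quick toy illustration of that effect (made-up numbers, purely to show the statistics): with 50 features per person, almost everyone has at least one feature beyond two standard deviations, while the averaged face has none.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 1000 hypothetical "people", each with 50 facial features drawn
    # from a standard normal distribution (entirely made-up numbers).
    people = rng.standard_normal((1000, 50))

    # Nearly everyone has at least one feature beyond 2 sigma...
    has_extreme = (np.abs(people) > 2).any(axis=1)
    print(has_extreme.mean())          # roughly 0.9

    # ...but the averaged "face" has no extremes at all.
    average_face = people.mean(axis=0)
    print(np.abs(average_face).max())  # far below 2
    ```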

    An ML model will generally be penalized during training for producing anything that contains such extremes, so the very human trait of being eccentric in any regard is trained away. If you’ve ever seen people generate anime waifus with modern generative models, you know exactly what I mean. Some methods can be and are being deployed to try and keep or bring back those eccentricities, at least when asked for.

    On top of that, modern LLM chatbots go through a reinforcement learning stage, where they learn how to write so that readers will enjoy reading it. That is no longer copying, but “inventing” in a more trial-and-error style. Think of the YouTube videos you’ve seen of “AI learns to play X game”, where no training material of someone actually playing the game was used and the model still learned. I’m assuming that’s where the overuse of em-dashes and quippy one-liners comes from: they were probably liked by either the human testers or the automated judges trained on the human feedback used in that process.
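    To make the trial-and-error part concrete, here is a toy sketch (every name in it is made up, and real RLHF is far more involved): a “writer” starts with no example text at all, samples styles at random, and a stub judge standing in for the human raters rewards the ones it likes. The rewarded style ends up dominating purely through feedback.

    ```python
    import random

    # Toy reinforcement loop with no training examples (all names made up):
    # the "writer" samples a style, a stub judge rewards what it likes, and
    # the writer reinforces whatever got rewarded.
    STYLES = ["em-dash", "plain"]

    def judge(style: str) -> float:
        # Stand-in for human raters or a learned reward model; this one
        # simply happens to prefer em-dash-heavy text.
        return 1.0 if style == "em-dash" else 0.2

    prefs = {s: 1.0 for s in STYLES}  # the writer's current preferences

    for _ in range(1000):
        style = random.choices(STYLES, weights=list(prefs.values()))[0]
        prefs[style] += judge(style)  # reinforce whatever got rewarded

    print(prefs)  # "em-dash" ends up dominating, without ever seeing examples
    ```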




  • Different person here.

    For me the big disqualifying factor is that LLMs don’t have any mutable state.

    We humans have a part of our brain that can change our state from one to another as a reaction to input (through hormones, memories, etc.). Some of those state changes are reversible, others aren’t. Some can be done consciously, some can be influenced consciously, and some are entirely subconscious. The same holds for most animals we have observed: we can change their states through various means. In my opinion, this is a prerequisite for feeling anything.

    Once we use models with components dedicated to such functionality, it’ll become a lot harder for me personally to argue against them having “feelings”, especially because in my worldview continuity is not a prerequisite, but mostly an illusion.
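    To sketch what I mean (a toy analogy with made-up names, not a claim about any real architecture): a frozen LLM behaves like a pure function of its context, while the kind of system I’m describing carries internal state that inputs can mutate, sometimes reversibly, sometimes not.

    ```python
    from dataclasses import dataclass

    # A frozen LLM is essentially a pure function: same input, same output
    # distribution, and nothing inside it changes between calls.
    def stateless_model(context: str) -> str:
        return f"reply to: {context}"  # fixed weights, no side effects

    # The kind of system I mean carries mutable state that inputs can alter,
    # some changes reversible, others not (a made-up toy, not a real design).
    @dataclass
    class StatefulAgent:
        stress: float = 0.0

        def react(self, event: str) -> str:
            if event == "threat":
                self.stress += 1.0                         # input mutates state
            elif event == "rest":
                self.stress = max(0.0, self.stress - 0.5)  # partially reversible
            return "panicked" if self.stress > 2 else "calm"
    ```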


  • His Hyprland setup looks cool if you’re into that sorta thing, but it’s just not what users switching to Mint, Fedora, whatever might be looking for.

    I would not underestimate how much of a draw “it looks cool” can have on people who are not tech-savvy at all. If you think about what drives new phone purchases, the major version upgrades always include lots of things that are nothing but eye candy, and those are often heavily featured in the promotional material.

    If the goal is to get casual users to convert to Linux, I would argue that aesthetics are a lot more important than ANY talk about technical details, privacy, etc. If those users cared about those things, they would’ve switched already.

    Now my bigger worry is that those users will bounce off before they manage to get their setup to look as (subjectively) cool as his.






  • Sure. You have to solve it from the inside out:

    • not()… See comment below for this one, I was tricked. I described it as a base function that negates what’s inside (turning True to False and vice versa), where giving it no parameter returns True (because no parameter counts as False). In reality, not is an operator, so not() parses as not (), negating the empty tuple, which is falsy, hence True.
    • str(x) turns x into a string; in this case it turns the boolean True into the text string ‘True’
    • min(x) returns the minimal element of an iterable, in this case the character ‘T’, because capital letters come before lowercase letters; otherwise it would return ‘e’ (Python compares characters by their Unicode code points, where capitals have lower values than lowercase letters and each case is in ascending alphabetical order)
    • ord(x) returns the Unicode code point of x, in this case turning ‘T’ into the integer 84
    • range(x) creates an iterable from 0 to x (non-inclusive), in this case you can think of it as the list [0, 1, 2, …82, 83] (it’s technically an object of type range but details…)
    • sum(x) sums up all elements of an iterable; summing all the integers from 0 to 83 gives 83·84/2 = 3486
    • chr(x) is the inverse of ord(x) and returns the character at position x, which, you guessed it, is ‘ඞ’ at position 3486.

    The hugely coincidental part is that ඞ lies at a position that can be reached as the cumulative sum of the integers from 0 up to some integer. From there on it’s only a question of finding a way to feed that integer into chr(sum(range(x)))
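    If you want to check each step yourself, the whole chain pastes straight into a Python REPL:

    ```python
    # Evaluating chr(sum(range(ord(min(str(not())))))) from the inside out
    print(not ())                  # True  ('not' is an operator; () is a falsy empty tuple)
    print(str(not ()))             # 'True'
    print(min(str(not ())))        # 'T'   (capitals sort before lowercase by code point)
    print(ord(min(str(not ()))))   # 84
    print(sum(range(84)))          # 3486  (= 83 * 84 // 2)
    print(chr(3486))               # 'ඞ'   (U+0D9E)
    ```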


  • It’s a bit of a weird article indeed. I also fully disagree with the author’s notion that reading a VN is faster than watching the anime equivalent. And I say this as someone who enjoys reading VNs every now and then. If I wanted to optimize the story-per-minute-spent ratio, I would just watch an anime or, even better, read a manga.

    Overall I hope the article can push a few people who were on the fence about trying a VN to finally do so. I doubt it’ll affect anyone who wasn’t already quite interested. Likewise, it also won’t do much for people who already took the plunge.

    Also, I think DDLC isn’t a good starter VN at all. All the meta stuff will be lost on the reader, and the stretch before the twist is a real slog that might easily turn people off VNs forever. I would argue the only reason it worked is that streamers had their chats pressure them into continuing, and “normal people” had friends who told them the same.