• 0 Posts
  • 24 Comments
Joined 1 year ago
Cake day: July 3rd, 2023

  • jaycifer@kbin.social to Gaming@lemmy.world · Help · 2 points · 4 months ago

    I think it would be tough to nail down one thing. There are the clear comparisons to Victoria 2, which I haven’t played, but my understanding is that 2 is more “detailed” in its simulation of some things. There will always be people who don’t like changes from the last game. The military aspect is a lot less engaging than something like Hearts of Iron, but I think the intent there was to keep the focus on the economic and political sides of things. Warfare received a minor overhaul around when I first tried the game, which I’ve heard made things better, but it can still be a little frustrating at times.

    Most of the complaints about the economic side that’s meant to take center stage are that your economy’s success boils down to how many construction points you can have going at once. That’s true, but I do like that you can’t pour everything into that without balancing the foundation needed to support the increase in construction, and just doing that could limit growth in other areas, like improving citizens’ lives, which could complicate your political affairs.

    I feel like I’ve gotten a little lost in the weeds here. Overall, I think it has mixed reviews because Victoria 3 is still a work in progress. It’s a work in progress that I enjoy very much, but there is still room for improvement. I kind of fell off Stellaris between the Nemesis and Overlord expansions because it felt kind of bloated and repetitive, and I wasn’t wondering what kind of civilization I could play anymore. Victoria 3 has been successful at making me contemplate how I can manipulate the mechanics to achieve a specific outcome, even when I’m not playing.


  • jaycifer@kbin.social to Gaming@lemmy.world · Help · 2 points · 4 months ago

    With menu games like the ones Paradox makes, you gotta learn by playing the game. And by playing the game, I of course mean pausing the game every minute or two to spend way more minutes reading the tooltips, the tooltips within those tooltips, and then finding your way to a new menu you didn’t know existed, referenced by those tooltips, so you can read more tooltips!

    It’s a beautiful cycle, and Victoria 3 has sucked me in as much as Stellaris did 7 years ago. If you have any questions or thoughts, I’d love to hear them!



  • What they didn’t mention is that Baldur’s Gate is a Dungeons and Dragons franchise. DnD is orders of magnitude more popular than it was when BG2 released, to the point of being at worst nearly mainstream. What has sold people on BG3 is being able to play their tabletop game in video game form.

    I do think Larian’s pedigree and the Baldur’s Gate name were contributors to its success, but if there was one driving factor, it’s the brand recognition of DnD combined with the marketing of an AA to AAA game.







  • Did you start with the arithmetic that putting one apple in the bag followed by another would lead to there being two, or did you consistently observe that doing so led to there being two apples until your mind learned the math of 1+1=2?

    I think this really comes down to your opinion on whether math was created or discovered. Based on your statements so far, I’m guessing you believe math was discovered: that there is some mathematical model completely representative of reality, and that through observation we can discover mathematical principles that bring us closer and closer to that model, even if it’s never 100% achieved. I realize that may be putting words in your mouth, but it’s the best argument I can think of to reach your perspective. Is that about right?


  • I think the difference here is that your conception is that reality follows a mathematical model, while their conception is that mathematical models follow, and try to be reflective of, reality.

    I think their concern is that, if one believes reality follows math, when the model fails to accurately predict something, the person with the model may wonder what’s wrong with reality. If that person believed the model follows reality they would wonder what’s wrong with the model. The latter perspective will yield better results.

    It’s the difference between saying “this is how it works” vs “to the best of my knowledge this is how it works.”



  • I remember playing Assassin’s Creed II on PC with a 9500 GT and getting sub-20 fps constantly, to the point that I had to wait for character animations to catch up with the dialogue so the next person could talk. Halfway through the game I upgraded to a GTX 560 and was astounded that everything was in sync and oh so smooth. I always remember that when I start getting annoyed that I can’t get over 90 fps in a game. As long as it’s playable!


  • For me it depends on the game. A menu game from Paradox like Crusader Kings? 4k 60fps. A competitive shooter? Ideally the max resolution (for greater pinpoint accuracy) and 144fps, but between the two I’d want maximum fps for the reaction speed and responsiveness. A pretty game like Ori and the Will of the Wisps? Crank the graphics up and I’m happy with 60fps.


  • That article states people can perceive images as rapidly as once every 13 milliseconds, which they math out to roughly 75 fps, 25% higher than 60 (the arithmetic is sketched below).

    Looking at the study itself, they were testing whether participants could pick out a picture that was displayed for 13–80 ms when “masked” by other brief pictures, with a focus on whether it made a difference if the participant was told what image they were looking for before or after seeing the images. What they found was that participants could pick out the image even at the 13 ms mark (albeit with less accuracy) and could generally do so better if told what to look for beforehand.

    What this tells me is that your source has nothing to say about anything over 75 fps. It was also testing in a fundamentally different environment than a video game, where your brain constantly expects an image similar to, and stemming from, the image before it rather than a completely different one. If you were to draw conclusions from the study despite those differences, it would suggest that knowing what to look for, as your brain does while gaming, makes you better able to pick out individual frames. That makes me think your source does not support your assertion, and that in a game you could perceive frame rates higher than 75 fps at a minimum.

    From my own knowledge, there’s also a fundamental difference between perceiving reality and perceiving computer screens, in the form of motion blur. Objects moving in real life leave a faint blur behind as you perceive them, which your brain uses to fill in any blanks it may have missed, making reality appear smoother than it is. For an example of this, wobble a pencil back and forth to make it “bend.” Movies filmed at 24 fps capture this minute motion blur as they film, which makes it easier for our brains to watch them despite the lower frame rate. Real-time rendered video games do not have this effect, as there are no after-images to fill in the blanks (unless you turn on motion blur, which doesn’t do a good job of emulating this).

    This means video games need to compensate, and the best way to do that is more frames per second so your brain doesn’t need to fill in the blanks with the motion blur it’s used to seeing in the real world. You’ll obviously get diminishing returns from the same increase, but there will still be returns.
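
    A minimal sketch of where that ~75 fps figure comes from, assuming the article simply inverts the study’s 13 ms interval and rounds; the function name is my own, not anything taken from the article or the study:

    ```python
    # Quick check of the "13 ms per image ≈ 75 fps" arithmetic quoted above.
    # Nothing here comes from the study itself; it just inverts the interval
    # and compares the article's rounded figure to a 60 fps baseline.

    def interval_to_fps(interval_ms: float) -> float:
        """Convert a per-image display interval (in ms) to frames per second."""
        return 1000.0 / interval_ms

    fps_from_study = interval_to_fps(13)     # ~76.9 fps, loosely rounded down to ~75
    increase_over_60 = (75 / 60 - 1) * 100   # the quoted "25% higher than 60"

    print(f"13 ms per image ≈ {fps_from_study:.1f} fps")
    print(f"75 fps is {increase_over_60:.0f}% higher than 60 fps")
    ```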





  • If the author no longer has passion for his OSS project, and isn’t being paid for it, why is he still working on it? Why should he feel responsible for companies building their processes on a free piece of software without guaranteed support? Why the heck is he sacrificing sleep for something he claims not to care about anymore? It sounds to me like he’s not living his values.

    If compensation for volunteer work is mandated, it becomes less volunteer work and more of a part-time (or in some cases full-time) job. My understanding is that a core pillar of open source software is that anyone can contribute to it, which should make it easier for contributors to come and go. Based on the graph shown, it would take more than a full-time job’s worth of money to meet his demand, which seems unlikely in any case, so it’s time for him to go. Either someone else will volunteer to pick up the slack, the companies using it will pay someone to pick up the slack like the author mentioned, or the software will languish, degrade, and stop being used.

    I don’t see how any of those outcomes suggest that people need to be paid for the time they voluntarily give. I could get behind finding better ways to monetarily support those who do want to get paid, but “how could it be easier to pay OSS contributors after their passion is gone?” is a lot less provocative of a headline.