  • Nah, Diablo 4 is much more fun when leveling from 1 to 70 or so. 70 to 100 is just doing the same things over and over with barely any rewards. It’s the other way around there: leveling is fun, endgame is dogshit.

    Usually “game starts at max level” is said about MMOs like WoW, where all the leveling is seen as annoying bullshit fetch quests and at max level you do dungeons and raids.


  • The story or the gameplay? Because all I wanted to do was play a fun MMO, get items and do dungeons with other people. Instead I did quests like: hit 3 rocks with your basic ability. Great! Hit 3 more rocks with the same ability. Done? Now run between 4 NPCs and talk to each of them. Great, now kill 8 enemies over there. Run back, talk to 2 more NPCs. Run through the city and interact with 8 lamp posts, each interaction taking several seconds, because why not? …

    I really tried to power through this absolute bullshit, but after a few hours I simply gave up. It only got worse, not better.

    Since you mention Heavensward: I still hear that there are a ton of dumb quests even then. Like the story is right at a critical point and they send you off on hours of fetch quests before you can continue?


  • Not just for series; it’s the same with games.

    “The first 50 hours of Final Fantasy 14 suck, but the expansions afterwards are worth it!”

    “The game starts at max level!”

    I can’t stand it. And it’s not like the game magically gets much better; it just feels pretty okay to someone who has already wasted months of their time on the bad parts. Of course you’ll enjoy the mediocre parts later on after suffering through that crap.

    A game has to start being fun ten minutes after the tutorial, tops. Why play it otherwise?



  • But the NAS is in your house… which basically means that if it gets flooded or burns down, all your data is gone too.

    I already have my data on my PC; a second backup inside the same house isn’t worth that much. But instead of relying on a cloud service I just rent a virtual server (for various things) and use Seafile to keep my data in sync.

    PC breaks? House burns down? My data is on my own server in a datacenter. My server gets cancelled? My data is on my PCs.

    So even with your NAS you’re still 100% reliant on a cloud backup. Why did you get the NAS, then, when you already have a copy of your data on your devices?


  • Multi-threading is difficult; you can’t just slap it on everything and call it a day.

    There are languages where it’s easier (Go, Rust, …), but parallelism is an advanced feature. Do it wrong and you get race conditions or deadlocks. There is a reason you learn about this later on in programming, but you do learn about it (and get to use it).

    When we’re being honest, most programmers work on CRUD applications, which are highly sequential and usually wait on IO rather than CPU cycles. Saving 2ms on some operation doesn’t matter if you wait 50ms on the database (and sometimes using more threads is actually slower because of the orchestration overhead). If you’re working on highly optimized algorithms or with GPUs, parallelism has a much higher priority. But it always depends on what you’re working with.

    Depending on your tech stack you might not even have the option to properly use parallelism, for example with JavaScript (if you don’t jump through hoops).
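
    To make the “easier in Go” point concrete, here’s a minimal sketch (toy workload, everything in it made up for illustration) of the happy case: independent items fanned out over goroutines, each writing only to its own result slot, so there’s no shared state to race on. The moment results depend on each other you’re back to mutexes or channels, and that’s exactly where the races and deadlocks creep in.

    ```go
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        inputs := []int{1, 2, 3, 4, 5, 6, 7, 8}
        results := make([]int, len(inputs)) // one slot per goroutine, no overlapping writes

        var wg sync.WaitGroup
        for i, v := range inputs {
            wg.Add(1)
            go func(i, v int) {
                defer wg.Done()
                results[i] = v * v // stand-in for real CPU-bound work
            }(i, v)
        }
        wg.Wait() // skipping this is a classic race: you'd read results before they exist

        fmt.Println(results)
    }
    ```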


  • At this point you’re just arguing to argue. Of course this is about the math.

    This is Amdahl’s law, it’s always about the math:

    https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/AmdahlsLaw.svg/1024px-AmdahlsLaw.svg.png
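
    For reference, the formula behind that plot, with p as the fraction of the work that can run in parallel and N as the number of cores:

    ```latex
    S(N) = \frac{1}{(1 - p) + \frac{p}{N}}
    ```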

    No one is telling students to use or not use parallelism; it depends on the workload. If your workload is highly sequential, multi-threading won’t help you much, no matter how many cores you have. So you might switch out the algorithm for a different one that accomplishes the same job, or re-order tasks and rethink how you’re using the data you have available.

    Practical example: the game Factorio. It has thousands of conveyor belts that have to move items in a deterministic way. So as not to mess things up, this part of the game ran on a single thread to calculate where everything landed (belts can intersect, items can block each other and so on). With some clever tricks they rebuilt how it works, which allowed them to safely spread the workload over several cores (at least for groups of belts). There’s a bit of a write-up here (under “Multithreaded belts”).

    Teaching software development involves teaching the theory. Without it you’d have a hard time deciding what can and what can’t benefit from multi-threading. Absolutely no one says “never multi-thread!” or “always multi-thread!”; if you had a teacher like that, they sucked.

    Learning about Amdahl’s law was a tiny part of my university course. A much bigger part was actually writing multi-threaded programs, working around deadlocks, doing performance testing and so on. You’re acting as if the teacher shows you Amdahl’s law and then says “Obviously this means multi-threading isn’t worth it, let’s move on to the next topic”.


  • You still don’t get it. This is about algorithmic complexity.

    Say you have an algorithm where 90% of the work can be done in parallel but 10% can’t. No matter how many cores you throw at it, be it 4, 10, or a billion, that 10% remains the part you can’t speed up with more cores. So even with an unlimited number of cores, your algorithm still ends up waiting on the last 10% that runs on a single core.

    Amdahl’s law is simply about that 10% you can’t speed up, no matter how many cores you have. It’s a bottleneck.

    There are algorithms you can’t run in parallel, simply because the results depend on each other. For example a cipher where you first calculate block A and then need block A to calculate block B. You can’t do A and B at the same time, it’s not possible. Yes, you can use multi-threading to calculate A and then again to calculate B, but you still wait on each result in turn, which means that no matter how fast the individual steps get, there’s a minimum time you’ll always need.
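
    As a toy sketch of that kind of chained dependency (XOR standing in for the actual cipher step, all names made up): each output block needs the previous output block, so the loop simply can’t be split across threads.

    ```go
    package main

    import "fmt"

    // chainBlocks mimics a CBC-style chain: output i is derived from input i
    // AND output i-1, so block i can't be computed before block i-1 is done.
    func chainBlocks(blocks []byte, iv byte) []byte {
        out := make([]byte, len(blocks))
        prev := iv
        for i, b := range blocks {
            out[i] = b ^ prev // placeholder for the real cipher operation
            prev = out[i]
        }
        return out
    }

    func main() {
        fmt.Println(chainBlocks([]byte{1, 2, 3, 4}, 0xFF))
    }
    ```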

    Throwing more hardware at this only helps to a certain degree; at some point the parts you can’t run in parallel hold you back, and that’s the entire point. This obviously doesn’t apply to workloads that can be done 100% in parallel (like rendering, where you can split the work up without issues); Amdahl’s law doesn’t limit you there, as the amount of single-core work in the equation would be zero.

    The whole thing is used in software development (I heard of Amdahl’s law in my university class) to decide if it makes sense to multi-thread part of the application. If the work you do is too sequential then multi-threading won’t give you much of a benefit (or makes it run worse, as you have to spin up threads and synchronize results).
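
    Plugging the 90/10 split from above into the formula makes that ceiling obvious. A quick sketch (numbers purely illustrative):

    ```go
    package main

    import "fmt"

    // amdahlSpeedup: theoretical speedup when parallelFraction of the work
    // is spread over n cores and the remainder stays strictly sequential.
    func amdahlSpeedup(parallelFraction float64, n int) float64 {
        return 1.0 / ((1.0 - parallelFraction) + parallelFraction/float64(n))
    }

    func main() {
        for _, cores := range []int{1, 4, 16, 256, 1000000} {
            fmt.Printf("%8d cores: %5.2fx speedup\n", cores, amdahlSpeedup(0.9, cores))
        }
    }
    ```

    Four cores already get you roughly 3x, but going from 256 cores to a million barely moves the needle: the sequential 10% caps the whole thing just below 10x.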