One foot planted in “Yeehaw!”, the other in “yuppie”.
I understand the sentiment… But… This is a terribly reasoned and researched article. We only need to look at NASA to see how flawed this is.
Blown capacitors/resistors, solder failing over time and through various conditions, failing RAM/ROM/NAND chips. Just because the technology has fewer “moving parts” doesn’t mean it’s any less susceptible to environmental and age-based degradation. And we only get around those challenges through necessity and really smart engineers.
The article uses the example of a 2014 Model S - but I don’t think it’s fair to conflate 2 million kilometers in the span of 10 years with the same distance in the span of the quoted 74 years. It’s just not the same. Time brings seasonal changes, which happen whether you drive the vehicle or not. Further, in many cases, the car’s computers never completely turn off, meaning they are running 24/7/365. Not to mention that Teslas in general have poor reliability, as tracked by multiple third parties.
Perhaps if there were an easy-access panel that allowed replacement of 90% of the car’s electronics through standardized cards, that would go a long way toward realizing a “Buy it for Life” vehicle. Assuming that we can just build 80-year, “all-condition” capacitors, resistors, and other components isn’t realistic or scalable.
What’s weird is that they seem to concede the repairability aspect at the end, without any thought whatsoever as to how that impacts reliability.
In conclusion: a poor article with a surface-level view of reliability, using bad examples (one person’s Tesla) to prop up a narrative that EVs - as they exist - could last forever if companies wanted them to.
Well that’s pretty compelling!
Ever since the failure of Windows Mixed Reality, there haven’t been many non-Meta HMDs worth buying. At least not with inside-out tracking.
Maybe this will finally pressure Valve to lower the price on the venerable Index? Probably not. But one can hope!
I’m in agreement here, and given Blahaj’s trigger-happy nature when it comes to defederation, I’m not sure I care all that much.
I’ve seen them defederate so many other instances for “wrong-think” and I don’t think Snowe should feel like he’s in the wrong here.
It’s only a matter of time before they defederate from my own instance, tucson.social, because I don’t think 100% like them. I apparently support trans genocide because… checks notes… I don’t think that doxxing far-right reactionaries/extremists is an effective tactic for garnering sympathy and building a movement.
Yup, that’s it. Apparently that opinion makes you a Nazi sympathizer in these circles.
I work for another distributed database company. I can say that it’s much harder to convert CockroachDB customers than Yugabyte customers. Given that, I’d think that CockroachDB is likely the more vetted solution. Sure, it’s new (2017), but it’s not THAT new.
IIRC, MySQL (and PostgreSQL) is pretty much limited to a write-primary/read-replica sort of horizontal scaling. Other SQL engines have better support for multi-master configurations.
However, these types of configurations are usually tied to licensing - especially for Microsoft SQL Server and OracleDB.
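To illustrate why that distinction matters, here’s a minimal sketch of the application-side routing that primary/replica scaling implies (the hostnames and the `route` helper are hypothetical, not any particular driver’s API):

```python
import random

# Hypothetical endpoints; in practice these would be real connections to the
# primary and each read replica (e.g. via a MySQL/PostgreSQL client library).
PRIMARY = "primary.db.internal"
REPLICAS = ["replica-1.db.internal", "replica-2.db.internal"]

def route(query: str) -> str:
    """Pick which node should run `query` under primary/replica scaling."""
    if query.lstrip().upper().startswith(("SELECT", "SHOW")):
        # Reads scale horizontally: just add more replicas.
        return random.choice(REPLICAS)
    # Writes do NOT scale: every one funnels through the single primary.
    return PRIMARY

if __name__ == "__main__":
    print(route("SELECT * FROM posts WHERE id = 1"))         # some replica
    print(route("INSERT INTO posts (title) VALUES ('hi')"))  # always the primary
```

A multi-master engine removes that single write path - which is exactly the capability that tends to be gated behind licensing.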
As another commenter suggested, there are Yugabyte and CockroachDB as well - of those two, I think CockroachDB is the more mature product. And they’re one of the fiercest competitors for the company I work for, too.
I cannot speak to the “battle-tested-ness” of CockroachDB, but given that it’s been around for a few years now, I don’t think it’s quite as risky as other comments have indicated. Also, they’re doing something right if we haven’t been able to convert many CockroachDB customers.
Nope - full-fat install on hardware - as I said in the post.
Again, just so you don’t miss the crucially important context: I’m an advanced user. I typically run vanilla Arch or EndeavourOS, both of which do not have these issues. Not to mention, I know that many of these issues are a result of adding so many repositories on top of the base Arch ones - at least as far as upgrades are concerned.
If this was in a VM I would go to great lengths to specify as such.
Lucky! I wish I had symmetrical fiber with all the ports available.
I totally have a server capable of hosting a LOT of things but lack the upload to make use of it. I’m considering transferring it to a rack mount and sending it to be colocated at a datacenter within driving distance.
You missed one:
ISP - Internet Service Provider
On a technical level, your own user count matters less than the user and comment counts of the instances you subscribe to. Too many subscriptions can overwhelm smaller instances and saturate a network in terms of packets per second and your ISP’s routing capacity - not to mention your router. Additionally, most ISPs block traffic going to your house on port 80 - so you’d likely need to put it behind a Cloudflare Tunnel for anything resembling reliability. Your ISP may be different, and it’s always worth asking what restrictions they have on self-hosted services (non-business use-cases specifically). Otherwise, going with your ISP’s business plan is likely a must. Outside of that, yes, you’ll need a beefy router or switch (or multiple) to handle the constant packets coming into your network.
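For what it’s worth, the tunnel sidesteps the port 80 block because cloudflared dials out to Cloudflare’s edge rather than accepting inbound connections. A minimal config sketch - the hostname, UUID, and file paths here are placeholders, not anything from a real setup:

```yaml
# /etc/cloudflared/config.yml
tunnel: <tunnel-UUID>
credentials-file: /etc/cloudflared/<tunnel-UUID>.json
ingress:
  - hostname: lemmy.example.com    # public hostname on your Cloudflare zone
    service: http://localhost:80   # local web frontend
  - service: http_status:404      # required catch-all for unmatched requests
```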
Then there’s the security aspect. What happens if your site is breached in a way that an attacker gains remote execution? Did you make sure to isolate this network from the rest of your devices? If not, you’re in for a world of hurt.
These are all issues that are mitigated and easier to navigate on a VPS or cloud provider.
As for the non-technical issues:
There’s also the problem of moderation. What I mean by that is, as a server owner you WILL end up needing to quarantine, report, and submit illegal images to the authorities - even if you use a whitelist of only the most respectable instances. It might not happen soon, but it’s only a matter of time before your instance happens to be subscribed to a popular external community when it gets hit with a nasty attack, leaving you to deal with a stressful cleanup.
When you run this on a homelab on consumer hardware, it’s easier for certain government entities to claim that you were not performing your due diligence and may even be complicit in the content’s proliferation. Now, of course, proving such a thing is always the crux, but in my view I’d rather have my site running on things that look as official as possible. The closer it resembles what an actual business might do, the better I think I’d fare under a more targeted attack - from a legal/compliance standpoint.
And I apologize in return for the rather harsh way I came across. The common (and frustrating) nature of your comment didn’t deserve the terseness of my response.
See: every big AAA game release lately. Even on Windows, having to nuke your graphics drivers and install a specific version from some random forum is generally accepted as fine, like it’s just how PC gaming is.
Never had to do that since I was ROM-hacking an old RX 480 for Monero hashrates. In fact, on my Windows 11 partition (used for HDR gaming, which isn’t supported on Linux yet), I haven’t needed to reinstall the NVIDIA driver even when converting from a QEMU image to a full-fat install.
When I see those threads, it often comes across as a bunch of gamers just guessing at a potential solution, who often become “right” for the “wrong” reasons. Especially when the result is some convoluted combination of installs and uninstalls with “wiping directories and registry keys”.
But, point taken, the lengths gamers will go to for an extra 1-2 FPS, even when it’s unproven, dangerous, and dumb, are almost legendary.
They’re probably okay for most users, especially the gamer kind.
Eh, IDK - the amount of breakage I got simply trying to upgrade the system after a few days would probably be incredibly hostile to a less technical user/gamer.
Sure, if most things worked out-of-the-box and upgrades were seamless, I’d agree - but as it stands, it seems like you need to know Arch and Linux itself fairly well to get the most out of Garuda Linux.
I really doubt that. Again - advanced user here - with numerous comparison points to other Arch-based distros. I also maintain large distributed DB clusters for Fortune 100 companies.
If something wasn’t on the latest version, it’s not due to my lack of effort or knowledge, but instead due to the terrible way Garuda is managed.
What, am I supposed to compile kernel modules from scratch myself? Never needed to do that with Endeavour, Manjaro, or just Arch.
If Garuda’s install (and subsequent upgrade) doesn’t fetch the latest from the Arch repos, that’s on them.
EDIT: Also, these non-answers are tiresome, low-effort, and provide zero guidance on any matter. I know every single kernel change since 5.0 that impacted my hardware. I have RSS feeds for each of the hardware components I own, and if Linux or a distro ships an enhancement for my hardware, I’m usually aware well before it’s released. If you were to point to any bit of my hardware, I could tell you, for certain, which functionalities are supported, which have bugs, and the common workarounds.
If you want this type of feedback to be valuable, then let me know if a new issue/regression has arisen given the list of hardware I’ve supplied.
Valuable: “Perhaps it was the latest kernel X, which shipped some regressions for Nvidia drivers that cause compositor hitching on KWin.”
Utterly Useless: “It’s very likely some drivers are not up to date or compatible with your system.”
I dunno, my OLED panel has some notable image retention issues - and a screensaver does appear to help in that regard.
Eh, I went back to screen savers due to my use of OLED panels. Better than a static lock-screen image for sure.
I’d like to report in as someone who is at the end of that process and actually making good money.
Now I need:
More time to hang out with friends and family. 🥲
As a man who grew up with one foot firmly planted in yeehaw and the other in yuppie, I think this is brilliant!
I don’t get it either. My brother-in-law is like this. And he refused to take his kids to see Buzz Lightyear because of its “political” nature. I was dumbfounded when I heard that. To think that representation is just some nebulous political aim.
At this rate, we should just consider any media with a kiss in it “political media.”
And I even grew up with this dude in the early 2000s. He didn’t seem like this before.
I try to forget about the guy, but it’s kind of hard because he won’t let me see the nieces because I’m too “liberal”.
I agree. I think 1440p + HDR is probably the way to go for now. HDR is FAR more impactful than 4K resolution, and 1440p should provide a stable 45ish FPS in Cyberpunk 2077 completely maxed out on an RTX 3080 Ti (DLSS Performance).
And in terms of CPU, the same applies. 16 cores are for the Gentoo-using, source-compiling folks like me. 8 cores on a well-binned CPU from the last 3 generations go plenty fast for gaming. CPU bottlenecking only really shows up at 144+ FPS in most games anyway.
Well, seeing that Insurgency: Sandstorm was on sale, I just picked it up for him (and myself). It seems to have a lot going on in the map-making scene, and that’s a really important factor for him.
It also helps that the prior Insurgency game has the most hours on his profile, by far. Gave me a good hint that he should enjoy this one.
Thanks so much!
EDIT: My dad just got back to me, and loves the gift. Apparently that’s where most of his online buddies went and still are. Nailed it!