I believe this is the referenced article:
I’ve been using FreeTube since Piped was very inconsistent for me, but I guess that’s just the nature of these services. I’ll have to check out Invidious again, last time I tried it was several years ago and I stopped using it after the main instance shut down. Is it still under active development? I remember its development status being unclear, partially because the language it uses is not super mainstream, but it’s probably changed since then.
Fortunately, Invidious, Piped, Libretube and Newpipe all exist and work flawlessly so there’s no excuse to use proprietary trash like that.
Isn’t the very point of this post that Invidious and Piped don’t work flawlessly?
Can’t you still modify and distribute Grayjay, just not commercially? I understand that still prevents the app from being considered open source, but their reasoning is valid IMO (to prevent people from making ad-infested clones on the Play Store, which has happened to NewPipe before).
I think what they mean is that ML models generally don’t directly store their training data, but that they instead use it to form a compressed latent space. Some elements of the training data may be perfectly recoverable from the latent space, but most won’t be. It’s not very surprising as a result that you can get it to reproduce copyrighted material word for word.
Not sure what other people were claiming, but normally the point being made is that it’s not possible for a network to memorize a significant portion of its training data. It can definitely memorize significant portions of individual copyrighted works (like shown here), but the whole dataset is far too large compared to the model’s weights to be memorized.
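To see why the whole dataset can’t fit in the weights, here’s a back-of-envelope sketch. The specific numbers (7B parameters, 10T training tokens, bytes per token) are illustrative assumptions, not figures from the thread:

```python
# Back-of-envelope: can a model's weights store its whole training set?
# All numbers below are assumed for illustration.

params = 7e9                 # model parameters (assumed, e.g. a 7B model)
bytes_per_param = 2          # 16-bit precision
weight_bytes = params * bytes_per_param

tokens = 10e12               # training tokens (assumed)
bytes_per_token = 4          # rough average for English text (assumed)
data_bytes = tokens * bytes_per_token

ratio = data_bytes / weight_bytes
print(f"weights: {weight_bytes/1e9:.0f} GB, data: {data_bytes/1e12:.0f} TB")
print(f"training data is ~{ratio:.0f}x larger than the weights")
```

Even with lossy compression, a ~14 GB set of weights can’t hold ~40 TB of text, though it can easily hold verbatim chunks of individual works that appear often in the data.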
The big thing you get with Framework laptops is super simple repairability. This means service manuals, parts availability, and easy access to components like the battery, RAM, SSD, etc. Customizable ports are also a nice feature. You can even upgrade the motherboard later down the line instead of buying a whole new laptop.
For reference, ICML is one of the most prestigious machine learning conferences alongside ICLR and NeurIPS.
I’m a researcher in ML and that’s not the definition that I’ve heard. Normally the way I’ve seen AI defined is any computational method with the ability to complete tasks that are thought to require intelligence.
This definition admittedly sucks. It’s very vague, and it comes with the problem that the bar for requiring intelligence shifts every time the field solves something new. We sort of go “well, given that these relatively simple methods could solve it, I guess it couldn’t have really required intelligence.”
The definition you listed is generally more in line with AGI, which is what people likely think of when they hear the term AI.