• 0 Posts
  • 137 Comments
Joined 1 year ago
Cake day: July 14th, 2023

  • theoretically they can

    Is this a purely theoretical capability or is there actually evidence they have this capability?

    it’s already been proven that they can tap into anyone’s phone

    Listening into a conversation that you’re intentionally relaying across public infrastructure and gaining access to the phone itself are two very different things.

    The use of proprietary software in literally everything

    1. Speak for yourself. And let’s be real, if you’re on Lemmy you’re 10 times more likely to be running Linux.
    2. Proprietary != closed source
    3. Do you really think that just because something is closed source means that it can’t be analyzed?

    the amount of exploits the NSA has on hand

    How many zero-day exploits does the NSA have? How many can be deployed remotely, without requiring nontrivial action by the user?

    what’s stopping the NSA from spying this much?

    Scale, capacity, cost, number of employees

    ---

    I’m not saying we shouldn’t oppose government surveillance. We absolutely should. But like another commenter pointed out, I’m much more concerned with the amount of data that corporations collect and have.


  • reasonable expectations and uses for LLMs.

    LLMs are only ever going to be a single component of an AI system. We’ve only had LLMs with their current capabilities for a very short time period, so the research and experimentation to find optimal system patterns, given the capabilities of LLMs, has necessarily been limited.

    I personally believe it’s possible, but we need to get vendors and managers to stop trying to sprinkle “AI” in everything like some goddamn Good Idea Fairy.

    That’s a separate problem. Unless it results in decreased research into improving the systems that leverage LLMs, e.g., by resulting in pervasive negative AI sentiment, it won’t have a negative impact on the progress of the research. Rather the opposite, in fact: seeing which uses of AI are successful and which are not (success here being measured by customer acceptance and interest, not by the AI’s efficacy) is information that can help direct and inspire research avenues.

    LLMs are good for providing answers to well defined problems which can be answered with existing documentation.

    Clarification: LLMs are not reliable at this task, but we have patterns for systems that leverage LLMs that are much better at it, thanks to techniques like RAG, supervisor LLMs, etc…
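
    To make that concrete, here’s a rough sketch of that kind of pattern in Python. The `retrieve` and `llm` callables, and the prompts, are hypothetical placeholders for whatever retrieval store and model client you’d actually use; this shows the shape of the pattern, not any particular library’s API.

    ```python
    # Sketch of a RAG + supervisor pattern (illustrative only).
    # `retrieve` and `llm` are stand-ins for your vector store / model client.

    from typing import Callable, List

    def answer_with_rag(
        question: str,
        retrieve: Callable[[str, int], List[str]],  # returns top-k relevant doc chunks
        llm: Callable[[str], str],                   # returns a model completion
        max_retries: int = 2,
    ) -> str:
        """Ground the answer in retrieved documentation, then have a second
        'supervisor' pass check it against those same sources."""
        context = "\n\n".join(retrieve(question, 5))

        for _ in range(max_retries + 1):
            draft = llm(
                "Answer using ONLY the documentation below. "
                "Say 'I don't know' if it isn't covered.\n\n"
                f"Documentation:\n{context}\n\nQuestion: {question}"
            )
            verdict = llm(
                "Does the answer below follow from the documentation, with no "
                "unsupported claims? Reply PASS or FAIL with a reason.\n\n"
                f"Documentation:\n{context}\n\nAnswer: {draft}"
            )
            if verdict.strip().upper().startswith("PASS"):
                return draft

        return "I don't know"  # fall back rather than return unverified output
    ```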

    When the problem is poorly defined and/or the answer isn’t as well documented or has a lot of nuance, they then do a spectacular job of generating bullshit.

    TBH, so would a random person in such a situation (if they produced anything at all).

    As an example: how often have you heard about a company’s marketing department over-hyping an upcoming product, resulting in unmet consumer expectations, a ton of extra work for the product’s developers and engineers, or both? This happens because the marketers don’t really understand the product - either because they don’t have the information, didn’t read it, got conflicting information, or because the information they have is written for a different audience - i.e., a developer, not a marketer - and the nuance is lost in translation.

    At the company level, you can structure a system that marketers work within that will result in them providing more correct information. That starts with them being given all of the correct information in the first place. However, even then, the marketer won’t be solving problems like a developer. But if you ask them to write some copy to describe the product, or write up a commercial script where the product is used, or something along those lines, they can do that.

    And yet the marketer role here is still more complex than our existing AI systems, but those systems are already incorporating patterns very similar to those that a marketer uses day-to-day. And AI researchers - academic, corporate, and hobbyists - are looking into more ways that this can be done.

    If we want an AI system to be able to solve problems more reliably, we have to, at minimum (see the sketch after this list):

    • break down the problems into more consumable parts
    • ensure that components are asked to solve problems they’re well-suited for, which means that we won’t be using an LLM - or even necessarily an AI solution at all - for every problem type that the system solves
    • have a feedback loop / review process built into the system
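
    Purely as an illustration of those three points - every name here is made up rather than taken from any real framework - such a system might be shaped something like this:

    ```python
    # Illustrative only: decompose a task, route each sub-task to a suitable
    # solver (not always an LLM), and gate each result behind a review step.

    from dataclasses import dataclass
    from typing import Callable, Dict, List

    @dataclass
    class SubTask:
        kind: str      # e.g. "arithmetic", "lookup", "free_text"
        payload: str

    def solve(
        task: str,
        decompose: Callable[[str], List[SubTask]],
        solvers: Dict[str, Callable[[str], str]],   # must include a "free_text" fallback
        review: Callable[[str, str], bool],          # (sub-task, result) -> acceptable?
    ) -> str:
        parts = []
        for sub in decompose(task):
            # Route to a deterministic tool when one exists; an LLM is just one
            # of the available solvers, not the default for everything.
            solver = solvers.get(sub.kind, solvers["free_text"])
            result = solver(sub.payload)
            if not review(sub.payload, result):      # feedback loop
                result = solver(sub.payload + "\n(Previous attempt rejected; retry.)")
            parts.append(result)
        return "\n".join(parts)
    ```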

    In terms of what they can accept as input, LLMs have a huge amount of flexibility - much higher than what they appear to be good at and much, much higher than what they’re actually good at. They’re a compelling hammer. System designers need to not just be aware of which problems are nails and which are screws or unpainted wood or something else entirely, but also ensure that the systems can perform that identification on their own.







  • Terrible article. Even worse advice.

    On iOS at least, if you’re concerned about police breaking into your phone, you should be using a high-entropy password, not a numeric PIN, and biometric auth is the best way to keep your convenience (and sanity) intact without compromising your security. This is because there is software that can break into a locked phone (even one that has biometrics disabled) by brute-forcing the PIN, bypassing the 10-attempt limit if set, and without triggering iOS’s other brute-force protections, like forced delays between attempts. If your password is sufficiently complex, then you’re more likely to be safe against such an attack.

    I suspect the same is true on Android.

    Such a search is supposed to require a warrant, but the tool itself doesn’t check for one, so you have to trust the individual LEOs in question to follow the law. And given that any 6-digit PIN can be brute-forced in roughly 11 hours at most (at 40 ms per attempt), this means that if you were arrested (even on a spurious charge) and held overnight, they could search your phone without you knowing.

    With a password that has the same entropy as 10 random digits, assuming no further vulnerabilities allowing them to speed up the process, it could take up to about 12 and a half years to brute-force. Make it alphanumeric (and still random) and it’s millions of years - infeasible within our lifetime - at that point it’s basically a question of whether another vulnerability is already known or gets discovered that bypasses the password entirely or allows much faster attempt rates.
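
    If you want to check the arithmetic yourself (assuming the same 40 ms per attempt figure from above and, for the last case, a 10-character random alphanumeric password):

    ```python
    # Back-of-the-envelope brute-force times at 40 ms per attempt,
    # worst case (every combination tried).

    ATTEMPT_SECONDS = 0.040
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    def worst_case_seconds(keyspace: int) -> float:
        """Seconds to exhaust the whole keyspace."""
        return keyspace * ATTEMPT_SECONDS

    print(worst_case_seconds(10 ** 6) / 3600)                  # 6-digit PIN: ~11.1 hours
    print(worst_case_seconds(10 ** 10) / SECONDS_PER_YEAR)     # 10 random digits: ~12.7 years
    print(worst_case_seconds(62 ** 10) / SECONDS_PER_YEAR)     # 10 random alphanumerics: ~1e9 years
    ```

    The last figure comes out around a billion years, so “millions of years” is, if anything, conservative.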

    If you’re in a situation where you expect to interact with law enforcement, then disable biometrics. Practice ahead of time to make sure you know how to do it on your phone.









  • Thanks for clarifying. Phrased / thought of as “a situation wherein X happens is immoral,” it makes sense.

    My confusion came from not doing that, even after reading the “Remember:” text in the comment. I conflated my personal belief that the individuals who are part of corporations buying up houses en masse and renting them out are behaving immorally (as opposed to merely being actors in an immoral situation) with the adjacent statement about an individual renting out a room.

    That concept of morality feels more similar to what I think of as “fairness” (though not an exact match) than to individual morality.

    I feel like there must be a different word used to convey the moral judgment of someone who isn’t doing the best they can within the framework - i.e., someone who is choosing to exploit laborers for profit in excess of anything they could use for themselves.