
  • I’m not the person who found it originally, but I understand how they did it. We have three useful data points: you are 2.6 km from a Burger King in Italy, that BK is on a street called “Via ”, and you are 9792 km from a Burger King in Malaysia.

    1. The upper BK in Malaysia is not censored, so we have its exact location.
    2. Find a place in Italy that is 9792 km away using the Measure Distance tool on something like Google Maps.
    3. Even though there are potentially multiple valid locations in Italy along that 9792 km arc, we also know you’re within 2.6 km of another BK. Florence is a sensible candidate because there are BKs near the 9792 km mark.
    4. From there, we can find a spot that is both 9792 km from the Malaysian BK and 2.6 km from a nearby BK on a street called “Via”, which is effectively where the image was taken.


    It’s not perfect, but it works well! This is the same principle your GPS uses. Strictly speaking it’s called trilateration (locating by distances), though people often loosely call it triangulation (which is locating by angles). We only had distances to two points, and one of them doesn’t tell us the sub-kilometer distance. With distances to three points, we could find your EXACT location, within some error depending on how precise the distance information was.
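    The two-distance case above can be sketched directly as the intersection of two circles. This is a toy example on a flat plane with made-up coordinates (real geodesic distances on Earth need more care), just to show why two distances leave you with two candidate points and a third distance disambiguates:

```python
import math

def circle_intersections(p0, r0, p1, r1):
    """Return the 0, 1, or 2 intersection points of two circles
    centered at p0 and p1 with radii r0 and r1."""
    x0, y0 = p0
    x1, y1 = p1
    d = math.hypot(x1 - x0, y1 - y0)
    # No solutions: circles too far apart, one inside the other, or concentric.
    if d > r0 + r1 or d < abs(r0 - r1) or d == 0:
        return []
    # Distance from p0 to the line joining the intersection points.
    a = (r0**2 - r1**2 + d**2) / (2 * d)
    h = math.sqrt(max(r0**2 - a**2, 0.0))
    # Midpoint between the two intersection points.
    xm = x0 + a * (x1 - x0) / d
    ym = y0 + a * (y1 - y0) / d
    p_a = (xm + h * (y1 - y0) / d, ym - h * (x1 - x0) / d)
    p_b = (xm - h * (y1 - y0) / d, ym + h * (x1 - x0) / d)
    return [p_a] if h == 0 else [p_a, p_b]

# Toy anchors: one "BK" at (0, 0), another at (10, 0); measured distances 6 and 8.
# Yields two mirror-image candidates, roughly (3.6, +4.8) and (3.6, -4.8);
# a third distance measurement would pick between them.
print(circle_intersections((0, 0), 6, (10, 0), 8))
```

    In the Florence case, the “third measurement” role is played by the street-name clue, which rules out one of the two geometric candidates.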


  • Huh, go figure. Thanks for the info! I honestly never would have found that myself.

    I still think it should be possible to use in:channel in the channel-specific search, though. It’s one less button press, and it can’t be that confusing UX-wise since your intent is clear when you do it (if anything, the two searches working differently is the more confusing part).





  • Copilot, yes. You can find some reasonable alternatives out there but I don’t know if I would use the word “great”.

    GPT-4… not really. Unless you’ve got serious technical knowledge, serious hardware, and lots of time to experiment, you’re not going to find anything even remotely close to GPT-4. Probably the best the “average” person can do is run a quantized Llama-2 on an M1 (or better) MacBook, making use of the unified memory. Lack of GPU VRAM makes running even the “basic” models a challenge. And, for the record, this will still perform substantially worse than GPT-4.
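    To make the memory point concrete, here’s a back-of-the-envelope sketch of weight storage alone. The model sizes and bit widths are illustrative, and this deliberately ignores the KV cache and activations, which add several more GB on top:

```python
def quantized_model_size_gb(n_params_billion, bits_per_param):
    """Rough GB needed just to hold the weights of a quantized model."""
    return n_params_billion * 1e9 * bits_per_param / 8 / 1e9

# Llama-2 7B at 4-bit: ~3.5 GB -> fits on most consumer GPUs.
print(quantized_model_size_gb(7, 4))
# Llama-2 70B at 4-bit: ~35 GB -> exceeds any single consumer GPU's VRAM,
# which is why 64 GB of Apple unified memory is an attractive workaround.
print(quantized_model_size_gb(70, 4))
```

    This is why the larger, more capable variants are out of reach on a typical gaming GPU with 8–24 GB of VRAM, while a Mac with enough unified memory can at least load them.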

    If you’re willing to pony up, you can get some hardware on the usual cloud providers but it will not be cheap and it will still require some serious effort since you’re basically going to have to fine-tune your own LLM to get anywhere in the same ballpark as GPT-4.



  • isildun@sh.itjust.works to Memes@lemmy.ml · “Sure it is”
    Definitely AI-generated. Look at the bottom-right of the Confederate flag: it’s all messed up, classic generative-AI “artifacting”, for lack of a better word.

    Edit: lower down in the thread the original was posted. This was upscaled (very poorly) by AI.