• 0 Posts
  • 91 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • AI made those discoveries. Yes, it is true that humans made AI, so in a way, humans made the discoveries, but if that is your take, then it is impossible for AI to ever make any discovery.

    if that is your take, then a lot of keyboards have made a lot of discoveries.

    an AI could make a discovery if there were one (an actual ai). there is none at the moment, and there won’t be any for the foreseeable future.

    a tool that generates statistically probable text without really understanding the meaning of the words is not intelligence in any sense of the word (a toy sketch at the end of this comment shows what i mean).

    your other examples, like playing chess, are just cases of applying computers to brute-force a specific, mundane task, which is obviously something computers are good at and have been used for since we have had them, but again, that does not constitute thinking, or intelligence, in any way.

    it is laughable to think that anyone knows where the current rapid trajectory will stop for this new technology, and much more laughable to think we are already at the end.

    it is also laughable to assume it will just continue indefinitely, because “there is a trajectory”. a lot of technologies have some kind of limit.

    and just to clarify, i am not some anti-computer, back-to-the-trees type. i am eager to see what machine learning models will bring to the field of evidence-based medicine, for example, which is something where humans notoriously suck. but i will still not call it “intelligence” or “thinking”, or “making a discovery”. i will call it synthesizing more data than would be humanly possible and finding a pattern in it, and i will consider that a cool result, no matter what we call it.
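
    to make the “statistically probable text” point concrete, here is a toy sketch in python (my own made-up example, nothing to do with how any real llm is actually built): it strings words together purely from counted frequencies, with no meaning anywhere in the loop.

    ```python
    # toy bigram "language model": picks the next word purely by how often
    # it followed the previous word in the training text. no meaning involved.
    import random
    from collections import defaultdict

    corpus = "the cat sat on the mat and the dog sat on the rug".split()

    # count which word follows which
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    def generate(start, length=8):
        word, out = start, [start]
        for _ in range(length):
            options = following.get(word)
            if not options:
                break
            word = random.choice(options)  # statistically probable, nothing more
            out.append(word)
        return " ".join(out)

    print(generate("the"))
    ```

    real llms are enormously bigger and weigh whole contexts instead of single words, but “pick a plausible next token” is still the mechanism; understanding is not part of it.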


  • In general, “The technology is young and will get better with time” is not just a reasonable argument, but almost a consistent pattern. Note that XKCD’s example is about events, not technology.

    yeah, no.

    try comparing horse speed with the ford model t and blindly extrapolating that into the future. look at moore’s law. technology does not just keep growing upwards if you give it enough time; most of it has some kind of limit (toy sketch at the end of this comment).

    and it is not out of the realm of possibility that llms, having already stolen all of human knowledge from the internet, found that it is not enough, and started spewing out bullshit as a result of that monumental theft, have already reached that limit.

    that may not be the case for every machine learning tool developed for some specific purpose, but the blind assumption that it will just grow indiscriminately, because “there is a trend”, is overly optimistic.
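
    for the extrapolation point, a quick numeric sketch in python (made-up numbers, not a model of any real technology): fit the “trend” on the early part of an s-curve and see where the blind forecast lands.

    ```python
    # toy illustration: naive exponential extrapolation of a process
    # that actually saturates (a logistic curve). all numbers are invented.
    import math

    capacity = 100.0   # the hidden limit
    growth = 0.5

    def logistic(t):
        return capacity / (1 + math.exp(-growth * (t - 10)))

    # observe only the early, fast-growing years and fit an exponential rate
    early = [(t, logistic(t)) for t in range(0, 6)]
    (t0, y0), (t1, y1) = early[0], early[-1]
    rate = (math.log(y1) - math.log(y0)) / (t1 - t0)

    def naive_forecast(t):
        return y0 * math.exp(rate * (t - t0))

    for t in (10, 20, 30):
        print(t, round(logistic(t), 1), round(naive_forecast(t), 1))
    ```

    the early points look like clean exponential growth, and the blind forecast ends up orders of magnitude above the real curve once the limit kicks in.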




  • For example, AI has discovered

    no, people have discovered those things. llms were just a tool used to manipulate large sets of data (instructed and trained by people for the specific task), which is something computers are obviously better at than people. but just as we don’t say “the keyboard made a discovery”, the llm didn’t make a discovery either.

    that is just intentionally misleading, as is calling the technology “artificial intelligence”, because there is absolutely no intelligence whatsoever.

    and comparing that to einstein is just laughable. einstein understood the broad context and principles and applied them creatively. an llm doesn’t understand anything. it is more like a toddler watching its father shave and then moving a lego piece across its face, pretending to shave as well, without really understanding what shaving is.