• besselj@lemmy.ca
    1 month ago

    Call it what you will, but all signs seem to indicate that generative AI is simply not as profitable as the evangelists want it to be.

    • Voroxpete@sh.itjust.works
      1 month ago

      Not even remotely. LLMs have failed to find any viable market fit.

      The problem continues to be hallucinations and limited utility. This is compounded by the fact that LLMs are very expensive to run. The latter problem wouldn’t really be a problem if LLMs were truly capable of replacing a human employee, but they’re not. They’re just too unreliable for any serious enterprise grade application, and they’re too expensive for any low severity application.

      For example, as a coding assistant, a lot of people quite like them. But as a replacement for a human coder, they’re a disaster. That means you still have to employ the expensive human, and you also have to pay an exorbitant monthly fee for what amounts to a very cool search engine.

      There are tonnes of frivolous applications where they work really well. The AI girlfriend stuff, for example. A chatbot that sexts you is a very sellable product, regardless of how icky it might seem to some people. But no one is going to pay over $200/month for it (and OpenAI reportedly still doesn’t turn a profit even on ChatGPT’s $200/month tier).

      LLMs are too unreliable to make anything better than toys, but too expensive to sell as toys.

      • hubobes@sh.itjust.works
        1 month ago

        That is not true; we use vision/LLM models to extract data from documents and structure it. It is amazingly easy compared to OCR, where you need to know where each piece of information is located.
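
        The pattern described above (asking a model for structured output instead of reading fields at fixed positions) can be sketched roughly as follows. This is a hypothetical illustration, not their actual pipeline: `call_model` is a stand-in stub for whichever vision/LLM API is used, and the invoice fields are made up.

```python
import json

def call_model(document_text: str, prompt: str) -> str:
    # Stand-in for a real vision/LLM API call. In practice this would send
    # the document (text or image) plus the prompt to the model endpoint.
    # Canned response for illustration only.
    return json.dumps({
        "invoice_number": "INV-1042",
        "date": "2025-01-15",
        "total": 129.90,
    })

EXTRACTION_PROMPT = (
    "Extract invoice_number, date (ISO 8601) and total (number) "
    "from the document. Reply with JSON only."
)

def extract_fields(document_text: str) -> dict:
    """Ask the model for structured fields, then validate the JSON.

    Unlike template-based OCR, no field coordinates are configured:
    the model locates the information in the document itself.
    """
    raw = call_model(document_text, EXTRACTION_PROMPT)
    data = json.loads(raw)
    missing = {"invoice_number", "date", "total"} - data.keys()
    if missing:
        raise ValueError(f"model omitted fields: {sorted(missing)}")
    return data
```

        The key difference from classic OCR extraction is that the layout knowledge lives in the prompt, not in per-template coordinates, so a new document format usually needs no new configuration.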