Hacker News

The general public is being misled about LLMs/AI, and it's dangerous. These are indeterminate systems. We CANNOT know what they are going to output.

A product like this makes it very difficult to verify what it is telling you.

As others have pointed out, their own product launch video has several inaccuracies in it.




But, but, AI is just a tool!!! No, it's not. If something else is making decisions for you, you're the tool.


Isn't this just the Halting problem? No software has the property of being "determinate".


I dunno man. I can guarantee that this program will either not run or output "Hello world" on your screen:

    #include <iostream>

    int main() {
        std::cout << "Hello world";
        return 0;
    }
Seems like you can determine for some programs what they will do.


Yes, we can determine that, but the point of the halting problem is that there is no general algorithm that can determine whether an arbitrary program halts. So if we count the human mind as an algorithm, then there exists some program for which we cannot determine whether it halts. Theoretically.


Termination != determinism



