Hacker News | gsuuon's comments

This makes sense; effective post-mortems don't focus on assigning blame, they try to identify and fix the problem. The issue with AI is that it's such a black box right now that this process is likely not feasible. If an AI makes a decision that turns out to be wrong, there's no reliable way to identify and fix the root cause. You can prompt-engineer, fine-tune, or re-train the model, but in the end you can only hope the issue has been fixed. I think the black-box nature of AI makes it different from other systems we use.


I think this could be one of the more legitimate uses of blockchain: distributed communications, contacts, and a refundable pay-per-call system to make spam calling uneconomical. Communication in general desperately needs an overhaul; phones are effectively useless as phones nowadays.
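To make the refundable pay-per-call idea concrete, here's a minimal sketch of the escrow mechanics (all names hypothetical, and the chain/settlement layer is elided): a caller locks a small deposit before the call connects, the callee refunds it for a legitimate call, and keeps (or burns) it for spam, so mass cold-calling becomes expensive while normal calls stay free.

```python
# Hypothetical sketch of a refundable call-deposit escrow.
# Balances are plain dicts here; a real system would settle on-chain.

class CallEscrow:
    def __init__(self, deposit_amount: int):
        self.deposit_amount = deposit_amount
        self.deposits = {}  # call_id -> (caller, amount)

    def place_call(self, call_id: str, caller: str, balances: dict) -> bool:
        """Caller locks a deposit before the call is connected."""
        if balances.get(caller, 0) < self.deposit_amount:
            return False  # can't afford the deposit: call is rejected
        balances[caller] -= self.deposit_amount
        self.deposits[call_id] = (caller, self.deposit_amount)
        return True

    def refund(self, call_id: str, balances: dict) -> None:
        """Callee marks the call legitimate: deposit returns to the caller."""
        caller, amount = self.deposits.pop(call_id)
        balances[caller] += amount

    def forfeit(self, call_id: str, callee: str, balances: dict) -> None:
        """Callee flags spam: the deposit goes to the callee (or is burned)."""
        _, amount = self.deposits.pop(call_id)
        balances[callee] = balances.get(callee, 0) + amount
```

The key property is that a legitimate caller's net cost per call is zero, while a spammer forfeits the deposit on nearly every call.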


Is it just me, or does it seem like the Catholic Church might have a better grasp on technology than the US government?

  46. While responsibility for the ethical use of AI systems starts with those who develop, produce, manage, and oversee such systems, it is also shared by those who use them. As Pope Francis noted, the machine “makes a technical choice among several possibilities based either on well-defined criteria or on statistical inferences. Human beings, however, not only choose, but in their hearts are capable of deciding.”[92] Those who use AI to accomplish a task and follow its results create a context in which they are ultimately responsible for the power they have delegated. Therefore, insofar as AI can assist humans in making decisions, the algorithms that govern it should be trustworthy, secure, robust enough to handle inconsistencies, and transparent in their operation to mitigate biases and unintended side effects.[93] Regulatory frameworks should ensure that all legal entities remain accountable for the use of AI and all its consequences, with appropriate safeguards for transparency, privacy, and accountability.[94] Moreover, those using AI should be careful not to become overly dependent on it for their decision-making, a trend that increases contemporary society’s already high reliance on technology.
That is, "an AI told me so" should never be a valid excuse for anything.

I also really liked:

  62. In light of the above, it is clear why misrepresenting AI as a person should always be avoided; doing so for fraudulent purposes is a grave ethical violation that could erode social trust. Similarly, using AI to deceive in other contexts—such as in education or in human relationships, including the sphere of sexuality—is also to be considered immoral and requires careful oversight to prevent harm, maintain transparency, and ensure the dignity of all people.[124]
I think it should be a legal requirement that an AI identifies itself as such when given certain key phrases, with no way to prompt-engineer that behavior out.

Really interesting read overall, thanks for sharing.


I'm confused: is he saying that the other voice on the call is Google Assistant's voice AI? Or did the assistant just route the call through the Google number?


I'm really hopeful for e-ink or low-fidelity devices to help wean us off media addiction. Hopefully Nothing pursues something in that space, since it aligns with their mission. I'd love to switch most of my work screens to e-ink and only have 'normal' screens for explicit recreation time.


I was impressed this is open source, then impressed it was done by one person, then impressed it only took 6 months, and eventually somehow impressed again that it was a _high schooler_. My mind is blown. Kudos for managing to do something insane like this. Very inspirational.


I tried this via the chat website and it got it right, though it strongly doubted itself. Maybe the specific wording of the prompt matters a lot here?

https://gist.github.com/gsuuon/c8746333820696a35a52f2f9ee6a7...


I'm a little suspicious of the Isaac Newton example. The values for the better answer are very close to the alternative's; I wonder if the ordering holds up against small rewordings of the prompt?

Another approach, if you're working with a local model, is to ask for a one-word summary and then work with the resulting logits (I wish I could find the article/paper that introduced this). You could compare similarity by just counting how many shared tokens appear in the top 500 of two queries, for example.
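The top-500 comparison above can be sketched in a few lines. This assumes you already have a next-token logit vector per query from a local model (e.g. the final-position logits from a transformers-style forward pass); the function itself is just set overlap over the top-k token ids.

```python
# Sketch of comparing two queries by overlap of their top-k next-token logits.
# logits_a / logits_b are assumed to come from a local model's forward pass.
import numpy as np

def top_k_overlap(logits_a: np.ndarray, logits_b: np.ndarray, k: int = 500) -> float:
    """Fraction of token ids shared between the top-k logits of each query."""
    top_a = set(np.argsort(logits_a)[-k:].tolist())
    top_b = set(np.argsort(logits_b)[-k:].tolist())
    return len(top_a & top_b) / k

# Sanity check with synthetic logits: a vector fully overlaps with itself.
rng = np.random.default_rng(0)
a = rng.normal(size=32_000)  # pretend vocab-sized logit vector
print(top_k_overlap(a, a))  # -> 1.0
```

A score near 1.0 means the two "one-word summary" distributions are nearly interchangeable; unrelated queries should share far fewer of their top tokens.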


I wonder if "build it and they will come" is just flat-out wrong, or only correct for certain products. Is there anything one can "just build" now and expect some market adoption?


    1. Build something no one can live without.
    2. If it already exists, the market is too competitive.
  
You have to build something that doesn't exist and that would still be considered almost vital.


Fantastic that progress is being made on this. Hopefully it's enough to stem the tide, though consumer behavior around calls has probably already fundamentally shifted. It'll take a long, _long_ time before folks who have stopped picking up are comfortable answering a random call again.

