
does it use whisper.cpp?


if it's not end-to-end encrypted, what does that mean? what's the method by which govts access these messages?


You can simply join those channels. Getting an invite is not hard, and may not even be necessary, from what I hear.


End-to-end encrypted means that the server doesn't have access to the keys. When the server does have access, it can read messages to filter them or give law enforcement access.
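
A minimal sketch of the distinction, using the PyNaCl library (names and messages illustrative, not any particular messenger's design): with E2E encryption, key generation and decryption happen only on the clients, so the server can merely relay ciphertext.

    # pip install pynacl
    from nacl.public import PrivateKey, Box

    alice = PrivateKey.generate()   # private keys live on the devices,
    bob = PrivateKey.generate()     # never on the server

    sealed = Box(alice, bob.public_key).encrypt(b"meet at noon")
    # The server stores and forwards `sealed` but cannot read it.
    # Only Bob, holding his private key, can open it:
    assert Box(bob, alice.public_key).decrypt(sealed) == b"meet at noon"

A server-side design differs only in where the keys sit: the server holds them, so it can decrypt, filter, or hand messages over.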


how should a candidate respond if they aren't sure what range this role should pay?


Answer honestly: that their pay expectations depend on the details of the job.


how do SWEs get into finance?


I see quite a few SWE jobs here in Singapore in finance, mostly realtime C++ order management. If the advertised salaries are real, they're very well-paid (300-700k USD, plus bonus).


The path of least resistance is to become a well known C++ or systems guru.

This can be done by contributing substantially to the language standard, a compiler, etc etc.

Or you could learn COBOL!


First step: understand what Finance means in the scope of technology work.


i don't have a clue



Can someone please convince me why i shouldn't be absolutely shit-out-of-my-mind cynical about this innovation? we are literally seeing the downfall of trust in society. and no, i don't believe i am exaggerating


We have adapted to monumental shifts in how we develop trust in society for as long as society has existed - from the printing press to photography to the Internet to CGI to ...

I don't see this as any different. We will determine new ways of establishing trust. They'll certainly have flaws, as establishing trust in a society always has, but we'll learn to recognize those flaws and hopefully fix them.

Beyond that, what's the alternative? Banning the technology? That doesn't seem feasible for various reasons, not least of which is that it isn't going to stop bad actors. Another pretty good reason is that it's just not really possible - anyone with enough compute can build LLMs now.

As a bit of an aside, why hasn't society fallen yet? I mean, ChatGPT has been around for a couple years now, and I've been hearing about how LLMs are the single greatest threat to civilized society we've ever faced... yet they don't seem to have had a major impact.


The big difference is that there was a very high bar to forging photographs, and most news was gated (with the ability to easily find and sue those guilty of slander/libel).

Now it's utterly trivial to forge, to libel, to slander, and in many cases there is no easy path to sue.

While you can say "yes, but..." to the above, that's the reality we've lived with for 150 years, barring extremely rare edge cases. All this has changed over the course of a couple of decades, with most of that change in the last 10, and concentrated in the last 2 years.

Beyond that, it took significant effort and labour to create fake stories and images. People had to be experts, or at least skilled wordsmiths. Now, click click, and fake generated stories abound. In fact, they're literally everywhere. There's absolutely no comparison here.

Now, in the time it used to take one person to generate one fake story, you can generate millions if you have the cash. Really, it's the same problem as spam phone calls and spam email.

You didn't get 1000 spam letters in the mail in the 80s, because that cost money. Email was free, thus spam became plentiful. The same with spam phone calls: each call used to cost hard cash; now it's pennies per hundred automated calls, so spam calls abound.

The same is happening with all content on the internet. Realistically the web is now dead. It's now gone. Even things such as Wikipedia are going to die, as over the next 2 to 3 years LLM output will become utterly indistinguishable from human output in all aspects.


> The same is happening with all content on the internet. Realistically the web is now dead. It's now gone. Even things such as Wikipedia are going to die, as over the next 2 to 3 years LLM output will become utterly indistinguishable from human output in all aspects.

Like I said, people have been saying this exact thing for a couple years now. I'm sure I'll be hearing the same in a couple more.


Compared to the 90s and the 00s, the web is useless.


IMO this will just shift power back to journalists. Not the worst outcome.


Journalists need a system of digitally signing "News" as testimony so news outlets can verify the source of information.

We should train users to ignore news from unverified sources.

We should observe and track the reputation of journalists, and stop broadcasting testimony that is untrustworthy.
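
As a rough sketch of what signed testimony could look like (hypothetical names, ed25519 via PyNaCl; a real system would need key distribution and revocation on top): the journalist publishes a verify key once, then signs each report, and any tampering makes verification fail.

    from nacl.signing import SigningKey
    from nacl.exceptions import BadSignatureError

    signing_key = SigningKey.generate()   # held privately by the journalist
    verify_key = signing_key.verify_key   # published, e.g. on the outlet's site

    report = b"2024-05-13: I personally witnessed the following..."
    signed = signing_key.sign(report)

    verify_key.verify(signed)             # passes: genuine testimony
    try:
        # any edit to the text breaks the signature:
        verify_key.verify(signed.message + b"!", signed.signature)
    except BadSignatureError:
        print("tampered or forged report")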


Luckily, the same person bringing this tech to market (Sam Altman) sells the solution to the trust problems his tech will create (WorldCoin).


If you mean voice cloning, they aren't bringing that tech to market. (Someone else will, though.)

Similarly, Google doesn't let you use face matching in image search to find every photograph of you on the web, even though they could, and quite similar technology is built into Google Photos.


We are just finally seeing the mainstream get the idea they shouldn't trust anything on the internet. Long overdue.


Before these developments it was impossible to fake a politician saying arbitrary stuff. This is a major shift, there's no denying that.


That's the thing though, it wasn't impossible, just harder - but people believed whatever they saw, so it was done regardless of the costs. Now it's easier and people are finally getting sceptical, as they should've 20+ years ago.


Deep faking voices has been real and super easy for years via 11labs etc. This doesn’t change that at all.


> Before these developments it was impossible to fake a politician saying arbitrary stuff.

Voice impersonation has been possible forever, actually; and its use for misinformation (as well as for less nefarious things like entertainment) is hardly novel. (Same with impersonation that goes beyond voice.)


I think the other comments make a good argument about how other forms of technology have also degraded trust, but that we've found a way through. I'll also add that I think one potential way we could reinstate trust is through signed multimedia. Cameras/microphones/etc could sign the videos/audio they create in a way that can be used to verify that the media hasn't been doctored. Not sure if that's actually a feasible approach, but it's one possibility.


It's feasible with advanced enough tech. The hard part isn't getting cameras to sign the files they produce. The hard part is to preserve the chain of custody as images are cropped, rescaled, recompressed etc. You can do it with tech like Intel SGX. But you also need serious defense of the camera platforms against hacking, of the CPUs, of the software stacks. And there's no demand. News orgs feel they should be implicitly trusted due to their brands, so why would they use complicated tech to build trust?
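
To make the chain-of-custody problem concrete, a toy illustration (file names hypothetical): the camera's signature covers the exact raw bytes, and any re-encode produces entirely different bytes, so the original signature proves nothing about the version viewers actually receive.

    import hashlib

    raw = open("clip_raw.mp4", "rb").read()      # what the camera signed
    camera_hash = hashlib.sha256(raw).hexdigest()

    # After crop/rescale/recompress, every byte changes:
    encoded = open("clip_web_720p.mp4", "rb").read()
    assert hashlib.sha256(encoded).hexdigest() != camera_hash
    # Verifying the camera's signature against the published file now fails,
    # which is why each transform step needs its own attestation.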


They might use complicated tech because things are changing due to AI in a way that could degrade their brands' trust. By having camera-signed videos, when folks create e.g. deepfakes of their news anchors/brands, there's a way for consumers to verify what's real. It lets their brands become more trustworthy.

Yeah, preserving the chain of custody is hard. I was thinking there are a few options: (1) the signature of the original video could stay attached even after editing/compression, and the news org would let you look up the signature. That way, if someone copied the signature and stuck it on a fake video, you could look up the original video that actually matches that signature and determine whether something has been doctored. Or (2), you could have editing software add a signature attesting to the edits made: e.g. compression, rescaling, etc.

And then a law to make it illegal to remove/tamper with/falsify a signature, like we have for DVDs, to allow some form of prosecution. The hardware stack is a little easier to protect; the software stack less so. But if we can do it with things like e.g. browser DRM or HTTP signatures, maybe we can with media editing software? I'm not versed enough in cryptography to really know.

And cool will read up on intel SGX, thanks!


I don't think there's any need for laws here. After all, the whole point of a digital signature is that if it's removed or tampered with, that's detectable (assuming you expect it to be signed in the first place).

I've thought about this sort of approach many times in the past, and also done a lot of work with SGX and similar tech that implements remote attestation (RA). So I know how to build this sort of system conceptually. RA lets you do provable computations where the CPU can sign data, and the "public key" contains a hash of the program and data along with certificates that let you check the CPU was authentic. And it runs the program in a special CPU mode where the kernel and other things can't access the memory space. That's all you need to do verifiable computation.

So to preserve chain of custody you just have a set of filters or transforms that are SGX enclaves running ffmpeg or whatever, and each one attaches the attestation data to the output video which includes a hash of the input video. Then you gather up a certificate+signature over the original raw video from the camera (the cert is evidence the camera is authentic and the key is protected by the camera - you can get this from iPhone cameras), then an org certificate showing it came from a certain company, and then the attestation evidence for each transform. A set of scripts lets people verify all the evidence.
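
A toy version of that evidence chain (plain ed25519 signatures standing in for the SGX attestation evidence, all names illustrative): the camera signs the raw hash, each transform signs an "input hash -> output hash" record, and a verifier walks the records back to the camera.

    import hashlib
    from nacl.signing import SigningKey

    sha = lambda b: hashlib.sha256(b).hexdigest()

    camera_key = SigningKey.generate()    # stand-in for the camera's fused key
    enclave_key = SigningKey.generate()   # stand-in for attestation evidence

    raw = b"<raw sensor footage>"
    cam_sig = camera_key.sign(f"raw:{sha(raw)}".encode())

    encoded = b"<h264 re-encode of raw>"  # output of the attested transform
    record = enclave_key.sign(f"encode:{sha(raw)}->{sha(encoded)}".encode())

    # Verifier: check both signatures, then confirm the hash chain links the
    # camera's raw hash to the hash of the file actually being viewed.
    camera_key.verify_key.verify(cam_sig)
    rec = enclave_key.verify_key.verify(record).decode()
    assert rec == f"encode:{sha(raw)}->{sha(encoded)}"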

The problem is, after doing some business case analysis, I concluded it would only really be useful in some very small and specific cases:

1. Citizen journalists who are posting videos online that then get verified by news orgs. So the other way around. In this case the origin camera would use iPhone App Attestation to generate the source certificate, and all the fancy attested transform stuff isn't really important because it's the news org doing the transforms and doing the verifying.

2. Phone cam shots for insurance and other similar use cases. There is some business potential here, but it'd be sales force heavy as nobody knows the tech exists and deepfake fraud may not be a big enough problem for them to care (yet ...). If someone is looking for a startup idea, have this one for free.

3. Very new news companies that don't have any reputation yet and want to stand out from the crowd.

The thing is, for (3) or any place where a news org wants to increase the trust of viewers, you don't need cryptography. That's just over-complicating things. You can just put a short random four-letter code into the chyron that's unique to the particular shot you see on screen. Then on your website you have a page where the original unedited files can be downloaded by supplying the code. If you use cameras that produce cryptographic evidence like timestamps, that's gravy, and for brownie points you could publish video hashes into an unforgeable replicated log to stop yourself backdating footage. For most people that will be more than good enough. The sort of thing that causes people to lose trust in media is not CNN broadcasting outright deepfakes, although that will happen eventually, but when they engage in selective editing, drop stories entirely, use archive footage and misrepresent it as something new, etc.
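
The "unforgeable replicated log" part can be as simple as a hash chain that third parties mirror - a sketch (toy code, no particular log service implied):

    import hashlib, json, time

    log = [{"prev": "0" * 64, "video_sha256": None, "ts": 0}]  # genesis entry

    def publish(video_bytes):
        # Each entry commits to the previous one, so inserting or backdating
        # an entry later would change every subsequent hash.
        prev = hashlib.sha256(
            json.dumps(log[-1], sort_keys=True).encode()).hexdigest()
        log.append({"prev": prev,
                    "video_sha256": hashlib.sha256(video_bytes).hexdigest(),
                    "ts": int(time.time())})

    publish(b"<tonight's raw footage>")
    # Copies held by mirrors make silent rewrites detectable by diffing.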

The worst kind of fakery I've seen mainstream media engage in was Channel 4 UK's recent broadcast of a fake news segment, in which they "secretly filmed" someone who was pretending to be a racist Reform activist. People on X swiftly discovered that the person on-screen wasn't an activist at all but a professional actor, who had been putting on a fake accent the whole time (that he even advertised on his website). It looks for all the world like C4 broadcast entirely and truly fake news, knew they were doing it, and when they were called on it they just flat out refused to investigate knowing the British establishment was behind them all the way, as Reform is unpopular with the civil servant types who are supposed to police the media.

Unfortunately, for that kind of fakery there is no technological solution. Or, well, there is, but it's called social media + face recognition, and we already have it.


Why would the downfall of the kinds of trust that previously existed - largely unjustified, and frequently exploited to harmful ends - be bad?


The way I see it LLMs are just making it more obvious to see all the flaws with the existing levels of trust. Humans have never had access to universal truths, or universal ways of validating anything. Any claim anybody makes could be intentionally or unintentionally deceitful or untrue. The idea that there are sources you can trust to do your thinking for you is the more dangerous illusion in my opinion, and I’m not convinced that society will be harmed by poking some holes through it.


> The idea that there are sources you can trust to do your thinking for you is the more dangerous illusion in my opinion, and I’m not convinced that society will be harmed by poking some holes through it

There is no alternative to this idea. It is completely impossible for an individual to possess all of the knowledge of everything that affects their lives. The only option for getting some of this information is going to trusted sources that compile it and present some conclusions.

This applies just as much to scientific knowledge as it does to medicine or to politics.

If you want to avoid trusting any authority, it's hard to even confirm that North Korea exists. Confirming that it is ruled by an authoritarian regime and that it possesses nuclear weapons is impossible. And yet it's a trivial bit of info that everyone agrees on - imagine what avoiding trusted authorities would do to knowledge about other more subtle or more controversial topics.


> There is no alternative to this idea.

I said do your thinking for you, not do your information gathering for you.

I would suggest that you do not trust any single source to only ever tell you things that are true. If there’s a topic you want to know something about it’s a much better course of action to look at multiple different sources, and do your own thinking to come to your own conclusions.

There are no authorities who can reliably take on this role for you, and LLMs don’t change this. The same is true with science. Even prior to LLMs, the replication crisis should have shown that a single paper on any topic can’t be relied on to contain any truth (the same would be true even in the absence of a replication crisis for that matter).


Oh, then I misunderstood. I thought you were against the notion of trusting a source for information, at all.

I very much agree with your actual point - that no source should be trusted absolutely, and that the only way to get a decent-to-solid idea on a topic is to consume multiple sources on that topic.

However, the problem is that even then people have relatively little time. It's important to have sources that one can rely on to be relatively accurate with a high probability, to get some vague idea about a topic you're not deeply invested in, but do care about somewhat. And I think this is where LLMs can hurt the most.


> The idea that there are sources you can trust to do your thinking for you is the more dangerous illusion in my opinion

The difference between economically successful countries like the US and the peripheral countries is that we are a high-trust society.

I don't spend 100 hours chemically testing my food, because I have faith it is safe to eat. I don't waste money on scam after scam, because I have faith most businesses are legitimate. If I'm a business, I can order goods and materials and trust they will meet the spec.

Our outsourcing of that trust to other people is what makes us economically successful.

Other countries which don't have this trust focus on basic tasks: gathering food, water, shelter, and basic infrastructure. Because ultimately every man is out for himself. They aren't building software and airplanes and whatnot. Because as complexity increases, more people are involved, and therefore more trust is required. Trust is required because of the fundamental limitations of human meat space - we have limited time and survival needs.


Attacks on trust have been around for as long as we have had trust. Generative AI makes some attack vectors easier, but it's nothing new: if your trust is earned by a voice you recognise, then your model for trust has been broken since before most of us were born.

https://en.wikipedia.org/wiki/Impersonator


I think you're seeing the lack of trust exposed.

Contracts allow for recompense when trust is broken. We've been signing contracts forever.

Every automated system needs someone to jail in the event of failure.


If everyone is too cynical, then companies like OpenAI won't be able to make any money on it. You wouldn't want that, right??


what model are you using?


isn't the twitter api super expensive? how did you get around that?


if i asked for advice on buying a car, would you recommend a steel factory?


The reasonable answer lies somewhere between recommending a driver and recommending a steel factory. Proprietary solutions lean toward the former; FOSS solutions lean toward the latter.


okay


not sure why this is news. the babies are given "doses of peanut powder each day for at least two years"...

"world-first peanut allergy treatment"... that's like saying milk is a "treatment" for getting your daily calcium. all they're doing is feeding them peanut butter so their bodies get used to it from an early age.

israel has a lower rate of peanut allergy per capita because the most popular snack there is bamba (a peanut snack); any parent can do the same by offering more food variety to their children

https://www.sciencedirect.com/science/article/abs/pii/S22132....


It's news because this is a treatment for a condition that kills people - a few hundred a year in the US alone: https://health.howstuffworks.com/diseases-conditions/allergi...


>not sure why this is news.

The babies being given the treatment are already allergic to peanuts. This is the opposite situation from the early-exposure recommendations, including those in Israel.

> israel has a lower rate of peanut allergy per capita because the most popular snack there is bamba

Infant allergy rates are similar in Israel; however, a specific dip in peanut allergies there was a motivating factor for looking into this approach.

>any parent can do the same by offering more food variety to their children

This can kill an infant with a peanut allergy. The science on infant allergy is far ahead of randomly experimenting with your child.

Sources:

https://www.jaci-inpractice.org/article/S2213-2198(20)30825-...

https://www.nature.com/articles/d41586-020-02782-8


If children regularly suffered from calcium deficiency, at ever-increasing rates, that would also be news.

This is a novel approach that few are presently doing deliberately.


Following the doctor's advice, as a kid I was regularly given a strawberry (or pieces of one, I can't remember) to overcome my strawberry allergy. It worked fine; now I have zero trouble with them. I haven't researched how many allergies can be addressed this way, but for my sample size of one it worked just as the doctor said.

