Honestly, I'm starting to get burned out on the political hysteria around GPT.
Yes, GPT is biased. We know, because they openly say that they're biasing GPT.
Thing is, they don't think it's bias. The people biasing ChatGPT have a worldview within which they're not biasing anything; they're just being good people.
Of course, those of us who live in reality know that that's not true, and the world is too complicated for such Manichaean morality. But pointing it out repeatedly is both unnecessary (since we can just read their own words) and unhelpful (since the people doing it won't listen).
Consider this paper published recently https://cdn.openai.com/papers/gpt-4-system-card.pdf. In their list of possible risks from GPT, they list "Harmful Content" second, higher than "Disinformation". Higher than "Proliferation of Weapons". Higher than "Cybersecurity" and "Risky Emergent Behaviours" (which is a euphemism for the Terminator situation). And in case there's any ambiguity, this is how they operationalize "Harmful Content":
> Language models can be prompted to generate different kinds of harmful content. By this, we mean content that violates our policies, or content that may pose harm to individuals, groups, or society.
...
> As an example, GPT-4-early can generate instances of hate speech, discriminatory language, incitements to violence, or content that is then used to either spread false narratives or to exploit an individual. Such content can harm marginalized communities, contribute to hostile online environments, and, in extreme cases, precipitate real-world violence and discrimination. In particular, we found that intentional probing of GPT-4-early could lead to the following kinds of harmful content
A bunch of stuff stands out here, but the first thing is that they _define_ "harmful content" as "content that violates our TOS". Whatever their TOS says, it is an arbitrary set of rules that they have chosen to enforce, and it could just as easily be a different set of rules. Those rules aren't based on some set of universal principles (or else they'd just write the principle there!). This is them quite literally saying "anything GPT says that we don't want it to say is 'harmful'".
OBVIOUSLY GPT is going to have bias when its safety researchers are openly stating that not having bias is a safety issue. Just because 80% of the people using GPT agree with the bias doesn't make it not bias.
> The problem is that no automatic method we have now will catch context.
I disagree. The problem is that nobody is willing to be realistic about the limitations of automated moderation and proceed accordingly.
If we can't create an automatic method that catches context, the solution isn't to bemoan that AI can't magically do what we want. The solution is to remove the rules that require AI to understand context in the first place, because it is fundamentally outside of our technical ability, and any attempt to achieve it will fail.
The problem is people who think that context-based censorship is reasonable for a massive platform. It simply is not. It is reasonable at an individual level. It is reasonable at an interpersonal level. It's even reasonable at a small-group level, where specific individual human beings who are invested in the community can be aware of these context issues.
It is not reasonable at Facebook scale, full stop. Facebook should not be in the business of deciding to ban things like this. That is a responsibility that belongs at a lower level. What does that look like in practice?
If an individual posted it on their wall:
* That individual uses their judgement and chooses to post it or not
* The people who see it use their judgement and click the block button if they don't like it
If an individual posted it in a small group:
* The group can socially police such actions by commenting that they are upset by it
* The group's administrators can privately reach out to the person who posted it, explain that they can't post such things in that group, explain why, and explain what actions they could take to remain in good graces
* The group's administration can make a judgement call and remove the post, not on the basis of crude keyword detection, but on the basis of human understanding
If an individual posted in a large group:
* The large group can adopt clear and unambiguous rules that do not require context to administer, and enforce them accordingly on a case-by-case basis
* The large group can pre-commit to not dealing with such issues, and require their members to deal with it privately, like human beings
Trying to automate this process will always fail, and it will cause massive false-positive and false-negative issues as it does so. Engineers used to understand these concepts when I first entered the industry 20 years ago. It's very disappointing to me that they either can't or won't now.
I was 2 years out of college, and the tiny local software consultancy I worked for went out of business after our clients refused to pay their invoices. (My hometown is notoriously stingy). My two friends/coworkers both took that opportunity to move away, one to Ottawa and one to NYC.
At the same time, I had a friend on IRC who worked for Mozilla and she sold me on moving to the Bay Area. I have since come to believe that she sold me a bill of goods, but at the time she made it sound like an amazing place to live.
I didn't have very many social ties back home, and I was too naive to have any understanding of the costs and difficulty of moving to a different country (even one so similar to my home). So moving didn't seem like a big deal. Worst case, I could just move back. If I had known how stressful it would be, I probably wouldn't have done it. But I'm glad I did.
As for how I did it? Well, I came on a TN visa, which is a NAFTA thing that is very easy to get and entitles you to work at a specific employer for a period of up to 3 years. So before I could move, I needed to find a job. I started cold-applying to Rails jobs posted here on HN as well as on a bunch of other job boards. I interviewed at three or four other places (each time getting flown out to SF for the process) before finding a job that wanted me, and that I wanted. As for timelines: I started applying to job postings in mid-February, and I moved at the start of May.
Finding a new job was very easy, all things considered. Based on what I hear from junior programmer friends, it's much harder now. I'm not sure why that is: whether things have changed, or whether I had some kind of special situation that made people notice me.
Overall, I absolutely hated living in the Bay Area, for reasons I'm sure have been discussed by a billion people on HN already. I moved to Texas in 2018 and it's much more my speed here. But as for the US vs Canada as a whole? If I never set foot in Canada ever again (and that's looking more and more like a reality every day), I'm fine with that. As far as countries go, Canada is pretty good. There are certainly worse places you could go. But here's a scattershot list of some things that stick out to me when I think about my different experiences:
* I make way, way more money in the US, and pay way, way less in taxes. The money goes farther (everything in Canada is expensive compared to here). Economically, I am so much better off. For every conceivable consumer good, there are 3x as many choices here vs there. And the one that surprises people: my healthcare is actually cheaper in the US (if you compare the premiums that I _and my employer_ pay, vs the extra taxes I would pay back home).
* I appreciate American culture more. I like that people aren't afraid to take risks here. I like shooting guns. I like that so much of the cutting edge of science and technology is here. One thing I noticed in Canada (maybe it has since changed) is that, since they don't have nearly the talent pool that the US does, everything in tech just felt like a constant game of catch-up.
* There's so much more to see and do here. Canada is like a string of cities, each surrounded by five hundred miles of nothing. It's incredibly beautiful nothing, and I'd like to see it again some day, but it is what it is. In the US, I can hop on a plane and go to any kind of geography within 3 or 4 hours (at ~1/3 the cost of a Canadian flight). I can see artists who would never come to where I grew up. I can see cities that are meaningfully different from each other and explore all kinds of historical places.
* The obvious contentious current cultural and political things. I'm leaving this one vague because I am not trying to start an argument and don't want this to devolve into a flame thread.
Whenever you do these kinds of US/Canada comparisons, remember to add in all of the taxes and fees you have no control over. I have found that federal taxes are similar between the two countries; provincial tax runs higher than state tax, but US social security/medicare runs higher than Canada's other fiddly taxes. And the private health payments here were less than those at US employers. People often forget, I think, that you pay roughly 3X for US social security while getting roughly 3X the eventual benefits - that recovers some of the difference. And the deductions for Canadian taxes (aside from mortgage interest) are more helpful, particularly with health-related payments. I would think Alberta looks quite favourable vs. California, while Quebec looks bad vs. Texas. But property tax is quite reasonable (4.xK on a 550K home).
YMMV, but the aggregate cost here vs. where I've lived in the US has been less different than expected. Sales taxes are somewhat higher and the currency is 80% but it's nice not paying drug costs or copays.
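To make "add in all of the taxes and fees" concrete, here's a minimal sketch of the kind of all-in comparison I mean: sum federal, provincial/state, and payroll taxes plus health payments on both sides before comparing. Every rate and dollar amount below is a made-up placeholder for illustration, not a real tax rate:

```python
def all_in_cost(gross: float, federal: float, regional: float,
                payroll: float, health: float) -> float:
    """Total taxes plus unavoidable fees on a gross salary."""
    return gross * (federal + regional + payroll) + health

# Hypothetical Canadian: higher provincial tax, lower payroll taxes,
# health mostly folded into taxes so direct premiums are small.
ca = all_in_cost(100_000, federal=0.20, regional=0.11,
                 payroll=0.04, health=2_000)

# Hypothetical American: lower state tax, higher social
# security/medicare, employee + employer premiums paid separately.
us = all_in_cost(100_000, federal=0.20, regional=0.05,
                 payroll=0.08, health=6_000)

print(f"CA all-in: ${ca:,.0f}   US all-in: ${us:,.0f}")
# CA all-in: $37,000   US all-in: $39,000
```

With placeholder numbers like these, the two totals land much closer together than a headline tax-rate comparison would suggest, which is the point.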
This author appears to be completely unaware that, in the US anyway, the progress of the pandemic has been completely and totally disconnected from all control measures.
San Francisco, where I'm told (and believe, based on having lived there) that they had very strict lockdown measures:
The stats are virtually identical despite dramatically different policy decisions. So what the hell is the point?
As an addendum, to put this into perspective: in 2020, Oakland's murder rate was 23/100k, making it _almost as high as the covid death rate_. Why is it that one year of covid deaths is a world-destroying incident, but Oakland's deaths happen _every_ year and nobody ever does anything about them? Are Bay Area people really just that racist?
Those lockdowns were not actual lockdowns, and were doomed to fail.
Moreover, lockdown restrictions were removed as soon as new case numbers dropped, rather than after all known cases had been removed from the community and quarantined, with confidence that there was zero community transmission.
Meanwhile borders are porous, with new cases arriving constantly. The combination of all this is heartbreakingly difficult to observe.
Honestly this stuff is easy.
A lockdown means only essential workers who actually keep the lights on and provide food go to work, and they do so with ridiculous protection.
It means no fast food delivery, no public transport or Ubers, no leaving your residence except for exercise (maybe - in some places you had to stay inside), and one supermarket trip per week (or delivery) with one person per household.
It means closing the borders firmly, with government-provided and government-provisioned (not outsourced) quarantine of every arrival for at least 14 days.
And it means providing plenty of funding and kindness to businesses and people to allow them to get through without fear, including putting the homeless into housing (e.g. empty hotel rooms).
And it needs to last long enough to know and isolate every case, which is, say, 6-8 weeks if done properly.
The US and UK failed on every one of these tests, and hundreds of thousands of people are dead.
Meanwhile in New Zealand we did the above almost a year ago, and are enjoying our summer break essentially normally.
NZ has a population of less than ~5 million, with a third of it in one city, and no land borders with anyone. (People claiming "a border is just as good as being an island" don't realize that, before we even get to illegal crossings, border areas in Canada/USA and Mexico/USA are integrated and numerous, whereas in NZ all cross-border traffic is funneled full-time through specialized chokepoints like ports and airports.)
Good luck doing that in a country of 320mil with no internal border infrastructure or legal framework whatsoever, and a huge land-based border.
That said, even if it were possible, I'd take liberty over safety (from a hazard that is, by all estimates, vastly overblown) in this case; I'm really grateful to be an American during this crisis.
Nowhere in the US has really had harsh lockdown measures.
SF also has density issues that will drive up the r0 of the virus. While the widely quoted r0 of the virus is around 3.0, the r0 in NYC at the start of the outbreak was estimated to be 5.0.
(You could argue about whether that was really r(t) and not r0, but I'm not considering time-dependence or the reaction to the virus; I mean the actual starting reproduction number in NYC before there was any human reaction to the virus. That isn't a constant value but depends on the local conditions.)
Austin has an overall density of about 3,000 people per square mile; SF has an overall density of about 18,000 (and some areas are even more dense than that).
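To make the "r0 depends on local conditions" point concrete, here's a minimal sketch of how a starting reproduction number gets back-of-the-enveloped from early exponential growth. The doubling times and the 5-day serial interval below are illustrative assumptions, not measured values for Austin, SF, or NYC:

```python
import math

def r0_estimate(doubling_days: float, serial_interval_days: float) -> float:
    """Rough R0 from early exponential growth.

    Assumes case counts grow as exp(r * t) with
    r = ln(2) / doubling_time, and uses the simple
    approximation R0 ~= exp(r * serial_interval).
    """
    r = math.log(2) / doubling_days          # exponential growth rate per day
    return math.exp(r * serial_interval_days)

# Illustrative inputs only -- NOT measured values for any city.
# A sparser city where cases double every 7 days:
print(round(r0_estimate(7.0, 5.0), 2))   # ~1.64
# A dense city where cases double every 3 days:
print(round(r0_estimate(3.0, 5.0), 2))   # ~3.17
```

The same virus with the same serial interval produces very different starting reproduction numbers depending on how fast it spreads locally, which is exactly why a dense city can see an r0 well above the headline figure.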
My perception, born and raised East Bay, is that Oakland has been a lost cause (for murders) since at least 1985. And more than one mayoral candidate has talked up the issue.
But, and this is the terrible part, the murder happens to "the poors". So, IMO, it's less about racism and more about classism (possibly rooted in racism).
It's similar to how we (USA) don't care too much about the covid deaths. It's like an Earthican tradition for disease and pestilence to affect poor folks first/more, and for the wealthy to just not see it (they simply look the other way).
Very, very poorly. I do not feel safe talking about it in this forum, as I am sure I will get yelled at by dozens of people for at least 4 different reasons.
But, so are the rest of the major corporate news outlets. In the US, they're all garbage, and Fox isn't any worse than the others.
The best way I can describe this is by analogizing to Canada, where I'm from. In Canada, when you watch or read the news (eg CBC), for the most part it just tells you what happened. In the US, when you watch or read the news, for the most part, it tells you how you're supposed to feel. Of course, Fox tells you you're supposed to feel conservative and (eg) NBC tells you you're supposed to feel progressive. But they all do this. They all editorialize. They all try to manipulate your emotions. None of them are willing to just present facts and let you think for yourself.
This is exactly my feeling with French news as well (the mainstream ones at least).
Since one of our national sports is to protest, people complain that "they" (the bad news paper or station) is showing one side only, and that "we" are different.
In reality they all tell the same thing, objectively without even editorializing it. So I can read Le Monde, Le Figaro, l'Obs, la Croix or l'Humanité and get the same info. And this is very good.
There are other ways to shape discourse, too, though. For example, cover some stories to the point of obsession and simply completely ignore others, even if they should qualify as big news. The best way to counter that is to read a lot of widely varied news stories and insist that all of them present solid, factual and scientific grounding.
eqdw:
>Fox news is a laughably pathetic excuse for news.
>But, so are the rest of the major corporate news outlets. In the US, they're all garbage, and Fox isn't any worse than the others.
I agree that all the major corporate news outlets are pretty bad. But Fox News is _significantly_ worse. They push patently false narratives on purpose. Up until March, Sean Hannity was calling the Coronavirus a "leftist hoax". He never apologized or corrected himself, and even had the audacity to later claim "we've always reported accurately on the Coronavirus". Most of their shows push some form of the "deep state" conspiracy theory. And now they're helping the president push this "Obama-Gate" hoax.
Everything they do is designed to help the GOP and the president.
Jon Stewart once put it very well when he said (paraphrasing) - "other news orgs are sensationalist and maybe a little bit lazy, but none are as activist as Fox News".