I love Yandex. They are the best search engine by far for politically controversial topics. They also released a language model to benefit everyone, even if it says politically incorrect stuff. And they name their projects things like "cocaine", perhaps to prevent western competitors from using them.
You look at OpenAI and how they don't release their models mainly because they fear "bad people" will use them for "bad stuff." This is the trend in the west. Technology is too powerful, we must control it! Russia is like... Hey, we are the bad guys you're talking about so who are we keeping this technology from? The west has bigger language models than we do, so who cares. Also their attitude to copyright and patents, etc. They don't care because that's not how their economy makes money. Cory Doctorow's end of general purpose computing[1] and locked down everything is very fast approaching. I'm glad the Russians are around and aren't very interested in that project.
Search Google and Yandex for "2020 election fraud." The results are VERY different. The Zach Vorhies leak shows that Google regularly does blatant censorship for political purposes.[1]
Totally, just like how if you want to find out what really happened in Tiananmen Square in 1989, your best bet is Baidu. Totally different results than what Google gives you!
I sincerely have deep respect for Yandex for releasing this, and Baidu for some of the amazing research they've released over the years, but both are deeply deeply beholden to their local governments in a way that is incomparable to the relationship between Google and the US government.
Remember that the NSA was literally digging up and tapping fiber around Google data centers in a secret program called MUSCULAR because they didn't think Google was being cooperative enough when handing over data that they were requesting.
Google: 118M results. Top link is the best resource on verified election fraud cases.
Yandex: 9M results. The top two links are pretty suspect. Top link promotes Dinesh D'Souza's 2000 Mules documentary in the banner which at best is a one-sided take on election fraud. At worst, very misleading.
This is a weird comment because yes, that's exactly what the above person was saying. It shows results that google won't give you.
Secondly, I've yet to see any criticisms of the 2000 Mules data that aren't addressed by the stringency of the analysis they claim to have done.
I thought the information they presented was extremely valuable. Are we going to overturn an election at this point? No. But the vulnerabilities of mail-in ballots were obvious, then lied about, then ignored, then clearly taken advantage of. I want to live in a democracy where voting matters. I especially don't want NPOs destroying this by taking advantage of flawed voting infrastructure.
If there are legitimate criticisms of the methods, the data, or anything else coming out of this film I expect a legitimate presentation that can break it down using the actual data in question. All I've seen so far is shilling and gas-lighting.
If you have any evidence, any evidence at all, of significant mail-in ballot fraud, then you should write it up and publish it, and even present it to the USDOJ, because you would have succeeded where Trump's highly paid teams of lawyers failed.
It's not about whether it happened or not, it's whether it's reported or not.
I personally believe (with obviously no proof) there was definitely fraud going on, on both sides. With such an archaic system and such a great economic and power incentive, you would be stupid not to do it. For sure mail in ballots made it even easier than in the past.
I've heard about Russian hacking the elections after Trump won for a good 2 years.
Why should 2000 Mules be promoted in results at all? It's total BS; its progenitor is a criminal who was found guilty of election campaign finance fraud.
I don't know man, "thegatewaypundit.com" as a top reputable source? Seems to me like it's not "honest two-sided results" but just, well, a rather random mix of results of widely varying quality. Mad Altavista vibes!
What I'm trying to say is that even if you believe that "was the 2020 US election stolen?" is worth debating, which it isn't, the yandex results are shit.
If you get all your information through mainstream channels, and you don't want to see anything contradicting those channels then you should continue to use Google because they explicitly implement the algorithms on controversial topics to prefer mainstream news sources[1]. What I mean by "better" in terms of controversial searches is that on controversial matters, it will rank the searches the same way it does for all other searches. I mean yeah, I don't have access to the internal code base of Yandex, but it certainly feels more organic.
Btw Wikipedia’s first few sentences on Breitbart are not inspiring
> Its journalists are widely considered to be ideologically driven, and much of its content has been called misogynistic, xenophobic, and racist by liberals and traditional conservatives alike.[10] The site has published a number of conspiracy theories[11][12] and intentionally misleading stories.[13][14]
This is the association fallacy, which is, unfortunately, how most people determine what to believe these days.
An absurd example of this fallacy: Wikipedia, which you cite, has articles indicating that tobacco smoking may cause disease. The Nazis were also anti-smoking[1]. Therefore Wikipedia is Nazi propaganda and you should not trust anything on there.
It is not the association fallacy; the role of a news site is to provide news, which includes fact-checking the work of their "journalists."
If Breitbart pulled a Fox News and argued in court that their goal was to entertain and not inform, then you have a point! But until then, you have a terrible misunderstanding of journalistic integrity and what it means for a publisher to attach their name to a journalist's work.
> This is the trend in the west. Technology is too powerful, we must control it!
I take it that you're either too young or too untraveled to be aware of the level of state control of technology in "the east". Xerographic machines, mimeographs, and other similar reprographic devices used to be highly controlled machinery behind the Iron Curtain. This is absolutely not something exclusive or even peculiar to "the west".
>> They are the best search engine by far for politically controversial topics
FYI, they are a Russian entity that follows ALL of Russia's censorship laws (and oh boy do they have a lot of them).
>> probably to perhaps to prevent western competitors from using them
The irony here. All Yandex products are exact copies of western ones, adjusted to the local market.
Actually they're not; some Yandex products are actually better and pretty innovative (ignoring the political stuff). Maps and Go are especially good. Ditto with Russian banking apps; they put American bank apps to shame.
At the same time, from a consumer perspective, this sacrifice to internal political pressures won't hamper your usage of the product. It's not as if international borders were the most important annotation on a map for everyday use.
Edit: in fact, I just checked, yandex maps still shows state borders.
It doesn't distinguish state and region borders anymore, like it did before. No borders on the overview map either. Just zoomed into a random place on the map, and I can tell where Turkey ends and Bulgaria begins only because the city names are different.
I didn't see it as I was looking at France. It's weird, because at large scale there are no borders, at medium scale there is a weird mix of national and local borders (Western EU countries have state-level borders, RU/BY/UA/US/CN have local borders, ...).
And to take your example, I have to zoom quite close for BG/TR to switch from state-wide to local borders.
> it's a Russian company breaking Russian laws and getting away with it
I don't think you've lived in Russia if you need to ask that question. Breaking the law and getting away with it is a way of life in Russia, that goes for all institutions and social strata
Breaking random laws? Sure. Breaking laws specifically made to enable central government control of independent media? Uh, how do you do that? Have you not noticed how minor the things are that Russian people have been getting jailed for recently when it comes to freedom of expression?
By being on the internet. Russia has always been good at literally hitting you with a physical club if you're crazy enough to take a sign to the streets, but the Russian state doesn't understand the internet, or really anything that's sort of underground or intangible.
There's a reason the country is probably the world's largest place for all things piracy related, scihub and so on. It's not just laxer IP laws, it's also that tech in particular has always skirted all kinds of regulation freely, it's why the country has a relatively healthy tech industry despite at times suffocating regulation. The prevalence of cybercrime in the country is another example of it. Being censorious doesn't make you competent.
They literally have a blocklist of sites that the Kremlin doesn't like, and it acts somewhat similarly to Yandex News in this respect. The difference here is more that Google filters stuff for the USA and Yandex for Russia.
This is one of the funniest threads I’ve ever seen on this website. People are yelling at each other about the CIA and the legitimacy of Israel and Assange and the definition of fascism and… anything that pisses anybody off about international politics in general. In a thread about a piece of software that’s (to me and likely many others) prohibitively expensive to play around with.
Anyway I hope somebody creates a playground with this so I can make a computer write a fan fiction about Kirby and Solid Snake trying to raise a human baby on a yacht in the Caspian Sea or whatever other thing people will actually use this for.
To add a voice of skepticism: the recent rush to open source these models may indicate that the tens of millions spent training these things have relatively poor ROI. There may be a hope that someone else figures out how to make them commercially useful.
There are tons of commercial uses for these models. I've been experimenting with an app targeted toward language learners [1]. We use large language models to:
- Generate vocabulary - e.g. for biking: handlebars, pedals, shifters, etc
- Generate translation exercises for a given topic a learner wants to learn about - e.g. I raised the seat on my bike
- Generate questions for the user - e.g. What are the different types of biking?
- Provide more fluent ways to say things - I went on my bike to the store -> I rode my bike to the store
- Provide explanations of the difference in meaning between two words
And we have fine-tuned smaller models to do other things like grammar correction, exercise grading, and embedded search.
These models are going to completely change the field of education in my opinion.
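The use cases above mostly boil down to templated prompts sent to a large model. A minimal sketch of that pattern, assuming entirely hypothetical template wording and a hypothetical `build_prompt` helper (the app's actual prompts are not shown in the comment):

```python
# Hypothetical prompt templates for the language-learning tasks listed above.
# The wording and the build_prompt helper are illustrative only.
TEMPLATES = {
    "vocabulary": "List common {language} vocabulary for the topic: {topic}",
    "exercise": "Write a {language} translation exercise about: {topic}",
    "question": "Ask the learner a question in {language} about: {topic}",
    "fluency": "Rewrite this {language} sentence more fluently: {sentence}",
}

def build_prompt(task, **fields):
    """Fill in a template; the completed prompt is what gets sent to the model."""
    return TEMPLATES[task].format(**fields)

prompt = build_prompt("vocabulary", language="German", topic="biking")
```

The interesting engineering tends to live in the template wording and in post-processing the model's output, not in this plumbing.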
I started to work on something similar but way behind your project. I really believe AI models can help us as humans learn better! Do you have a blog or any other writeups on how you approached these problems?
We're using these at where I work (large retail site) to help make filler text on generated articles. Think the summary blurb no one reads at the top. As for why we're writing these articles (we have a paid team that writes them too), the answer is SEO. This is probably the only thing I've seen done with a text model in production usage. I'm not 100% sure what model they're using.
Yeah I'm not a huge fan of it. I'll never forget the look in our UX person's eyes when she realized that our team doesn't exist to make customers' experience better (there's a ton of other teams for that) but to make Googlebot's experience better. Right now we're in the process of getting publishers you've heard of to write blurbs for best lists, but we're supplying the products, so it's not really a best list.
I can't say I'm a big fan, but my team is great and I don't have time to look for a job right now.
I hate this so much. These tools are getting better, so often you realise only halfway through that you are reading AI text. Then you have to flush your brain and make a mental note never to visit that site again.
There's a line in one of Douglas Adams' books where he says something along the lines that things like VCRs were invented to watch TV programs so that you don't have to.
Who would have thought that one man's joke would become a reality?
"Worthless"? Huh, not everyone can afford inference on a ~500GB model. Depending on the speed/rate you need, you might well go for a smaller model.
But maybe your sentence was more about "after the BigScience model, open-sourcing anything smaller than that will be useless," which isn't necessarily true either, because there is still room to improve parameter efficiency, i.e. smaller models with comparable performance.
It kinda seems like a model trained on multiple languages would to some extent be better at English than a model trained only on English? I mean so much of English comes from other languages, and understanding language as a concept transcends any specific language. Of course there are limits and it needs good English vocabulary and understanding, but I feel the extra languages would help rather than hinder English performance.
My guess is they're mostly vanity projects for large tech companies. While the models have some value, they also serve as interesting research projects and help them attract ML talent to work on more profitable models like ad-targeting.
They did not publish benchmarks about quality of the models, which is very suspicious.
I personally squinted hard when they said removing dropout improves training speed (which is in iterations per second), but said nothing about how it affects the performance (rate of mistakes in inference) of the trained model.
I agree that the lack of benchmarks makes it hard to determine how valuable this model is. But on the topic of dropout, dropout has been dropped for the pretraining stage of several other large models. Off the top of my head: GPT-J-6B, GPT-NeoX-20B, and T5-1.1/LM.
An equally plausible frame is that once a technology becomes replicated across several companies, it makes sense to open source it, since the remaining marginal competitive advantage is the possible resultant external network effects.
I don't know if that's the right way to think about the open sourcing of large language models. I just think we really can't read too much into such releases regarding their motivation.
From what I've seen, using these huge models for inference at any kind of scale is expensive enough that it's difficult to find a business case that justifies the compute cost.
Those models aren't trained with the objective of being deployed in production.
They are trained to be used as teachers during distillation into smaller models that fit the cost/latency requirements for whatever scenario those big companies have. That's where the real value is.
I know from practice that it takes a really, really long time to train even a small NN (thousands of params), so you'll need a lot more hardware to train one with billions... But it's expensive to buy the hardware, not necessarily to use it. If you, for some reason, have a few hundred GPUs lying around, it might be "cheap" to do the necessary training.
Now, that's not your point - cost != price. But, still...
I can't think of anyone having a few hundred GPUs around unless:
- They were into Ethereum mining and quit.
- They've already built a cluster with them (e.g. in an academic setting).
- They live in a datacenter.
- They are a total psychopath.
But even assuming one magically has all those GPUs available and ready to train, I don't want to calculate the power cost of it anyway. Unless one has access to free or extremely cheap electricity it would still be very expensive.
Only half-joking use case: active communities like this one on HN make sites attractive to human visitors. A new site could use bots to fake activity. Not sure it would work in the long run though.
I have to wonder if 10 years down the line, everyone will be able to run models like this on their own computers. Have to wonder what the knock-on effects of that will be, especially if the models improve drastically. With so much of our social lives being moved online, if we have the easy ability to create fake lives of fake people one has to wonder what's real and what isn't.
The bots/machine vs human reminds me of that famous experiment from the 30s in which Winthrop Kellogg[0], a comparative psychologist, and his wife decided to raise their human baby (Donald) simultaneously with a chimpanzee baby (Gua) in an effort to "humanize the ape". It was set to last 5 years but was aborted relatively quickly after only 9 months. The explicit reason wasn't stated, only that it had successfully proved the hereditary limits of a chimpanzee within the "nature vs nurture" debate; the reticent statement reads as follows:
>Gua, treated as a human child, behaved like a human child except when the structure of her body and brain prevented her. This being shown, the experiment was discontinued
There has been a lot of speculation as to other reasons for ending the experiment so prematurely. Maybe exhaustion. One thing which seemed to dawn on the parents - if one reads carefully - is that a human baby is far superior at imitating than a chimpanzee baby, frighteningly so. They decided to abort the experiment early in order to prevent any irreversible damage to the development of their human child, who at that point had become far more similar to the chimpanzee than the chimpanzee to the human.
So, I would rephrase "the internet is dead" into "the internet becomes increasingly undead" because humans condition themselves in a far more accelerated way to behave like bots than bots are potentially able to do.
From the wrong side this could be seen as progress when in fact it's the opposite of progress. It sure feels that way for a lot of people, and it is a crucial reciprocal element often overlooked/underplayed (mostly in a benign effort to reduce unnecessary complexities) when analyzing human behaviour in interactions with the environment.
Case in point: recently, I've noticed that I'm getting more and more emails with the sign off "Warm regards." This is not a coincidence. It is an autosuggestion from Google. If you start signing off an email, it will automatically suggest "Warm regards." It just appears there -- probably an idea generated from an AI network. There are more and more of these algorithmic "suggestions" appearing every day, in more and more contexts. This is true for many text messaging programs: There are "common" replies suggested. How often do people just click on one of the suggested replies, as opposed to writing their own? These suggestions push us into conforming to the expectations of the algorithm, which then reinforces those expectations, creating a cycle of further pushing us into the language use patterns generated by software -- as opposed to idiosyncratic language created by a human mind.
In other words, people are already behaving like bots; and we're building more and more software to encourage such behavior.
Those suggestions appear in Google chat too and even if you don't click on them, the simple fact of reading the suggestion makes you much more likely to type it yourself. There's clearly a priming effect to it.
Over my career I've worked with:
- Doctors
- Lawyers
- Engineers
- Fund managers
- Academics (hard and soft sciences)
- Mentalists/Hypnotists
All of them believed that their specific training and temperament made them immune to simple persuasion techniques and that they were purely rational actors.
None of them struck me as any more rational or more independent thinkers than anyone else off the street.
It is typical to rate yourself above your actual self.
Even when someone rates oneself down like when saying of themself that they're dumb, ugly or whatever, they generally mean it in a lesser fashion than for any other peer they'd attribute as such.
But it's not just rating yourself above your actual self; it's ascribing to yourself a mythical ability that does not exist. We don't describe people who think they are psychic as optimistic, we call them crazy.
These guys are similar, except it's a common belief.
Which is why it's important for folks to start applying AI to more interesting (but harder, more nuanced) problems. Instead of making it easier for people to write emails, or targeting ads, it should be used to help doctors, surgeons and scientists.
The problem is that these problems are less profitable. And that the companies with enough compute to train these types of models are concerned about getting more eyeballs, not making the world a better place.
The problem is not that those problems are less profitable. The problem is a combination of
1. Those problems are much harder
2. The potential harm from getting them wrong is much larger
Yup, I definitely agree that they're harder (and noted this). But I'm not sure I agree with your second point. Or rather, I think there's some nuance to it.
Sure, using AI to treat people without a human in the loop would clearly do harm. But using AI as an assistant, to help a doctor make the right diagnosis, seems like it'd do the opposite. It'd help doctors serve a larger patient population, make fewer mistakes, and probably equate to less harm in the long run.
Anyway, I think we can all agree that using AI for anything other than ad targeting is a net win.
I don’t know if this is how it still works, but early attempts were modeled as classification problems with hundreds of hand picked completions. Can’t predict something really bad if it isn’t in your prediction list. This limits the surface of bad things to cases of tone mismatch like “sounds great” when talking about someone grieving a loss or something.
Doesn't Gmail collect the data via some form of federated learning nowadays, like Gboard does? Federated learning does seem able to create an unintended positive feedback loop, converging on a single phrase and causing the users to lock themselves in a bubble.
Actors attempt to imitate humans. “Good acting” is convincing; the audience believes the actor is giving a reasonable response to the portrayed situation.
But the audience is also trying to imitate the actors to some degree. Like you point out, humans imitate. For some subset of the population, I’d imagine the majority of social situations they are exposed to, and the responses to situations they observe, are portrayed by actors.
At what point are actors defining the social responses that they then try to imitate? In other words, at what point does acting beget acting and how much of our daily social interactions actually are driven by actors? And is this world of actors creating artificial social responses substantially different than bots doing the same?
This is a common phenomenon where the fake is more believable than the real thing due to overexposure to the imitation.
Famously, the bald eagle sounds nothing like it does on TV and in the movies, and explosions are rarely massive fireballs. For human interaction it’s much harder to pin down cause and effect, but if it happens in other cases it would be very surprising for it not to happen there.
Someone wrote once about how Wall Street people started behaving like the slick image projected of them in movies in the 80s, namely of Michael Douglas; before that they were more like the "boring accountant" type.
It's the commonly believed reason: the child started taking on habits from Gua, like making noises when he wanted something, and scratching himself the way monkeys do. No authoritative source for it though; it's what I was told during a lecture back in college, and I think PlainlyDifficult mentions it too in their video about it.
Nice post! But to me your analogy does not really hold: bots are the ones catching up with human conversation in an "accelerated way", feeding on a corpus that predates them. Bots are not an invariant nature that netizens imitate.
I sincerely regret that I had only one upvote to give you. This shit is so insidious that IMO everyone should just simply stop doing it until they've thought it through a lot more.
> ...humans condition themselves in a far more accelerated way to behave like bots than bots are potentially able to do.
Than bots can condition themselves to behave like humans, I presume. They can already behave exactly like bots. :-)
My mind is blown. Thanks for sharing. Especially with the movie analogy. I’m very much a movie person and I model my personality traits a lot on characters in movies…
thanks for the awesome analogy, I always had the sinking feeling that the bots are finding it increasingly easy to fit in among the humans because the humans on social media act increasingly like bots.
That's definitely the future, personalized entertainment and social interactions will be big. I could watch a movie made for me, and discuss it with a bunch of chat bots. The future will be bubbly as hell, people will be decaying in their safe places as the hellscape rages on outside.
We're a long, long way from this. Stringing words/images together into a coherent sequence is arguably the easy bit of creating novels/films, and computers still lag a long way behind humans in this regard.
Structuring a narrative is a harder, subtler step. Our most advanced ML solutions are improving rapidly, but often struggle with coherence over a single paragraph; they're not going to be doing satisfying foreshadowing and emotional beats for a while.
You jest, but it really is the case. When your movie has a goddamn board of directors, you can be 100% sure it will be A/B tested until it transmutes the surrounding air into gold.
Maybe. But I think a lot of folks have short-term memory; it was not so long ago that Word2Vec and AlexNet were SOTA. Remember when the thought of a machine besting a world-class player at Go seemed impossible? Me too.
We've come ludicrously far since then. That progress doesn't guarantee that innovation in the space will continue at its current pace, but it sure does feel like it's possible.
I actually wouldn't be surprised if the technology catches up to this faster than we realize. I think the actual barrier to large scale adoption of it will be financial and social incentives.
A big reason all the major studios are moving to big franchises is that the real money is in licensing the merch. The movies and TV shows are really just there to sell more merch. Maybe this will work when we all have high quality 3d printers at our desks and we can just print the merch they sell us.
The other big barrier is social. A lot of what people watch, they watch because it was recommended to them by friends or colleagues, and they want to talk about what other people are talking about. I'm sure that there will be many people who will get really into watching custom movies and discussing those movies with chatbots, but I bet most people will still want to socialize and discuss the movies they watch with other humans. FOMO is an underestimated driver of media consumption.
We’re probably 18 months away from this. We’re probably less than 5 years away from being able to do this on local hardware. AI/ML is advancing faster than most people realise.
We're probably a long way away from narrative, but dall-e for video is probably only a year or two away from now (they're probably training the model as we speak).
I get the feeling that creative sci fi used to kind of help inoculate us against these kinds of future but it seems like there's much less of it than there used to be.
"Black mirror" was good but it's not nearly enough.
The Machine Stops is eerily prescient - or perhaps just keenly observant of trends visible even at the time - but in fairness the humans in it are not socially isolated, as such; they do not converse with bots, but rather with each other. The primary social activity in The Machine Stops is the Zoom meeting.
I do not look forward to the day when that story becomes an optimistic view of the future.
...what? 60 thousand dollars for a dedicated computer that you can't use is not everyone, not on their own computers, and is also a crazy large amount of money for nearly everyone. Sure there are some that could, but that's not what I said.
Most of the cost of a phone isn’t the processor, so probably closer to x1000. Hardware may get that much cheaper, but it was never guaranteed, and we’re not making progress as fast as we used to.
Moore's law didn't stop, just Dennard scaling. Expect graphics and AI to continue to improve radically in performance/price, while more ordinary workloads see only modest improvements.
GPU TDP seems on the verge of going exponential, cost per transistor isn't really decreasing much at the very latest nodes, and even that article seems to suggest it'd likely be decades before 300x FLOPS/$.
Plus, consider the energy costs involved in running a computer worth $60k: I'm pretty sure that in the current socio-economic climate those power costs will surpass the initial acquisition cost (those $60k, that is) pretty easily.
I wanted to add that I was writing it somewhat metaphorically: seeing how high those energy bills will be, they might as well all add up to $60k.
Not sure about most of the people in here, but I would get really nervous at the thought of running something that draws 3x300 watts continuously, 24/7, just as part of a personal/hobby project. The incoming power bills would be too high; you have to be in the wage percentile for which dropping $60k on a machine just to carry out some hobby project is OK, i.e. you’d have to be “high-ish” middle class at least.
The recent increases in consumer power prices are a heavy blow for most of the middle-class around Europe (not sure about how things are in the States), so a project like this one is just a no-go for most of middle-class European programmers/computer people.
At full power, 3 of those would cost me ~$3.50 per day ($0.15 per kWh is what I paid for last month's electricity, though I could pay less if I made some different choices). I occasionally have a more expensive coffee order, or a cocktail worth three times as much.
Things are getting more expensive here but nothing like the situation in Europe (essentially none of our energy was imported from Russia; historically ~10% of oil imports were, but that was mostly to refine and re-export, and we have all the natural gas locally that we need). The US crossed the line into being a net hydrocarbon energy exporter a while ago (unsure what the case is recently, but it is at worst about at parity).
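The ~$3.50/day figure above is easy to verify. A minimal sketch of the arithmetic, assuming 3 cards at roughly 300 W TDP running flat out and the quoted $0.15/kWh:

```python
# Back-of-the-envelope power cost for the setup discussed above.
num_gpus = 3
watts_each = 300          # approximate TDP per card (assumed)
price_per_kwh = 0.15      # $/kWh, the rate quoted in the comment

kwh_per_day = num_gpus * watts_each * 24 / 1000   # 21.6 kWh/day
cost_per_day = kwh_per_day * price_per_kwh        # ~$3.24/day
```

That lands just under the ~$3.50 quoted; at European prices of, say, $0.40/kWh the same load would run closer to $8.60/day.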
Eh, 60k is just a bit more expensive than your average car, and lots of people have cars, and that's just how things are today. I imagine capabilities will be skyrocketing and prices will fall drastically at the same time.
You could just run this on a desktop CPU; there's nothing stopping you in principle, you just need enough RAM. A big-memory (256GB) machine is definitely doable at home. It's going to cost $1-2k on the DIMMs alone, less if you use 8x32GB, but that'll come down. You could definitely do it for less than $5k all in.
Inference latency is a lot higher in relative terms, but even for things like image processing running a CNN on a CPU isn't particularly bad if you're experimenting, or even for low load production work.
But for really transient loads you're better off just renting seconds-minutes on a VM.
There isn't any reason you can't run a neural net on a CPU. It's still just a bunch of big matrix operations. The advantage of the GPU is it's a lot faster, but "a lot" might be 1 second versus 10 seconds, and for some applications 10 seconds of inference latency is just fine (I have no idea how long this model would take). All the major ML libraries will operate in CPU-only mode if you request it.
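To make the "just matrix operations" point concrete, here is a toy pure-Python dense layer; real frameworks do exactly this math, just dispatched to optimized BLAS on CPU or CUDA kernels on GPU (the weights and inputs below are made-up toy values):

```python
# A neural-net layer really is a matrix-vector product plus a bias.
def dense_layer(weights, bias, x):
    """y = relu(Wx + b), computed naively on CPU."""
    y = []
    for row, b in zip(weights, bias):
        acc = sum(w * xi for w, xi in zip(row, x)) + b
        y.append(max(0.0, acc))  # ReLU activation
    return y

W = [[1.0, -2.0], [0.5, 0.5]]   # toy 2x2 weight matrix
b = [0.0, 1.0]                  # toy bias vector
out = dense_layer(W, b, [2.0, 1.0])  # -> [0.0, 2.5]
```

A GPU just runs many of these multiply-accumulates in parallel; on a CPU the same computation simply takes longer.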
I believe you're confusing the number of A100 graphics cards used to train the model (the cluster was actually made up of 800 A100s) with the number you need to run the model:
> The model [...] is supposed to run on multiple GPUs with tensor parallelism.
> It was tested on 4 (A100 80g) and 8 (V100 32g) GPUs, [but should work] with ≈200GB of GPU memory.
I don't know what the price of a V100 is, but given $10k a piece for A100s we would be closer to the $60k estimate.
The $10k price is for an A100 with 40GB ram, so you need 8 of those. If you can get your hands on the 80GB variant, 4 are enough.
Also, if you want a machine with eight of these cards, it will need to be a pretty high-spec rack-mounted server or a large tower. To feed these GPUs you will want a decent number of PCIe 4.0 lanes, meaning EPYC is the logical choice. So that's $20k for an AMD EPYC server with at least 1.6 kW PSUs, etc.
You don't need a "decent number" of PCIe 4.0 lanes. You just need 16 of them, and they can be PCIe 3.0 and will work just fine. Deep learning compute boxes predominantly use a PCIe switch, e.g. the ASUS ESC8000 box, which handles eight cards just fine. You only need a metric tonne of PCIe bandwidth if you are constantly shuttling data in and out of the GPU, e.g. in a game or with exceedingly large computer vision training sets. A few hundred milliseconds of latency moving data to your GPU in a training session that will take hours if not days to complete is neither here nor there. I suspect this model, with a little tweaking, will run just fine on an eight-way RTX A5000 setup, or a five-way A6000, completely unhindered. That puts the price around $20,000 to $30,000. If I put two more A5000s in my machine, I suspect I could figure out how to get the model to load.
It also sounds like they haven't optimized their model, or done any split on it, but if they did, I suspect they could load it up and have it infer slower on fewer GPUs, by using main memory.
Which will work just fine with NVIDIA's NVSwitch and a decent GPU compute case from ASUS or IBM, or even one you build yourself from an off-the-shelf PCIe switch and a consumer motherboard.
And also, NVIDIA does not sell them to the consumer market whatsoever. Linus Tech Tips could only show one because someone in the audience sent theirs over for review.
You're grossly overestimating. People who make 60k annually are getting a bit rarer nowadays, it's not like everyone can afford it. For the majority of people it'd be a multi-decade project, for a few it might only take 7 years, very few people could buy it all at once.
Unpopular opinion: something will stop egalitarian power for the masses. I had high hopes for multicore computing in the late 90s and early 2000s but it got blocked every step of the way by everyone doubling down on DSP (glorified vertex buffer) approaches on video cards, leaving us with the contrived dichotomy we see today between CPU and GPU.
Whatever we think will happen will not happen. A less-inspired known-good state will take its place, creating another status quo. Which will funnel us into dystopian futures. I'm just going off my own observations and life experience of the last 20 years, and the way that people in leadership positions keep letting the rest of us down after they make it.
In what sense is the dichotomy between CPU and GPU contrived? Those are designed around fundamentally different use cases. For low power devices you can get CPU and GPU integrated into a single SOC.
That's a good question. I wish I could answer it succinctly.
For me, the issue is that use cases and power usage are secondary to the fundamental science of computation. So it's fine to have matrix-processing stuff like OpenGL and TensorFlow, but those should be built on general-purpose hardware or else we end up with the cookie cutter solutions we have today. Want to run a giant artificial life simulation with genetic algorithms? Sorry, you can't do that on a GPU. And it turns out that most of the next-gen stuff I'm interested in just can't be done on a GPU.
There was a lot of progress on transputers and clusters (the old Beowulf cluster jokes) in the 80s and 90s. But researchers ran up against memory latency and serialization bottlenecks (Amdahl's law) and began to abandon those approaches after video cards like the 3dfx Voodoo arrived around 1997.
But there are countless other ways to implement concurrency and parallelism. If you think of all the techniques as a galaxy, then GPUs are way out at the very end of one spiral arm. We've been out on that arm for 25 years. And while video games have gotten faster (at enormous personal effort by millions of people), we've missed out on the low hanging fruit that's possible on the other arms.
For example, code can be auto-parallelized without intrinsics. It can be statically analyzed to detect contexts which don't affect others, and the instructions in those local contexts could be internally spread over many cores. Like what happens in shaders.
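For contrast, here is what opting in explicitly looks like today: an ordinary, embarrassingly parallel loop spread over cores with Python's stdlib, standing in for what a compiler could in principle do automatically (the `shade` function is just an invented per-element computation):

```python
from concurrent.futures import ProcessPoolExecutor

def shade(px):
    # A pure, independent per-element computation: no element's result
    # depends on any other, so the loop is trivially parallelizable.
    return (px * px) % 255

pixels = list(range(10_000))
with ProcessPoolExecutor() as pool:
    # The runtime spreads the iterations over worker processes; with
    # static analysis, a compiler could do this to a plain for-loop.
    out = list(pool.map(shade, pixels, chunksize=1_000))
print(out[:5])   # [0, 1, 4, 9, 16]
```

The point of the complaint above is that this opt-in machinery (pools, chunk sizes, shader languages) is exactly what shouldn't be necessary.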
But IMHO the greatest travesty of the modern era is that those innovations happened (poorly) in GPUs instead of CPUs. We should be able to go to the system menu and get info on our computer and see something like 1024+ cores running at 3 GHz. We should be able to use languages like Clojure and Erlang and Go and MATLAB and even C++ that auto-parallelize to that many cores. So embarrassingly parallel stuff like affine rasterization and blitters would run in a few cycles with ordinary for-loops instead of needing loops that are unrolled by hand or whatever other tedium that distracts developers from getting real work done. Like, why do we need a completely different paradigm for shaders outside of our usual C/C++/C# workflow, where we can't access system APIs or even the memory in our main code directly? That's nonsense.
And I don't say that lightly. My words are imperfect, but I do have a computer engineering degree. I know what I'm talking about, down to a very low level. Wherever I look, I just see so much unnecessary effort where humans tailor themselves to match the whims of the hardware, which is an anti-pattern at least as bad as repeating yourself. Unfortunately, the more I talk about this, the more I come off as some kind of crackpot as the world keeps rushing headlong out on the GPU spiral arm without knowing there's no there there at the end of it.
My point is that for all the progress in AI and rendering and simulation, we could have had that 20 years ago for a tiny fraction of the effort with more inspired architecture choices. The complexity and gatekeeping we see today are artifacts of those unfortunate decisions.
I dream of a day when we can devote a paltry few billion transistors on a small $100 CPU to 1000+ cores. Instead we have stuff like the Cerebras CS-2 with a trillion transistors for many thousands of dollars, which is cool and everything, but is ultimately gatekeeping that will keep today's Anakin from building C-3PO.
Before any of the things you describe happen, most states will mandate the equivalent of a carry permit to be able to freely use compute for undeclared and/or unapproved purposes.
If by running models you mean just the inference phase, then even today you can run large family of ML models on commodity hardware (with some elbow grease, of course). The training phase is generally the one not easily replicated by non-corporations.
I know it's a sort of exaggerated, paranoid thought. But these things do all come down to scale, and some areas of the world definitely could have enough compute available to make DALL-E-quality, full-scale videos, which we might be consuming right now. It really does make you start to wonder at what point we will rationally have to assume that anything we watch online could be fabricated.
Historically, hard-to-falsify documents are an anomaly; the norm was mostly socially conditioned and enforced trust. Civilizations leaned, and still lean, on limited-trust technologies like personal connections, word of mouth, word on paper, signatures, seals, careful custody, etc. I agree losing cheap trust can be a setback; I just want to point out that we're adaptable.
I'm predicting that the upcoming Mac Pro will be very popular among ML developers, thanks to unified memory.
It should be able to fit the entire model in memory.
Combine that with the fact that PyTorch recently added support for Apple silicon GPUs.
Although memory capacity may matter more than speed for inference. As long as you're not training or fine tuning, the mac pro / studio may be just fine.
Apart from the fact that you can't use any of the many Nvidia-specific things; if you're dependent on CUDA, NVCUVID, AMP, or other such things, that's a hard no.
Comments like this make me feel like I'm losing my mind.
I think it's far more likely that in 10 years we'll all become more used to rolling blackouts, and fondly remember we all used to be able to afford to eat out, and laugh over a glass of cheap gin about how wild things were back in the old days before things got really bad.
10 years ago was a much more exciting and hopeful time than today. I remember watching Hinton show off what deep learning was just starting to do. It was frankly more interesting than high-parameter language models. Startups were all working on some cool problems rather than just trying to screw over customers.
That's just technology. Economically, socially, and ecologically, things looked far brighter in 2012 than they do now, and in 2032 I suspect we'll feel the same about today, but far more dramatically.
We've already passed the peak of "things are getting better all the time!", but people are just in denial about it.
You're not alone. Especially based on observations during the pandemic, it seems we are woefully unaware of, and unprepared for, how fragile the structures supporting our current way of living are, and how easily they could collapse into much worse living conditions when it comes to power and food, let alone luxuries and the internet as we know it...
It also seems to me that most people would not be ready to give up more than 10% of their luxuries / way of living up-front in order to protect those structures and would continue to watch funny TikTok videos and post IG photos until the very moment their internet access goes out and doesn't come back.
I don't know about you, but most of my online interactions are text-based. The context of interpretation matters far more than the form of the content. If you know it's easy to fake text exchanges, you might be more careful about a text's origin and other contextual hints. Even if the syntax imitates your children's verbal oddities, you won't necessarily rush to thoughtlessly comply with an unusual demand you just received by SMS from their phone number. Trust, but check.
>> I have to wonder if 10 years down the line, everyone will be able to run models like this on their own computers.
Do you mean train or run? My assumption was that all these models could be run on most computers, probably with a simple Docker container, as long as there is sufficient RAM to hold the network, which should be most laptops with more than 16 GB of RAM.
Speaking of which, anyone have recommendations on pre-trained docker containers with weights included?
I'm not sure why you got downvoted. Yes, ASICs (either analog or digital) that have some model hardcoded in would probably make it feasible, but it won't be programmable which is the interesting part.
It's more likely, if not inevitable that these things will become ubiquitously available remotely, like Siri and Alexa. It's access that's important, not hosting.
Seeing those gigantic models it makes me sad that even the 4090 is supposed to stay at 24GB of RAM max. I really would like to be able to run/experiment on larger models at home.
It's also a power issue. With the 4090, it sounds like you're going to need a much, MUCH bigger PSU than you currently have, or it'll suddenly turn off as it draws 2-3x the power.
You'll need your own wiring to run your PC soon :-)
This may be a stupid question, but does the power consumption processors need for inference, compared to what human brains need, demonstrate that there is something fundamentally wrong with the AI approach, or is it more physics-related?
I am not a physicist or biologist or anything like that, so my intuition is probably completely wrong, but it seems to me that for basic operations (say, adding two numbers) the power consumption of a processor and a brain is not that different. Yet given how expensive it is for computers to run inference on any NLP model, humans should have to continuously eat carbs just to talk.
Around room temperature, an ideal silicon transistor has a 60 mV/decade subthreshold swing, which (roughly speaking) means that a 10-fold increase in current requires at least a 60 mV increase in gate potential. There are some techniques (e.g. tunneling) that can allow you to get a bit below this, but it's a fairly fundamental limitation of transistors' efficiency.
[It's been quite a while since I studied this stuff, so I can't recall whether 60 mV/decade is a constant for silicon specifically or all semiconductors.]
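For what it's worth, the 60 mV/decade floor is not silicon-specific: it follows from the Boltzmann statistics of the carriers, so any conventional (thermionic) transistor is subject to it. At temperature $T$ the ideal subthreshold swing is

```latex
S = \frac{k_B T}{q}\,\ln 10 \approx 25.85\,\mathrm{mV} \times 2.303 \approx 59.5\ \mathrm{mV/decade} \quad (T = 300\,\mathrm{K})
```

which is also why tunnel FETs, which don't rely on thermionic emission over a barrier, can get below it.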
> but it seems to me that for more basic inference operations (lets say add two numbers) power consumption from a processor and a brain is not that different
Sure it is - it's just hard to see from only two numbers, so multiply that by a billion: how much energy does it take a computer to add two billion numbers? Far less than the energy it would take a human brain to add them.
Nvidia deliberately keeps their consumer/gamer cards limited in memory. If you have a use for more RAM, they want you to buy their workstation offerings, like the RTX A6000, which has 48GB of GDDR6, or the A100, which has 80GB.
What NVIDIA predominantly does on their consumer cards is limit the RAM sharing, not the RAM itself. The inability for each GPU to share RAM is the limiting factor. It is why I have RTX A5000 GPUs and not RTX 3090 GPUs.
It only gets expensive if you insist on sourcing it from enterprise vendors. The first 256GB I paid $2,400 for. The second 256GB I paid $1,200 a little over a year later. And the third 256GB I paid $800 about seven months later. I've got a workstation with 768GB DDR4 and I am considering upping that to 1.5TB if the prices on the 256GB sticks will come down.
What's the difference between Apple's unified memory and the shared memory pool Intel and AMD integrated GPUs have had for years?
In theory you could probably assign a powerful enough iGPU a few hundred gigabytes of memory already, but just like Apple Silicon the integrated GPU isn't exactly very powerful. The difference between the M1 iGPU and the AMD 5700G is less than 10% and a loaded out system should theoretically be tweakable to dedicate hundreds of gigabytes of VRAM to it.
It's just a waste of space. An RTX 3090 is 6 to 7 times faster than even the M1, and the promised performance increase of about 35% for the M2 will mean nothing once the 4090 is released this year.
I think there are better solutions for this. Leveraging the high throughput of PCIe 5 and resizable BAR support might be used to quickly swap out banks of GPU memory, for example, at a performance decrease.
One big problem with this is that GPU manufacturers have incentive to not implement ways for consumers GPUs to compete with their datacenter products. If a 3080 with some memory tricks can approach an A800 well enough, Nvidia might let a lot of profit slip through their hands and they can't have that.
Maybe Apple's tensor chip will be able to provide a performance boost here, but it's stuck on working with macOS and the implementations all seem proprietary so I don't think cross platform researchers will really care about using it. You're restricted by Apple's memory limitations anyway, it's not like you can upgrade their hardware.
I downloaded the weights and made a .torrent file (also a magnet link, see raw README.md). Can somebody else who downloaded the files as well doublecheck the checksums?
For those of us without 200GB of GPU RAM available... How possible is it to do inference loading it from SSD?
Would you have to scan through all 200GB of data once per character generated? That doesn't actually sound too painful - 1 minute per character seems kinda okay.
And I guess you can easily do lots of data parallelism, so you can get 1 minute per character on lots of inputs and outputs at the same time.
These models are not character-based, but token-based. The problem with CPU inference is the need for random access to 250 GiB of parameters, meaning immense paging and orders of magnitude slower than normal CPU operation.
I wonder how bad it comes out with something like Optane?
It's not really random access. I bet the graph can be pipelined such that you can keep a "horizontal cross-section" of the graph in memory all the time, and you scan through the parameters from top to bottom in the graph.
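That layer-at-a-time streaming idea can be sketched with a memory-mapped weight file; the file layout, sizes, and path here are invented for illustration:

```python
import numpy as np

d, n_layers = 256, 4

# Pretend the checkpoint on disk is one big file of [n_layers, d, d] fp32 weights.
weights = np.random.default_rng(0).standard_normal((n_layers, d, d)).astype(np.float32)
weights.tofile("/tmp/weights.bin")

# Memory-map instead of loading everything: each step touches only one
# layer's contiguous slice, so the disk access pattern is sequential.
mm = np.memmap("/tmp/weights.bin", dtype=np.float32, shape=(n_layers, d, d))
x = np.ones(d, dtype=np.float32)
for layer in range(n_layers):
    W = np.asarray(mm[layer])   # one sequential ~d*d*4-byte read
    x = np.tanh(x @ W)          # activations stay small; weights stream through
print(x.shape)
```

Only the activations (a "horizontal cross-section" of the graph) ever need to be resident; the parameters are scanned top to bottom once per token.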
Fair point, but you’ll still be bounded by disk read speed on an SSD. The access pattern itself matters less than the read cache being << the parameter set size.
You can read bits at that rate yes, but keep in mind that it’s 250 GiB /parameters/, and matrix-matrix multiplication is typically somewhere between quadratic and cubic in complexity. Then you get to wait for the page out of your intermediate result etc etc.
It’s difficult to estimate how slow it would be, but I’m guessing unusably slow.
That's pretty much what SLIDE [0] does. The motivating result was achieving performance parity with GPUs for CPU training, but presumably the same approach could apply to running inference on models too large to fit in consumer GPU memory.
If you bother to set the permissions, I suggest doing it in a way that doesn't leave a time window during which the file is still unprotected (note that non-privileged processes just need to open the file during that window; they can keep reading even after your chmod has run). Also, I'm not sure what the point of `-U clear` was; that sets the UUID for the swap, so better to leave it at the default random one.
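One way to avoid that window entirely is to have the kernel create the file as 0600 from the start instead of chmod-ing it afterwards. A sketch in Python (the path and size are placeholders; a real setup would use /swapfile and continue with mkswap/swapon as root):

```python
import os

path = "/tmp/swapfile.demo"     # stand-in for /swapfile
if os.path.exists(path):
    os.unlink(path)

# O_CREAT with mode 0o600: the file is never readable by other users,
# not even for an instant, so there is nothing for them to open early.
fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
with os.fdopen(fd, "wb") as f:
    f.write(b"\0" * (16 * 1024 * 1024))   # 16 MiB of real, non-sparse zeros

print(oct(os.stat(path).st_mode & 0o777))  # 0o600
# As root, a real setup would then run: mkswap /swapfile && swapon /swapfile
```

The shell equivalent is running the `dd` under `umask 077` so the file is born 0600.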
Is there a reason why the swap file needs to be filled with zeroes here? Normally you'd see something like `dd of=/swapfile bs=1G seek=3 count=0`, creating a file of size 3G but with no space allocated (yet). It's much quicker to complete the setup this way.
I assume that if you force the file system to allocate the blocks up front, you're likely to get a less fragmented file than if you create a sparse file whose blocks get assigned over time as each part is used. (Also, the kernel's swap code bypasses the filesystem and doesn't support files with holes, so swapon would reject a sparse file anyway.)
On all the benchmarks of SSDs I've seen they perform 1.5 to 4 times better on sequential reads than on random reads. That's a much better ratio than HDDs, but still enough to care about it.
You're also likely to get less write amplification if your swap file is continuous.
Of course, with all the layers of indirection it's a numbers game: you don't know if your file system allocates adjacent blocks, and you don't know how your SSD will remap them. But all else being equal, trying to make the file as sequential as possible seems preferable.
But this does make me wonder: is there any way to let a graphics card use regular RAM in a fast way? AFAIK the GPUs built into CPUs can, but those GPUs aren't powerful enough.
Unified memory exists, but it's not a magic bullet. If a page is accessed that doesn't reside on device memory (i.e. on the GPU), a memcpy is issued to fetch the page from main RAM. While the programming model is nicer, it doesn't fundamentally change the fact that you need to constantly swap data out to main RAM and while not as bad as loading it from the SSD or HDD, that's still quite slow.
Integrated GPUs that use a portion of system memory are an exception to this and do not require memcpys when using unified memory. However, I'm not aware of any powerful iGPUs from Nvidia these days.
Sure. Makes sense. So I guess for discrete GPUs the unified memory stuff provides a universal address space but merely abstracts the copying/streaming of the data.
There does seem to be a zero copy concept as well and I've certainly used direct memory access over pcie before on other proprietary devices.
It's just crazy how much it costs to train such models. As I understand it, 800 A100 cards would cost about $25,000,000, without even considering the energy costs for 61 days of training.
Still a bit too expensive for my side project ; ) To be honest, it seems only big corporations can do this kind of stuff. And if you try to do hyperparameter tuning or some exploration of the architecture, it becomes, I'd guess, 10x or 100x more expensive.
AWS has them in us-east-1 for $9.83/hr spot, with 96 CPU cores, 1152GB of RAM, 8 A100s with 320 GB of GPU memory, 8TB of NVMe, and 19 Gbps of EBS bandwidth to load your data quickly.
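Back-of-the-envelope, using the spot price quoted above and the 800-GPU / 61-day figures from the announcement (this assumes the spot price held for the whole run, which it wouldn't in practice):

```python
instances = 800 // 8       # 800 A100s at 8 GPUs per instance
hours = 61 * 24            # 61 days of training
spot_rate = 9.83           # $/hr per 8-GPU instance, quoted spot price
cost = instances * spot_rate * hours
print(f"${cost:,.0f}")     # roughly $1.4M at spot pricing
```

So renting the compute for one training run is on the order of $1.4M, far below the ~$25M of buying the hardware outright, though spot capacity at that scale is its own problem.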
Side note: Yandex search is awesome, and I really hope they stay alive forever. It's the only functional image search nowadays, after our Google overlords neutered their own product out of fear over lawyers/regulation and a disdain for power users.
You can't even search for images "before:date" in Google anymore.
Yandex Image Search today is what Google Image Search should have been.
End of the day I’ll use what actually gets the job done.
Same goes for OpenAI and Google AI. If you don't ever actually release and let others use your stuff, and end up paralyzed in fear at what your models may do, then someone else is going to release the same tech, and at this rate it seems like that'll be Chinese or Russian companies who don't share your sensibilities at all, and their models will be the ones that end up productized.
IMO the main reason these companies don't release their models is not ethical concerns but money:
- NVIDIA sells GPUs and interconnect needed for training large models. Releasing a pretrained LM would hurt sales, while only publishing a teaser paper boosts them.
- Google, Microsoft, and Amazon offer ML-as-a-service and TPU/GPU hardware as a part of their cloud computing platforms. Russian and Chinese companies also have their clouds, but they have low global market share and aren't cost-efficient, so nobody would use them to train large LMs anyway.
- OpenAI are selling their models as an API with a huge markup over inference costs; they are also largely sponsored by the aforementioned companies, further aligning their interests with them.
Companies that release large models are simply those who have nothing to lose by doing so. Unfortunately, you need a lot of idle hardware to train them, and companies that have it tend to also launch a public cloud with it, so there is a perpetual conflict of interests here.
Their "moral" reasoning for not publishing models is simply laughable, because they do sell API access to anyone who can pay. And "bad guys" generally have money.
They can (and do) revoke API access from bad guys. They can't do that to downloaded models. Look, I don't like what OpenAI does, but "API access, but no model download" makes sense if you are worried about misuses.
Every company out there says it will "revoke API access for misuse", but do they have transparency reports? Who do they even consider bad guys and what do they consider as misuse?
I would be totally on their side if their reasoning were that they don't publish models so they can compete with FAANG more efficiently and get more income for their research, but this moral reasoning just sounds completely fake, because bad actors do have the funding to train their own models.
Examples of "real cases of misuse encountered in the wild" include "spam promotions for dubious medical products and roleplaying of racist fantasies".
Yes, some bad actors can train their own models, but OpenAI can't do much about that either way. It is doubtful whether spam promoters of dubious medical products can, at least for a while.
It would be better for misuse to be criminalized and taken care of by national governments, rather than leave it to for-profit companies to decide what is or isn't "misuse".
Personally, I think using AI to manufacture advertisements on demand is misuse... but will Google agree with me?
Bad actors still can get access to such models.
That even makes the models more dangerous than they would be if everyone had access to them.
Here's an alternative: progressively release better and better models (like 3B params, 10B, 50B, 100B) and let people figure out the best way to fight against bad actors using them.
He was an author of fiction. They usually write a lot of stuff that is on some deeper level "true", even though it is on the surface fictional... And a lot of stuff that isn't.
Or sometimes both, because it's only part of the truth. Maybe the complete version should be: "An armed society is a 'polite' society, but with very frequent killings and not-infrequent massacres." I think I prefer living in a less "polite" society.
When you release a project into the wild under a permissive license, aren't you essentially washing your hands of any "liability"?
> MIT
"IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."
Don't commercial licenses have the same or similar wording? So what liability are you talking about?
So they do that because they're doing it for free. Otherwise they couldn't be generous with their work--those licenses are about permitting the generous intent.
The “ethical concerns” thing is just a progressive-sounding excuse for why they’re not going to give their models away for free. I guarantee you those models are going to be integrated into various Google products in some form or another.
> Google overlords neutered their own product out of fear over lawyers/regulation
What kind of lawyers/regulation do you have in mind? If anything, I'd find the opposite: lawyers and copyright holders should be grateful for such a tool that - when it was still working - allowed you to trace websites using your images illegally.
Now they all use Yandex for this purpose, with relatively good results.
Oh, I see. What I'm looking for is the reason they broke reverse image search. It was working well many years ago, but at some point they switched it to some strange image classifier. (I upload an image of an apple to find exactly the same image and trace its license of origin, and it says "possibly an image of an apple". Oh, thank you Google, I didn't know that.)
> Tineye works reasonably well, for finding exactly the same image (including different resolutions, crops, etc.)
Tineye is definitely better than Google with crops, etc. Google reverse image search seems to have more data, but it seems much less able to recognize even basic modifications to the input.
They used to have, in their AI ethics department, some of the most anti-AI progressives. They picked on everything: biased training data, discriminatory usage, consuming too much energy to train, models being just stochastic parrots, etc., while neglecting to mention any effort to mitigate the problems (of course these are real concerns and are under intense research). Those critics have since been fired, but Google must have learned to fear them.
If they let everyone use the latest models, critics could uncover ugly biases in 10 minutes. Then Google would have to do damage control. These models are very suggestible. You can induce them to make fools of themselves.
IIRC it was mostly from groups like Getty images. They and other image licensing companies didn't want google showing their images in search results. They claimed it was copyright infringement and given the absolute state of IP law in the US they could have made Google's life very difficult.
We're talking about reverse search, right? (Because "normal" image search still kind of works, it's reverse search that is completely broken.) In this case, you already have the copyrighted image, and if you find out that the same image is on Getty Images, then all the better as you can check it license. Also, it's better for GI as it gives them more exposure, and the kind of companies who use GI are very unlikely to pirate images.
Speaking of which... I built a gaming PC a few years ago but I never use it these days. I want to install Linux on it and start playing around with machine learning.
Can anyone recommend any open source machine learning project that would be a good starting point? I want one that does something interesting (whether using text, images, whatever), but simple/efficient enough to run on a gaming PC and see some kind of results in hours, not months. I'm not sure what I want to do with ML yet, I just know I'm interested, and getting something up and running is likely to enthuse me to start playing and researching further.
My spec is: GeForce RTX 2080 Ti (11GB), a 24-core AMD Ryzen Threadripper, and 128GB RAM. I'd be willing to spend on a new graphics card if it would make all the difference. I am a competent coder and familiar with Python but my experience with ML is limited to fawning over things on HN. Any recommendations gratefully received!
If your disk has enough space to store the model, I think in theory you could run them, using the disk to store states. But it will be slow. I'm not sure how slow though, and also if anyone has implemented this. It actually should not be too difficult.
Disk makes no sense considering RAM is pretty cheap. But even then RAM is way too slow (and the communication overhead way too high). You probably get like a 100x slowdown or more.
I think you are overestimating the compute and I/O for this model. If you assume it is RAM-bandwidth bound, then with a single channel of top-end DDR4 you get an inference time that's a low multiple of ~8 seconds (200 GB / 25.6 GB/s). In a workstation you can have 8 channels.
12-channels in mine. 24-channels on some configurations, though I think that is the upper limit at this time, with a maximum density of 512GB per channel.
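The parent's estimate, generalized to more channels. This assumes DDR4-3200 (25.6 GB/s per channel) and that inference is purely bandwidth bound, reading every parameter exactly once per token, which is a best case:

```python
params_bytes = 200e9      # ~200 GB of weights, per the model card
per_channel = 25.6e9      # DDR4-3200: 25.6 GB/s per memory channel

# Best-case per-token latency if every byte of weights is streamed once.
for channels in (1, 8, 12, 24):
    t = params_bytes / (channels * per_channel)
    print(f"{channels:2d} channels: {t:6.2f} s per token")
```

On those assumptions, a 12- or 24-channel box lands under a second per token, so CPU inference is slow but not absurd.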
> It was tested on 4 (A100 80g) and 8 (V100 32g) GPUs, but is able to work with different configurations with ≈200GB of GPU memory in total which divide weight dimensions correctly (e.g. 16, 64, 128).
So we're looking at crazy prices just for inference. RIP to the cloud billing account of the first guy who makes this public.
With DeepSpeed, an SSD, and lots of RAM, you should be able to run inference on an 8GB card. There's a thread somewhere on https://discuss.huggingface.co/ doing the math on this.
EDIT: It seems fine if you download with a browser user agent rather than curl... I guess I just got hit by some anti-bot thing they accidentally turned on.
I am one of the people who worked on Google's PaLM model.
Having skimmed the GitHub readme and medium article, this announcement seems to be very focused on the number of parameters and engineering challenges scaling the model, but it does not contain any details about the model, training (learning rate schedules, etc.), or data composition.
It is great that more models are getting released publicly, but I would not get excited about it before some evaluations have been published. Having a lot of parameters should not be a goal in and of itself. For all we know this model is not well trained and worse than Eleuther AI's 20B parameter model, while also being inconveniently large.
1. The OP did not criticize the headline; they criticized the content. If you read the article that you linked, you would find that they do, in fact, evaluate the performance of the model.
2. 540 billion parameters is notable for its size, which is likely why they lead with that particular headline.
The difference is PaLM was extensively benchmarked and it performed as well as it should, which is to say, amazingly well. The irony here is that you should instead be invoking that other ~500b model, Nvidia's Megatron-530b, which was undertrained, only cursorily evaluated (no interest in any new capabilities or even examining old ones like inner monologues) and promptly forgotten by everyone after the headlines about being the largest dense model: https://arxiv.org/abs/2201.11990#microsoftnvidia
It's in there; look for this sentence: "Training details and best practices on acceleration and stabilizations can be found on Medium (English)". And they did some top-dog stuff.
Given that Yandex is a crucial part of Russia's propaganda arm, we should consider the whole range of possibilities, from:
* Good. These are great researchers helping the community by sharing great work. (This is what I'd like to assume before I have any proof to the contrary.)
* Bad. This very expensive training run was approved by Ya leadership (which is under Western personal sanctions) because they've secretly built RU propaganda talking points into the model, such as "the war in Ukraine is not a war but a special operation", etc.
No, read my message again. As I said, we should assume good intentions first until proven otherwise.
But we should have better tools to test for biases/toxicity. Perspective API is a great tool for toxicity detection, but I'm not aware of any "propaganda" detection tool.
Is there a way for developers who do not have an AI/ML background to get started using this? I have been curious about GPT-3 but I do not have any AI/ML experience or knowledge. Is there an "approachable" course on Coursera or Udemy that could help me get started with technologies like GPT?
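Not an answer about courses, but as a practical starting point: the Hugging Face `transformers` library lets you play with smaller GPT-style models in a few lines, no ML background required. A sketch assuming `transformers` (and a backend like PyTorch) is installed; `gpt2` here is just a small stand-in model, not YaLM or GPT-3:

```python
from transformers import pipeline

# Downloads a small GPT-2 checkpoint (~500MB) on first run and wraps it
# in a ready-to-use text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

result = generator("Once upon a time", max_new_tokens=20)
print(result[0]["generated_text"])
```

Once that feels familiar, the concepts (prompts, sampling, tokenization) transfer directly to the bigger hosted models.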
In this case, it does, because the vocab is not a list of words but a list of tokens. Each token may be a word, but it might also be a phrase or part of a word. The tokens are generated to be optimal on the input data - i.e., for a given vocab size, to minimize the number of tokens needed to represent it.
Therefore, the size of the vocab gives a good guide to the composition of the data: if there were 10x more English-language data, the optimal split would dedicate more token space to English than to Russian.
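A toy sketch of why that happens, using a minimal BPE-style merge step in pure Python: the pair of symbols that occurs most often in the corpus wins a vocabulary slot, so whichever language contributes more data soaks up more of the fixed token budget.

```python
from collections import Counter

def most_frequent_pair(words):
    """Count adjacent symbol pairs across all words; return the top one."""
    pairs = Counter()
    for w in words:
        pairs.update(zip(w, w[1:]))
    return pairs.most_common(1)[0][0]

def merge_pair(words, pair):
    """Replace every occurrence of `pair` with a single merged symbol."""
    merged = []
    for w in words:
        out, i = [], 0
        while i < len(w):
            if i + 1 < len(w) and (w[i], w[i + 1]) == pair:
                out.append(w[i] + w[i + 1])
                i += 2
            else:
                out.append(w[i])
                i += 1
        merged.append(tuple(out))
    return merged

# Toy corpus: frequent character pairs get merged into tokens first.
corpus = [tuple("low"), tuple("low"), tuple("lower"), tuple("lowest")]
pair = most_frequent_pair(corpus)     # ('l','o') -- ties broken by first-seen
print(merge_pair(corpus, pair)[0])    # ('lo', 'w')
```

Repeat the merge step ~vocab-size times on the real training data and the resulting token list mirrors the data's language mix, which is what makes the published vocab a useful fingerprint.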
What are some use cases for something like this? I understand it says "generating and processing text", but is it a replacement for OCR? Or something else?
Well, you can custom-build a suitable system for a mid-five-digit sum. It's still not something every idiot can run, but most medium to large companies could certainly set up their own.
No, it's not. They straight up serve the Kremlin, promoting Kremlin fake news and silencing the Russian opposition (not much left to silence, but still). They can have whatever functionality they like; I still won't use it in a billion years.
They do business in other countries, and for that it is best for the business to appear as neutral as possible. We don't know how much they fiddle with the search results and ranking, but this still looks quite neutral to me: https://yandex.com/search/?text=russo+ukrainian+war
You're going to get downvoted, but Eric Schmidt worked regularly with the State Department, and Google employees were involved in spurring the color revolutions.
Julian Assange detailed this in a Newsweek article before his name and body were smeared into the ground:
Oh, but they say he's not trustworthy, or that it's a conspiracy theory that he was intentionally smeared. Well, the CIA and their contractors have been doing it for over a decade, even before he was unfairly accused of helping Trump:
Always the response is to smear with the same thoughtless label when no valid criticism is put forward. There is a link to an actual proposal by a CIA contractor, no tinfoil necessary.
Not yet at least, the political climate may deteriorate to that point, especially when it's about elections, given recent revelations.
Still, at least right now it looks to me - and I have visited Russia and Ukraine several times in the past and still have indirect connections (to people heavily involved in business there) - that there still is considerable more freedom from the government and its wishes for people and companies in the West.
If you publicly criticize a US politician you may get some hate messages, but at least they are from private citizens and you don't have FBI agents knocking on your door threatening you with prison. In Germany some rogue police were found to be sending threatening messages, but as soon as it was discovered the government acted against it. Also, in Germany there were even public rallies by pro-Russian folks; now try that in Moscow with pro-Ukraine banners... Russia even bans the colors yellow and blue, even when they have nothing whatsoever to do with Ukraine and are just decorative: "Russians Strip Yellow and Blue From the Nation’s Streets Over Ukraine War" -- https://www.themoscowtimes.com/2022/04/27/in-photos-russians...
>I reject the false equivalence of the DHS and FSB. Not gonna both-sides this, sorry.
lmao mkay. Not identical, but very similar. It's not even 'Alex Jones'-tier to say this. I think you forget where you are: if you are under the US (or even NATO), YOU WILL hear propaganda from your side, as the Russians do. It's NORMAL. We live under the control of a hegemon with self-interests.
May I remind you of these? And tell me the difference between these and Russian spookery:
Would you please stop posting flamewar comments and using HN for ideological battle? We ban accounts that do those things, and you've already been doing it repeatedly.
Sure, my rhetoric got kinda out of line. I get excited debating.
>and using HN for ideological battle
I reject the characterization; it almost implies I have an agenda with these posts beyond putting forward a point of view that's at least an alternative to the status quo, one that can make people realize they don't have any skin in the game - and that the US State Dept. does. I don't either. I don't really care about the outcome of the war.
You broke the site guidelines badly here. The rules apply regardless of how wrong another comment is or you feel it is. We've had to warn you about this kind of thing a lot. If you keep doing it, we're going to end up having to ban you, so please stop.
Unmasked as a shill for saying that great powers engage in propaganda, false flagging and dissent crushing.
I have no skin in the game. War, no war, it doesn't matter to me the outcome of this war to be honest.
EDIT: But if you are all moral highground, answer me this:
Why did the US goad Ukraine into taking a hostile stance against a neighbouring (and somewhat rival) great power? Was this in the interest of Ukrainians? Or in the geopolitical interest of the US? https://www.youtube.com/watch?v=93eyhO8VTdg
"Why did you, W, goad X into expressing their sovereignty against Y? Didn't you know that Y would react with violence? That makes W the bad guy"
No, Y is always on the wrong side; you can't use the threat of violence and then claim via realpolitik that the other side was in the wrong. "Moral high ground" means you act out of principle, not political convenience. In this case, Ukraine didn't want to be in the Russian sphere, so we supported them.
And now yeah, the US is paying a lot of money and inconvenience to support Ukraine. Gas will be more expensive, we're spending tens of billions on weapons. But that's because it's the right thing to do; not every decision is a realpolitik game about maximizing revenue from vassal states (which I hope Russia will learn someday).
Ukraine has self-interests; everyone does. But not everyone can actualize them, due to reality. The reality is that Ukraine neighbours a powerful hegemon.
Since international relations are anarchic (there being no supra-entity with authority over states [authority != international courts bullsh*]), Ukraine has no right (granted by a sovereign, which does not exist) to be sovereign. It has to go out and secure that for itself.
Ukraine thought it had US/NATO backing, which made it act in a more reckless way (the kind of thing that happens when you rely on your big brother). It escalated until it decided it wanted to join NATO. It was goaded.
>you can't use the threat of violence and then claim via realpolitik that the other side was in the wrong.
who says? That's your problem. You lack the 'anarchistic' framework of geopolitics.
Now, realpolitik-wise, Ukraine's self-interest (in being more independent of Russia through NATO) did clash with Russia's self-interest in being safe (and probably gave Russia an expansionary casus belli).
I feel that the US triggered and amplified the war - through regime change in Ukraine (yep, Maidan was a coup), recognizing UA's aspirations to join NATO, and making Zelenskyy comfortable enough to be harsher in negotiations (where he had no leverage, because Ukraine's power is small vs. Russia's) - which ultimately resulted in unnecessary deaths, just for the purpose of sphere-of-influence expansion.
>so we supported them.
Even if it's reckless and could trigger something like this?
Also, I will play the 'reversed roles card' again. This time with a REAL example.
Cuba. Was. The. Same. Thing.
> clash with Russia's self-interests of being safe
They clash with Russia's perceived self-interests of being safe, yes. The problem is, Russia defines "being safe" the same way it always has, under the General Secretaries of the Central Committee of the Communist Party of the Soviet Union and all the (other) Tsars before, going right back to when the Grand Duchy of Muscovy emerged from vassalage to the Mongols: by distance of their borders from Moscow. And the distance they want is at least up to Warsaw, Vienna and Sofia, but preferably Berlin or Paris (or, better yet, Lisbon).
That kind of clashes with the current world order, where there are quite a lot of currently sovereign nations in the way, which would have to be subordinated to Moscow – or basically just wiped off the map – to give Russia what its leadership wants.
What you're advocating is in effect that this is how it should be, because Russia is "a great power". (Newsflash: So were Germany and Japan in 1939. And, to compare with Russia's current equal in GDP, Italy.)
A more rational solution would be that Russia updates its concept of "being safe" to at least the 20th century. (Or, hey, one that worked for at least some countries even in the 19th: Don't be an asshole to anyone, then nobody will want to attack you.)
> yep, maidan was a coup
I've found that to be the most infallible heuristic on social media for – oh-so-coincidentally – the last third of a year: calls Maidan a "coup" → is a Putler-propagandist troll.
> UA ... NATO ... Zeleskyy [yadda yadda] ultimately resulted in unnecessary deaths
Oh, that's funny. And here I thought it was Putler's unilateral decision to start a war of aggression causing all those deaths.
> Why did the US goad Ukraine into taking a hostile stance against a neighbouring (and somewhat rival) great power?
You gravely misspelled "Why did the US support the sovereign nation-state Ukraine in asserting its independence against a neighbouring rogue state whose dictatorial regime has delusions of still being a 'great power'?"
But I don't know, maybe that was what you meant to write and some evil employees at one of Putin's troll factories inserted their master's propaganda into your otherwise so well-thought-out piece.
You can twist it however you like, and select individual sentences and ignore the context and everything else I wrote. At this point (again, it may get worse; it does not look good IMO) Russia is at least an order of magnitude worse. You have individual cases - maybe - in the US, but in Russia it's systemic and systematic, and goes all the way to murder, not just of the person but of their entire family - as we saw in March, when two oligarchs and their entire families were murdered, one in Russia, one in Spain (summary of oligarch deaths: https://www.businessinsider.com/these-are-all-the-russian-ol...). On top of that, various murders of "traitors", e.g. in the UK and in Germany. Trump got close in rhetoric to what Putin said on Russian TV in March but did not have a chance to implement it, and US institutions still resist and don't (yet?) follow a dictator blindly, as they do in Russia.
> Take down Russia protest vote app or go to prison
What about the Canadian truckers? Didn't Trudeau call them terrorists and take their trucks, donations, bank accounts and driver's licenses? There is no right to protest anywhere, don't kid yourself.
The Canadian truck protesters were allowed to shut down the center of the city, blast their horns 24 hours a day, and shut down a major international trade route. They were permitted to do this for weeks before the citizens got sick of it and demanded action from their government.
They gave protest a bad name.
Your conclusion that "There is no right to protest anywhere" is simply ridiculous.
>Your conclusion that "There is no right to protest anywhere" is simply ridiculous.
BLM rioters did this, and more. Violence + Property damage + Corporate Backing + gov backing.
They didn't have their donation money seized, and there was almost no resistance to establishing order.
> I doubt that anything like this happend to Google execs in the US:
It seems plausible; we don't know what gets done under the FISA court but it would presumably involve companies like Google. Some suited agent of the US government turning up at Google HQ and threatening jail time under some FISA warrant if some pro-Trump something doesn't disappear off Google.
That'd be a scandal but not the worst abuse of the secret court system. It hasn't exactly covered itself with glory since inception. They already spy on basically everyone and that is a lot worse than some light censorship.
I would assume this could go unsaid, but apparently it needs to be said somewhere in this thread: there is zero comparison between the US and an autocratic dictator who attempts to kill and then jails his opposition, runs fraudulent elections, kills journalists, and invades sovereign countries. Zero. None. Zero.
>fraudulent elections: funny how the concerns that 'half' of the US had with its elections were dismissed. Especially when conditions were different, using a method usually agreed (until now, because of the narrative) to be prone to tampering. So much for free and fair elections.
Sure, there's probably a non-zero amount of germs in the pasteurised, homogenised, sterilely packaged milk I buy at the supermarket. So is drinking that equivalent to sucking the pus out of a punctured boil on the arse of a diseased cow?
No. There's zero comparison. None. Zero.
Learn to read, man. If nothing else, it'll make you a better propaganda troll for your Kremlin master.
Well, they've made their choice: they silenced our protest and opposition, and later spewed pro-war anti-Ukrainian propaganda using the country's largest media outlet (Yandex News).
If you're profiteering from our suffering and choosing the Kremlin's needs over ours, don't be surprised when we tell you to shove your AI models and your search.
It's still working as usual and they announced the transition after 8 years of warmongering and blacklisting all opposition resources. And only when sanctions hit.
Now they scramble to present a whitewashed image to Western public. They will probably put themselves forward as great contributors to open source.
From context, pretty obviously "we Ukrainians". Didn't do too well in elementary reading, did you?
> I am not in this together. So change it to "I". I don't care about you lot... lmao
Thank you for so effectively demonstrating what a despicable excuse for a human being you are. I'll do my best to remember this when next I come across anything from you.
Not really, it's very different for Yandex in particular. Along with several other companies like Vimpelcom, they started the "Safe Internet League", an organization which exploited the think-of-the-children argument to build the censorship regime from scratch. They practically created the original censorship laws, or participated in their creation, at a time when they were in the best position to resist the government (and had the incentive to do so). As an example, Telegram successfully resisted the censorship while having much less leverage, much later.
Of course Yandex likes to pose as the victim of censorship, but the truth is that they are the censors themselves. They've been steamrolled by a runaway process they helped to create.
And someone upthread claimed their image search was so great in comparison to google... because google also censors their results. They just censor different things.
Yeah we definitely shouldn't worry about the political sympathies/vulnerabilities of the web services we use as the foundations of our shared knowledge...
There's a world of difference between Five-Eyes and being harrassed, mobbed, jailed, having a "Z" and "traitor" spray painted on your apartment door or being murdered.
Conflating those two clearly means you don't understand what's going on in Russia and its Putin-controlled satellites like Belarus.
"Despite all the difficulties created by the Kiev authorities, over the past day, 29,733 people, including 3,502 children, were evacuated from dangerous areas of Ukraine and the Republics of Donbass to the territory of the Russian Federation without the participation of the Ukrainian side. And in total, since the beginning of the special military operation, there are 1,936,911 people, of which 307,423 are children," Mikhail Mizintsev, head of the National Defense Control Center of the Russian Federation, said at a briefing on Saturday.
https://www.interfax.ru/world/846957
There's a world of difference between living in Russia and using Yandex to search for how to kill Putin and living in the west and using Yandex to search for how to spin up a FastAPI server.
Conflating those two clearly means you don't understand that everyone isn't in the same situation as yourself.
People in Russia felt much safer using iCloud, Gmail or Google Drive. Of course those comply with some requests from the Kremlin or the police. But Yandex or VK just hand information straight over, often without much procedure.
You misunderstand. The NSA went out of their way to tap Google's lines outside of the US, which made the leadership at Google furious. It accelerated the work to encrypt international fiber (I think many people were really bothered by the tcpdump of a Bigtable RPC containing a user ID). I was at a conference shortly after and saw an SVP rip an NSA rep to pieces.
If Google is doing anything that is required of them legally as a US corp, I don't have a problem with that.
That's what Google claims, however the leaked slides claimed "direct access".
edit: Does it really matter if they set up an FTP server instead of direct access, when we know a request can literally ask for "all" data (see Verizon)?
> When required to comply with these requests, we deliver that information to the US government — generally through secure FTP transfers and in person," Google spokesman Chris Gaither told Wired, among other news outlets. [1]
right, you're discussing the mechanism by which Google shares information with the US government- when required by law.
These systems don't give access to "all" data. Telephone companies are different- AT&T had a long standing, off the books agreement with US intelligence agencies (see Idea Factory for a fact-based discussion of what AT&T did) to share large amounts of information illegally.
We know that at least some companies were ordered to handover all data, continuously [1].
edit: I think we have enough evidence that I would assume that it's valid for the other companies on the slides, and if it's not true you'll have to provide some proof of that.
edit 2: [2]
> It searches that database and lets them listen to the calls or read the emails of everything that the NSA has stored, or look at the browsing histories or Google search terms that you've entered, and it also alerts them to any further activity that people connected to that email address or that IP address do in the future."
> Greenwald explained that while there are "legal constraints" on surveillance that require approval by the FISA court, these programs still allow analysts to search through data with little court approval or supervision.
> "There are legal constraints for how you can spy on Americans," Greenwald said. "You can't target them without going to the FISA court. But these systems allow analysts to listen to whatever emails they want, whatever telephone calls, browsing histories, Microsoft Word documents."
> "And it's all done with no need to go to a court, with no need to even get supervisor approval on the part of the analyst," he added.
edit 3:
> Equally unusual is the way the NSA extracts what it wants, according to the document: “Collection directly from the servers of these U.S. Service Providers: Microsoft, Yahoo, Google, Facebook, PalTalk, AOL, Skype, YouTube, Apple.” [3]
...has become such a huge tinfoil-hat kook that it taints anything he's ever said and done. I have no way of knowing when his brain-rot started to affect his writings, so I can't really trust the shit that seemed so convincing back in 2003 any more.
Weird how there is limited hard evidence of a secret, illegal government program... It's still a lot more than the evidence I've seen for the claims that Yandex proactively shares data with the Russian government.
> The difference is that checks and balances are much stronger in US,
You say that after we were talking about the NSA literally spying on US citizens, and without any proof? C'mon, are you really going to badger me about not having the exact "hard evidence", when you haven't even read my sources or provided ANY evidence yourself?
edit: Yes, it got challenged, but only AFTER being leaked by a whistleblower who still can't return to his home.
> Snowden was charged with theft, “unauthorized communication of national defense information” and “willful communication of classified communications intelligence information to an unauthorized person,” according to the complaint. The last two charges were brought under the 1917 Espionage Act.
The Espionage Act has no whistleblower protection. If the courts were allowed to rule honestly and without political entanglements, there's no way the Espionage Act would be held constitutional, prima facie.
Every country is or has been bad in some context in different times, because doing the interests of your country often translates into doing harm to some others. Yandex is a really nice search engine and I agree it's excellent for image searches compared to Google results polluted with Pinterest links and other cancerous SEO rubbish.
But does Yandex echo propaganda for the Kremlin? Yes, of course, as do Google and most of the others for their advertisers and governments, albeit to differing degrees. The usual approach when someone or some company with a controversial public image does something good with apparently no strings attached should be "Timeo Danaos et dona ferentes" - that is, take the gift but don't trust them, no matter if they're called Google, Microsoft, Yandex or whatever.
Their purpose is of course to associate the Yandex brand, and therefore Russia, to something perceived as good, have more people use it, so that more users will be exposed to their filtered news. Just be aware of that, take the good and ignore the rest.
Just take it off to America, away from us, thank you. Along with VK. A great search engine and a social network, full of backdoors for thugs and corrupt police, censorship and other lovely stuff... but you'll probably say that Google is full of that too, because you have no experience of living in Russia.
2003 certainly didn't have a better search engine. It only had a much smaller, open and un-SEO-biased Internet, making the indexing job correspondingly easier.
First of all, regardless of the political situation, this is a great step in making ML research actually open. So huge thanks to those developers who pushed to make it public. Still...
Yandex does in fact share responsibility for the Russian government's actions. While it is impossible to fight the censorship itself, they could certainly have shut down their News service completely.
Yandex could also certainly have moved more of the company and its staff out of the country. It was their deliberate choice to stay in Russia and gain advantages in the local market by using their political weight.
Google is a US company which pays taxes in the US. The company, just like everyone in the US, obviously does share responsibility for what the US government does. Fortunately, Google and its leadership actually do have political positions, even if you don't like them.
In any case, as the unfortunate owner of a Russian passport, with friends and colleagues in Ukraine, I am more affected by Putin's war than by anything the US does.
So I want Yandex to be seen as part of the Kremlin propaganda machine and treated accordingly. This company grew into a monopoly in many markets in Russia, and they directly benefited from the Putin regime. After Ilya Segalovich died, the company started to be "out of politics", and this complete lack of any political activity helped lead the country to these terrible events.
Blatantly incorrect. Google engages in egregious political censorship all the time. Including censorship for Russian government and censorship of US anti-war voices.
Considering the importance of the topic, and provided the linked articles actually contained examples of Google censoring anti-war propaganda, I believe the swipe would have been fully justified.
A highly emotional tone changes how the data affects the reader. If he is right, I will surely remember better next time that Google is in the same ballpark, because the insult hit hard. If he is wrong, I will know to ignore such claims in the future unless they come with a direct quote or something else that takes less time than reading an entire linked article.
There is a difference between "Google does not censor anti-war content" and "Google does censor anti-war content, but usually has an excuse I find acceptable".
When a company puts John Lennon's Happy Xmas (War Is Over) behind an age-restriction banner[1], the question stops being "Is there censorship?" and becomes about the logic of such censorship.
>The third one was temporary until Google stopped operating in Russia altogether.
They've censored other things on behest of the Russian government for years[2]. Again, I cannot fathom how people on a tech website like HN can be unaware of such things. This is common knowledge broadly covered on mainstream websites.
Precisely zero of what you mentioned so far is censoring anti-war content.
Even in the translation case (which I assume you mean by your "excuse" remark), the original source is still available as-is. I am not even sure from the description what translation team it was talking about or what it has to do with Google exactly. The passage "translate company text for the Russian market" sounds like it is about translating Google's own interfaces, help pages, press releases, or support articles into Russian - i.e., no external voice is being censored.
If you read the resulting articles you'll find a few of them suggest that all the deaths were staged or committed by Ukrainians. Headlines like "The truth is out there..." or "Global lies..." are examples. There still are many results from mainstream western media on the other hand.
Google, in contrast, has zero results implying the deaths were staged or committed by Ukrainians.
Bad comment. First, can you please name the founder? Because according to Wikipedia, Ilya Segalovich never lived in Israel, and Arkady Volozh lives in Tel Aviv (not a settlement). Both are Jewish, so why present it as some must-be-hidden connection with Israel?
Also, nothing shady from Israel's side in terms of sanctions. They have large Jewish communities in both Russia and Ukraine and need to be on good terms with both governments to get help supporting (or evacuating) them. Not to mention Russia has a heavy presence in Syria, which borders Israel. A conflict with Russia without anything like NATO backing is out of the question.
>Elena Bunina, who is Jewish, is stepping down from her role as CEO of Yandex LLC, 'Russia's Google,' amid the war in Ukraine. Sources confirm she is in Israel and has no intention of returning to Russia
This led me to look up similar information. Another article [1] looks into this a little more deeply.
I feel there is a resurgence of resentment toward European dominance over the last 200 years, and Israel is just another point here. Thus, we have material hypothesizing the illegitimacy of European Jews, when Jews of other ethnicities may have better acceptance in the region. (But all of this is just a vague hypothesis.)
>> Be kind. Don't be snarky. Have curious conversation; don't cross-examine. Please don't fulminate. Please don't sneer, including at the rest of the community.
Not for being Russians, but for active participation in censorship by tweaking their news aggregation to show only hand picked government approved sources
Just read about Yandex.News, which displays censored news sources on the Yandex front page that millions of people visit. There is really no hidden censorship here - they just follow a Russian law that literally whitelists exclusively state-controlled press.
They show Kremlin propaganda on their front page, which makes Yandex part of the Kremlin propaganda machine. They could have shut down the news aggregator, but they chose not to.
They are complying with Russian censorship laws, and it's gotten so bad that they are planning to sell the news service altogether to VK, which is far worse than Yandex when it comes to how eager they are to enforce these laws and to work with the cops.
Settling land that was recently taken from Palestinian families by force, often (literally) knocking the existing houses over with a bulldozer. In my opinion, participating in such an atrocity is a moral red flag.
What has Yandex got to do with any of that? Is everyone who lives in a country that commits atrocities "shady as fuck"?
<edit>: reading closely I see that the initial allegation did use the word "settlement", and indeed that would constitute ethically questionable behavior. However, a sibling comment refutes this.
I think you’re missing that the commenter was talking about Israeli settlers rather than Israelis in general? The settlers are controversial because they live in areas Israel occupied during the 6 day war in 1967. Much of the world considers their presence illegal, though Israel disputes that. Many, if not most, would consider living in these settlements a deliberate political provocation.
*edit: you hadn’t posted your edit yet. I have no idea if the allegation is truthful.
I'm not sure if the parent comment really meant settlers specifically, since some people consider all of Israel to be "settling on land stolen from Palestinians".
That said, you might be right about parent's intention, in which case I agree with your post. Israeli settlers are definitely considered to be doing it partially as a political move, though some are doing it for economic reasons.
Have you seen what America has done in the Middle East the last 20 years? If you want to make a moral point then you should start there instead of trying to grind whatever axe you have against Israel.
Why does it have to be mutually exclusive? The thread is about Israel, so they're pointing out Israel's crimes. You can be critical of multiple governments simultaneously.
You've posted 7 highly repetitive comments taking this thread straight into flamewar hell. That's not what this site is for, and destroys what it is for. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules, we'd appreciate it.
Hijacking top comments when flamebait hasn't succeeded in setting an entire thread on fire yet is particularly abusive.
> Yandex Search engine hides the pictures of Bucha and Irpin massacre as well as Kharkiv and Mariupol destruction
That's just not true; try it yourself. It just does not display the latest images by default (though that's easily turned on in the filter settings), and that's why, on the very day the news appeared on the Internet, people went crazy claiming that Yandex somehow "hides the truth"...
> Yandex News service ignores the genocide currently happening in Ukraine
That is actually required by the Russian regulations on news aggregator services. Yeah, those regulations are unfair and oppressive, but it's the local law with which Yandex must comply. And by the way, they're going to get rid of that toxic asset: https://techcrunch.com/2022/04/28/yandex-sells-news-zen-vk
(I suppose they can't just shut it down because the government threatens to nationalize Yandex in response)
> Yandex supports the Russian Terrorist regime
Can you please show any public statement from Yandex from which one could derive that?
Where does that claim come from? The repercussions could have been very severe. The Russian government easily takes over and seizes control of "rogue companies". Russia is not a free country, my friend.
I am from Russia (though I moved to Turkiye once the war began) and I have several friends working at Yandex in different positions, including some quite high in management. So I am well aware of their reasoning for continuing to work at Yandex.
Basically, even today, after the war has begun and tens of thousands have been killed on both sides, some of the people working there still hold the illusion that they can continue to live in their bubble and keep innovating in Russia as if nothing happened. So no, they are not some poor IT company oppressed by the government. Every employee who wanted to emigrate was able to move abroad.
6-10 years ago Yandex could certainly have shut down their news service without being seized. Back in 2008-2012, one of Yandex's co-founders and its ex-CTO, Ilya Segalovich, was a regular at street protests almost until his death in 2013, and this did not cause the company to be seized.
> I suppose they can't just shut it down because the government threatens to nationalize Yandex in response
They can destroy equipment, safely delete all the code repositories etc. beforehand, thus rendering the company useless before the nationalization. But $$$ is more important.
> Can you please show any public statement from Yandex from which one could derive that?
Yandex pays tens/hundreds of millions in taxes and thus finances the war.
> Yandex pays tens/hundreds of millions in taxes and thus finances the war.
So what? You shut down a business with 20k employees on the grounds that you do not agree with local regulations, or because the government did something bad? That is as far from reality as it gets.
> But $$$ is more important
Yeah, I think preserving the company is more important than that proposed suicide move (that wouldn't have worked anyway because the company is just too huge).
It's not just money, it's people, it's culture, it's all the great projects the company does.
The Russian army is not sponsored by Yandex. The money comes from selling natural resources... and mostly to Europe, surprise. It's about $1 billion per day. Tax money from private companies is nothing compared to that. So Europe is sponsoring the war way more than Yandex.
Let's then shut down Europe, right? You could say — look, they're trying hard to get rid of Russian resources. But Yandex is also trying hard to become less dependent on the Russian economy — they are trying to internationalize their business. And all that "canceling" of Yandex really doesn't help (in fact, it does the opposite).
> So Europe is sponsoring the war way more than Yandex.
That's unfortunately true. The dependency is real, and it will take a long time to get rid of it.
> And all that "canceling" of Yandex really doesn't help (it does the opposite in fact).
Cancelling Yandex completely, as in forcing it to collapse, would help a lot. Yandex services (together with VK) are extremely important in the Russian society and economy, and their collapse would weaken Russia and its ability to wage (military/economic) war a lot. As such, this would be the best course of action (as mentioned before, burn the equipment, delete the code).
> Cancelling Yandex completely, as in forcing it to collapse, would help a lot.
It's just wishful thinking. It won't "collapse"; it would just become controlled by the government, and then it truly becomes an instrument of evil, so that not only News but every service Yandex provides will serve the government's needs. They will recruit soldiers through Yandex services, they will make Yandex develop AI-controlled tanks and whatnot. Everything that Yandex doesn't do now (because they do not actually support the war) — they will make it do.
> their collapse would weaken Russia and its ability to wage (military/economic) war a lot
Of course not, because the Russian army and the military-industrial complex are in no way dependent on the search engine and food delivery services Yandex provides. You can destroy those, sure. People's lives get slightly worse, and then competitors catch up (there is a lot of competition to Yandex in Russia, and they are not going to fade away).
> It's just a wishful thinking. It wont "collapse", it would just become controlled by government
That's why part of my suggestion is to burn the equipment/infrastructure and delete the code.
> They will recruit soldiers through Yandex services, they make Yandex develop AI-controlled tanks and whatnot
And the only thing stopping them now from doing that is that Yandex is not nationalized. Yeah, sure.
> Of course not, because the Russian army and the military industrial complex is in no way dependent on the search engine and the food delivery service Yandex provides.
Yandex provides many services; it's much like Google — maps, translation, drive, mail, etc. Bringing it down would cripple many private and economic activities. Russia can't sustain waging wars with no economy and a disgruntled population.
With the exception of VK, there isn't really any step-in competition to Yandex. Even if there was, losing all your data in e.g. mail/drive will have significant consequences.
> Russia can't sustain waging wars if they don't have an economy and disgruntled population.
Russia can wage wars on natural resource selling alone, it only needs to keep the gas and oil flowing through the infrastructure. All those private companies' activities the government sees mostly as a distraction, it doesn't give a damn about them (until they get in the way). They don't matter much.
It's very much unlike the Western economies where the private companies drive the economy. Russia is more like a giant oil and gas pipe with military industrial complex around that.
> exception of VK, there isn't really any step-in competition to Yandex
If we talk about city services (taxi, delivery, online shopping) there are lots of other players. Search/mail/social — then yeah, apart from VK not many. And VK is in fact state-owned. Yandex is not. So if Yandex leaves the scene, the only game in town would be state-owned. This only reinforces the evil regime.
> That's why part of my suggestion is to burn the equipment/infrastructure and delete the code.
It's pretty unrealistic. You can do it in a small company, easy. In a huge decentralized company, I don't know how one could even pull that off. There simply isn't a way to "delete all the code", nor a single place where you could burn all the servers. It just doesn't have a kill switch. And the moment you try that, the government swoops in and it's goodbye to the company.
> And the only thing stopping them now from doing that is that Yandex is not nationalized. Yeah, sure.
If Yandex gets nationalized, the government will replace the management and the uncooperative employees. Most of them would just leave the day it happens. It won't be Yandex anymore of course. That is essentially the same as killing the company, but worse, as the remnants could still be used for evil.
> Russia can wage wars on natural resource selling alone, it only needs to keep the gas and oil flowing through the infrastructure.
With sanctions it can't.
> So if Yandex leaves the scene, the only game in town would be state-owned. This only reinforces the evil regime.
It doesn't. Nationalization of companies rarely works well. Even then they can be hurt, made unprofitable, forcing the evil regime to divert resources.
> In a huge decentralized company I don't know how one could even pull that off.
A lot of things are not that decentralized. GitHub/GitLab is actually central, just delete it. Dev machines can be wiped out remotely. Private keys, certs, credentials are stored somewhere more or less centrally. Delete the user data on the servers.
You can do a lot of damage. It might not be perfect — some git clones might survive somewhere — but it would cause a major outage/data loss.
> And the moment you try that, the government swoops in and goodbye the company.
You overestimate the state's ability to react quickly enough. If you plan ahead, this can be pulled off in a short time.
> That is essentially the same as killing the company
So what? You're somehow attached to keeping the company afloat. It has no value compared to hundreds of people being killed daily in the war as we speak.
It does not need to "go well". All the government needs is control. They have VK (controlled by Putin's people). They will be happy with either outcome with Yandex — kill Yandex entirely, and people will just switch to using VK services. Make Yandex controlled by the government, and then its resources will be used for evil purposes. So either way it favors the government, reducing overall freedom.
> It has no value
It has value. I insist that the net effect of "killing Yandex" is strictly negative: it would do nothing to prevent actual people from getting killed — the actual effect would be the other way around.
> With sanctions it can't.
Unfortunately, not true. Even if Europe stops buying Russian resources today, the remaining profits from selling to China, India, etc. would cover the expenses. Oil and gas prices will rise greatly (it's already happening), which would compensate for the loss of the European markets.
Back on topic, are you in favor of releasing language models if it means we won't be able to prevent the Russians from using them for propaganda for example?
As long as we're going on tangents, according to the Zach Vorhies leak, Google censors lots and lots of topics for blatantly political reasons[1].
This is often overlooked and it's a fair point in defence of the people working for Yandex. You can't judge someone just for working for Yandex or even most Russian companies. The people who have voiced concern are already out of the company and it's perfectly reasonable that the rest would like to keep their jobs, especially in uncertain economic times with all these sanctions against Russia.
However, this also implies that Yandex, as a company, cannot be trusted. It's not the researchers' fault, but they simply aren't allowed to work in a way that doesn't reinforce the Russian government's bias. As usual, the Russian government is the real villain here, but its authoritarian rule "infects" any company and country it has control over.
It can be assumed that the people working for Yandex are also victims of their abusive government, but that doesn't change the fact that their work is unlikely to be trusted outside the Russian sphere of influence.
From what I've seen, telling the truth in authoritarian countries doesn't end well.
To the best of my knowledge, they are a Russian company - it's not like they can just tell the truth and move away from Russia that easily, so I think (and hope?) they're just playing a political game.
They will simply have their company taken away from them.
Nevertheless, they had many years before the war to start marking their news as 'Official'. Or sell the news service. They certainly could have done so. This would have solved their image problems.
A Russian company follows Russian propaganda rules — that's hardly news. It's pretty clear that concepts like "free press" and "freedom of information" aren't compatible with the Russian regime, and expecting such features from a company operating mostly in Russia is kind of pointless. It should be obvious that anything from Yandex (or any company targeting Russia, really) should be met with a good deal of scepticism. Companies like Yandex and Baidu can still deliver usable research, though, as long as you realise with what kind of perspective their code was written and their algorithms trained.
In a similar vein, Microsoft has censored "tank man" from their image search (and that of all their image-search customers, such as DuckDuckGo). Google is more transparent about its censorship, usually showing a link or explanation at the bottom of the page for why certain information was removed, but it still reflects the values of Western civilisation, for example by delisting Russian propaganda such as RT.
These biases are everywhere in all research into this field. The Russian situation is obviously worse than that in many other countries, but you should never forget the bias that AI models from free countries have been trained with either.
>The ex-head of news at Russia's largest internet company has advice for his former colleagues: quit.
>Lev Gershenzon worked at Yandex in various roles for four years, according to his LinkedIn profile. He took to Facebook early Tuesday morning to warn people still working at the company — which is one of the largest search engines in Russia — that it was contributing to the censorship of the country's invasion into Ukraine.
>"The fact that a significant part of the Russian population may believe that there is no war is the basis and driving force of this war," Gershenzon wrote, also tagging six of his former coworkers. "Today, Yandex is a key element in hiding information about war. Every day and hour of such "news" costs human lives. And you, my former colleagues, are also responsible for this."
Those are trivial to verify. If you can't do your 5 minutes of research, OP should not feel obliged to humor your attempt to create busy work for them.
Your comparison fails a test of facts. Yandex actively censors any perspective not approved by the Kremlin. Google does not do anything comparable to this.
Not to mention that Yandex does it in Russia because the law forces them to, while Google does it happily just to maintain the political status quo, of which they are a part.
There is lots of content Google bans/hides: copyrighted content, adult content, child pornography, official secrets, etc.
I don't think that's so different from other countries, which also have a (partially overlapping) list of what's not allowed.
Normally, when people think about that they say "well pictures of naked children are morally wrong, whereas talking about LGBTQ stuff is fine". But people in other parts of the world might have different morals and might think the other way around.
- Disparage or belittle victims of violence or tragedy.
- Deny an atrocity.
- We don’t allow content that promotes terrorist or extremist acts, which includes recruitment, inciting violence, or the celebration of terrorist attacks.
Now I don't think these are bad rules, but they are rules that very much depend on the official narrative. A terrorist to one is a freedom fighter to another. These are rules that can be applied as wanted.
I have huge respect for developers at Yandex. It's kind of sad that achievements like these are tainted by the fact that they come from Russia (and I speak as a Ukrainian). I wonder if the permissive license is able to mitigate that.
Well... I'm sorry if I reach for the reductio at Hitlerum, but any achievements Nazi scientists might have reached in concentration camps are definitely tainted. Similarly, achievements in the field of online consumer analysis in a country where consumer-privacy protections are nonexistent, surely should be considered tainted...?
Yandex search, which works on top of their AI, is a straight-up propaganda machine for the Russian government. Every time you go to yandex.ru, you're greeted with curated happy news about how Ukrainians are killing themselves and how Russians are not fascists at all.
Their government does. They empower it. Just to be clear: it is their army and their government doing the deed — the one they elected, the one they pay salaries to.
They are and we use them all the same. Rockets fly almost every week now, jet engines are the most common form of propulsion, tons of medicine forcibly tested on innocent people is on the market, and to pass up any of that technology would be pure idiocy.
That is why the US imported Nazi scientists in bulk to work in its labs, starting with Wernher von Braun, who became the heart of the US space program. The Soviets did the same at the time.
If you are so conscious about consuming tainted fruits, the only way to escape is to live on some deserted island catching your own food.
Wow. Yes you should have refrained from this. You are comparing Nazi scientists who killed many innocents to some software engineers working on a cool project and releasing it for free to the world.
This is nothing like that, because the question is not one of their own personal actions - but of their nationality or ethnicity. That, until about 4 months ago, would have been widely acknowledged as racism.
The difference between holding values, and holding values when convenient rather sums up the entirety of human history in one phrase.
I'm ethnically Russian (mostly), although I've never been to that country and have less influence on their foreign policy than your average European (who at least has some say in how his own country behaves towards Russia — and we've seen how well they managed that). I don't know how this would translate to the real world if I lived in "the West", but from what I'm seeing on the internet for the past few months, it definitely is about ethnicity. I've been called many things and blamed for everything bad that has happened since 1945, and not many seem to care that half of Putin's army consists of people of Asian and Caucasian ethnicities, and there are many Russians in the Ukrainian army. If you go to places like r/worldnews, there are open calls for violence that have strong fascist overtones, and those seem to be getting more popular.
Can't say the same about HN, it's one of the few places that seems to have kept its sanity (for now?)
How is your Russian ethnicity different from that of the average Ukrainian?
But yes, there is, sadly, some discrimination. It's got nothing to do with ethnicity though; it's the same thing that happened to ordinary Germans in 1939-1945, and for the same reasons.
Yandex is arguably the biggest censoring and propaganda machine in Russia.
Yandex News is IIRC the biggest news media in Russia.
It filtered all results on protests and opposition resources, leaving only government propaganda. Same with the war. Filtering, not downranking — just straight up not showing them.
Editors were fired for not staying in line until it was completely sterilized and filled with pro-war propaganda.
Doesn't mean you can't be punished for the actions of the government though. See: Western companies and government pulling out of Russia or issuing sanctions on private individuals. I think it's even worse to say "we don't think you're guilty, but we do think you should be punished."
And yet sanctions never work as a deterrent. Cuba is still socialist after 60 years of sanctions. Great deterrent! No, sanctions just punish generation after generation of innocent people and serve no other purpose.
If you still maintain that the purpose is deterrence, then you must be a fool or worse, since it never works! Can't you learn from the past?
I will make an important point: sanctions CAN work only to prevent something from happening. Once it has happened — e.g. the war started — there's no sanction that can stop it, and that's exactly when sanctions stop being a deterrent and start being a punishment.
Obviously you have a much stronger opinion on that than I do, but at the very least, sanctions should deter other countries from acting in a similar fashion. For example, if — hypothetically — China is considering invading Taiwan, they will have to factor in that the Western world will stop doing business with them. If the West hadn’t put sanctions in place for Russia, that would lessen the concern for China. Maybe you think that isn’t worth it — that’s a valid personal judgement of course.
> Western companies and government pulling out of Russia
you can't really blame the companies for not wanting to be associated in any way with a Nazi regime committing genocide against a neighboring country.
And Starbucks or Mercedes pulling out of Russia isn't punishment. It is freedom of association and economic activity. Russians are whining about "punishment" because they have no idea about freedom, and this is them getting a bit of a taste of it. They think they can plaster the whole country with their swastika — "Z" — in enthusiastic support of the bloody genocide, while the whole world shouldn't be able to express its disgust at those happenings.
Especially funny how Russians are whining, cry-baby style, about the West's sanctions supposedly violating their property rights, while Russians have been violating the property rights of more than 40 million Ukrainians (even if we don't count all the mass killing and raping of civilians that Russians have been doing there). The deep and profound disintegration of any morals in my old country is stunning.
>or issuing sanctions on private individuals.
due to the size of their wealth and the de-facto rules of economic activity in Russia, that isn't the private wealth of private individuals — they are an integral part of that Nazi regime, and thus they are guilty too.
And as for the sanctioned Russian government officials — take, for example, Roskosmos CEO Rogozin, who is one of the main founders of the Russian Nazi movement "Motherland" (people from which have since taken prominent roles across the Russian government and the ruling party), one of the most prominent voices around Putin and a Putin favorite, who gave a Nazi salute at the end of his Nazi speech at the Russian Nazi march in Moscow. The specific phrase they all give the Nazi salute to is "Glory to Russia!"
Say what you want, most people in Eastern Europe, Ukraine and even lots of people in Russia would prefer to be under protection of NATO.
If it were not for NATO, Russian rapists would already be in Tallinn, Vilnius, and Helsinki, claiming they were offended by historical injustices and that therefore women should be raped and men shot dead ("denazified").
Unfortunately firing Lyudmila Denisova doesn't unrape all the victims of all those mass rapes by Russian soldiers in Ukraine. She was doing propaganda and Ukraine rightly fired her for that.
Russian soldiers have been shooting civilians at will (it is called "safari" in the Russian army, and those Bucha streets — Yablunska and Vokzalnaya — with bodies lying along the road are an example of such a "safari"). It would be a unique miracle in history if the soldiers didn't rape, given such power in such conditions (the mass rapes by Soviet soldiers in Germany in 1945 — 2 million women were raped — illustrate the result of such power by an occupying force). And there is no miracle — just google the Ukrainian rapes.
No, there are credible and quite horrifying reports[1]. Also, some of the actual rapists were identified and victims stepped forward. Please don't spew Russian propaganda.
How is it Russian propaganda? It's been widely reported by western sources that Denisova was fired by Ukrainian officials for lying specifically about the mass rape claims.
I'm not suggesting there have been no rapes or that Russians are the good guys. Just that the reports of mass rapes were fabricated (Denisova admitted she thought it would help Ukraine obtain more sympathy and weapons from the west).
They might as well agree with it. Or agree with some of it. It just strikes me how everyone wants to put everyone else into these well-defined black-and-white boxes. I get that it’s simpler, but it’s often at odds with reality.
Because most of those "everyones" are not facing the choices themselves and are basically keyboard warriors. Let's see what they say when they're asked to sacrifice their own well-being to stay on the "high moral ground".
I think we're already seeing that in recent French elections. I have a feeling this is only the beginning.
As a relatively neutral party in all of this, whose country hasn't tainted itself (but who nevertheless spent all my life under a similar autocracy), I can't help but shake my head at keyboard revolutionaries who definitely would have overthrown the regime, if only they lived in Russia. You just have no freaking idea what you're talking about. Guard your democracy as best you can so you don't have to find out.
Basically, they've taken it a step further with censorship of all media not controlled by the government, which at the time (2014) couldn't be penalized whatsoever.
"Couldn't be penalized whatsoever" wasn't enough? In a broader view, the government could start taking hostages like they did with Google last December, but that wouldn't have helped reach the goal even a little bit back then, since there was no leverage on the technical side of things.
> What's the alternative? Open rebellion against the state?
Yes. HTH!
To elaborate: At some stage, that becomes the only acceptable alternative; not doing so is morally culpable.
The world learned that in 1945, two ways:
1) "I was only following orders" was deemed not a valid excuse at the Nürnberg trials; not refusing orders like that is complicity; and
2) Germany as a whole was de-Nazified. Just like Russia needs to be now. (But more thoroughly: in Germany's case, it was an aberration of a dozen years; in Russia's, it's a millennium of unbroken history of totalitarianism.)
Too bad the world seems to have forgotten those lessons since then.
Yandex has a news block right in the middle of its front page. Only "approved" news sources could be shown there.
For people not specifically searching for recent updates on a topic or particular media outlets it represents the news.
As far as I remember, they wanted to remove this block completely instead of censoring it, but they were not allowed to. I think it was described in one of the documentaries about Yandex, but my memory is vague now.
I know, but the OP meant Search, not News. Google and even DDG can downrank certain news sources, and I wouldn't be very surprised if Yandex did too, but I'd really like to see examples.
This is a stupid argument. Ukrainians are being killed and you compare that to fear of being arrested?
It's a nice excuse. "Oh yes I don't support my government, but you know these arrests, I'd rather stay in my cosy home and enjoy my tea. Now could you please lift the sanctions? I already said I don't support my government, why normal people like me should suffer?" etc etc
>""Oh yes I don't support my government, but you know these arrests, I'd rather stay in my cosy home and enjoy my tea."
Why are you so surprised? This is exactly how most of the population behaves everywhere. People go about their business and "support" criminal actions of their governments all the time. This includes the West. Our governments have no problems exterminating, starving and displacing people (as long as they're the "right" people to mess with) while the majority of the population is going on merrily about their business. And Europe keeps buying Russian stuff even now while "Ukrainians are being killed". Where are the mass protests and fights with the police?
Things might change when "messing" with people ALWAYS has consequences for ANY country. But that is not what is happening, and it is unlikely to change. We have no extraterrestrial entity to police us in an impartial way.
It is as classic as your own standard script. I've just explained what is going on. I did not want to single out the US, as it happens everywhere. But if you are so touchy, maybe you should not have "supported" that particular subject. Remind me: what was your punishment?
Americans just love to talk about themselves. Who cares about Russians under Putin's oppression or Ukrainians being exterminated. Let's talk about your government, Bush, Trump and Google.
This is not true. I can assure you that tons of Muslims and people from the middle east also care about the fact that the same actors who gleefully engineered wars on terror that led to a million people dying and entire countries getting devastated, with absolutely 0 consequences for them, are now so very keen to hold other people accountable for illegitimate invasions.
No one likes hypocrisy, especially when it is coming from the same westerners that at most protested for a few weeks back in 2003 when their own countries bombed us for 2 decades, that are now calling for other people to get arrested and possibly tortured/executed by putin's regime because that's just the right thing(tm) to do to stop the war. It would be laughable if it wasn't despicable.
Yes, posting that Wikipedia link isn't a magic way to deflect from the fact that the Iraq war led to a million people dead, and that people are still dying from the war on terror. It's amazing that you just said people in Ukraine are still dying, and that merely saying you don't support your government from the comfort of your couch isn't enough... and then you proceeded to link an article specifically so you could ignore/deflect the deaths that are also happening now — deaths that should be (by your own argument) much more important than any of your own comfort or even liberty.
"Yes hundreds of thousands of Muslims died and are still dying, but bringing it up or asking me to do anything about is fallacious! Checkmate"
As you said, who cares about debate tricks when people in the Middle East are still dying from the war on terror as we speak? Why are you holding other people to standards that you don't even pretend to hold yourself to? You expect people to get arrested to prevent deaths and to speak out about the situation in Ukraine, but I guess making you uncomfortable with "whataboutism" is the limit?
> If you saw people get arrested as soon as they start protesting, what would you do?
Probably move if that's anywhere near a possibility, and if not, cowardly stay as unnoticeable as possible. I know that at least in Finland Russian refugees are mostly welcome (although the border might be closed right now), even if they probably will face a lot of scrutiny from various authorities for obvious reasons. Most certainly it's nothing like the attention such people would face in Russia.
We fondly remember even the smallest acts of defiance that ordinary Germans carried out against their regime during 1933-1945. We would all like to be those people in times of crisis, but obviously most of us are not. Those acts were probably fairly futile in themselves, but put together with everything else done against the Nazis, they played a significant role in the grander scheme. And we know that more than a few significant Jewish scientists and engineers fled Nazi Germany and made significant contributions to the war effort — for instance, one guy called Einstein.
Move where, exactly? I have a few friends in Russia and they sit on their asses because there's nowhere for them to go. You don't exactly qualify as a refugee unless the state is after you (which you can trigger very easily, but then you may not be able to leave the country), and even then it's not a given.
It would probably be a better fit to compare "small actions" to the people of the DDR, who protested against their regime and ultimately helped bring it down.
Every russian citizen pays for death and destruction in Ukraine. With taxes, with national wealth.
What should Russians do, you ask? Fight. I did it in Ukraine in 2004, then in 2014. I didn't run from the cops; I didn't let them take my friends. But regardless, now we pay with our lives, being subjected to genocide because of Russian cowardice.
because so far they are all just paying for the genocide.
Maybe not be "apolitical" for 20 years that led to this?
Russian IT and media workers were showered with relatively high wages to stay quiet while the last semblance of elections was finally destroyed, independent media were taken over by pro-Putin oligarchs, and activists were crushed or murdered.
As a recently coined sarcastic saying goes, "if you are apolitical then bullets don't hit you".
Often, yes. In many places we're even wary of using US-based services at all. The EU has been having a bit of a back and forth with the US and many companies, because EU law prohibits foreign states from gaining access to personal information, whereas US law requires foreign personal information transfers to CC the NSA on request. There are some big legislative deadlocks where American companies simply cannot operate fully in countries other than the US, because US law requires wild violations of the basic privacy rights of anyone who isn't a US citizen.
Lots of things have to be cleared for backdoors. Intel and AMD are scary with their built-in management processors (Intel's ME and AMD's equivalent, the PSP), and proprietary hardware in general is very scary outside the US due to surveillance and possible backdoors; it's kind of weird. Same goes for China, though they don't surveil foreigners quite as hard, at least here they don't. It's not exactly an ideal situation, and I think there should be international agreements on stuff like this to keep the US and China out of our devices, or to allow them to kindly fuck off entirely.
Not exactly the same kind of taint though, your products aren't as much morally tainted as they are simply dangerous to use, like little telescreens you have to carry around.
Yeah. US developers might create great software, but due to the CLOUD Act, Patriot Act and whatnot, you don't want to use it for anything that's not public data in the first place. It's just not protected against unauthorized access.
Underrated comment, considering the history of NATO expansion, color revolutions and the hundreds of thousands killed in pursuit of reckless ideological overt or covert warfare.
Probably about the same share as the USSR's. They were just open about it. Also, the crucial word here is "former". There is a big difference between being a former fascist state and carrying on an ongoing genocide.
Did they bias it toward ru propaganda talking points?
Edit: I would like to see more details about the training data, in addition to its size and languages (en, ru). For example, did they use their own Yandex.News (a cesspool of propaganda)?
You've made a version of this comment 3 times in this thread now. It's shallow and flamebaity, and the repetition just adds noise and does no good, so please don't keep doing that. I understand the strong feelings, but the rules still apply—in fact that's when they apply most.
Thanks for the reminder; I deleted the other two comments, which were more flamebaity. My overall point still stands: they did not give any details about the training data other than its size. This is crucial (I train LLMs for a living).
You look at OpenAI and how they don't release their models mainly because they fear "bad people" will use them for "bad stuff." This is the trend in the west. Technology is too powerful, we must control it! Russia is like... Hey, we are the bad guys you're talking about so who are we keeping this technology from? The west has bigger language models than we do, so who cares. Also their attitude to copyright and patents, etc. They don't care because that's not how their economy makes money. Cory Doctorow's end of general purpose computing[1] and locked down everything is very fast approaching. I'm glad the Russians are around and aren't very interested in that project.
[1]https://csclub.uwaterloo.ca/resources/tech-talks/cory-doctor...