I think 'rearchitect' should be interpreted as 'redesign the UI', not as 'start from scratch'. Most big changes seem limited to the UI (and done as part of a Qt -> QML migration). And it looks like they'll do even those piecewise:
> In order to get the release of 4.0 out as fast as possible, we will be porting over many of our less used interface elements and dialogs directly from MuseScore 3. The plan is to gradually replace these with redesigned versions (built in QML) in subsequent releases (4.1, 4.2, etc.).
The original story is by the Dutch magazine 'Vrij Nederland': https://www.vn.nl/trump-twitter-hacked-again/ . Most Dutch news media are currently running the story, including broadcasters NOS and RTL, news sites nu.nl and Tweakers, and (the websites of) the newspapers Volkskrant, Parool and Telegraaf.
That's the case because you speak English. Think about the last time you had to navigate a website in a language you didn't speak (in my case, e.g. Chinese or Russian). I remember being very happy with a few (incomplete) clues in English about where to look and what to expect.
By all means, advocate for making it easy to change the language back to the original. But this stance will decrease accessibility.
In the case of technical websites, you have to know English anyway, because even if there is a broken translation of the documentation, function names, enums and all the cool stuff remain in the original language.
You don't; plenty of developers get around having to know English, as there are other resources for learning.
I think it's a dangerous assumption that everyone knows English. That's especially true in the Spanish-speaking world, where I know developers who can't hold a conversation in English but are good enough to be employable.
Last I heard, KaiOS was working on migrating to Blink (https://news.ycombinator.com/item?id=19012709). Looks like Gecko isn't out of the running yet, which is good news from a browser engine diversity point of view.
I'm curious how this'll work out, especially with Mozilla also making good progress on the new Firefox for Android lately.
In practice, end-users will probably not set up a TOR(-style) network themselves. That means you'll need to do it for them, so they still have to trust you. At that point, just not storing their IP address is probably easier.
There are still advantages to the TOR-style solution (you wouldn't have the ability to track users without resorting to backdooring), but the slowdown and extra complexity are probably not worth it in most situations.
> However, there is an unspoken claim that the gradient update doesn't carry enough information about the user data to reconstruct any of it server-side.
This was a concern for me as well, but the 'Privacy' section of the post addresses this. In short, the algorithm is adapted such that the influence of a single user on the model is limited, and noise is added. I'm not knowledgeable enough on differential privacy to know if that covers all possible privacy attacks, but it looks like a good start.
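For intuition, here's a minimal sketch of those two mitigations, assuming a simple FedAvg-style setup. The function and parameter names (private_aggregate, clip_norm, noise_std) are mine, not from the post, and real differential privacy accounting is more involved than this:

```python
import numpy as np

# Minimal sketch of the two mitigations described above (my names, not the
# post's): clip each user's update, then add noise to the average.

def private_aggregate(user_updates, clip_norm=1.0, noise_std=0.1, seed=0):
    rng = np.random.default_rng(seed)
    clipped = []
    for update in user_updates:
        norm = np.linalg.norm(update)
        # 1. Clipping: scale down any update whose L2 norm exceeds clip_norm,
        #    bounding the influence a single user can have on the model.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    average = np.mean(clipped, axis=0)
    # 2. Noise: Gaussian noise calibrated to the clipping bound masks the
    #    remaining contribution of any individual user.
    return average + rng.normal(0.0, noise_std, size=average.shape)

# Example: three users' (fake) gradient updates, one abnormally large.
updates = [np.array([0.1, -0.2]), np.array([0.0, 0.3]), np.array([50.0, 50.0])]
print(private_aggregate(updates))
```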
Personally, I'm now more worried about adversaries trying to mess up the model. How many clients need to submit fake updates for the training process to never converge? If it's 50% that's probably fine, but I'm afraid a much smaller number of users could already derail the process.
VERY interesting questions! Unfortunately I can't answer them, since I don't know enough about the topic.
To make the literature search easier: your second concern is called "poisoning attacks" and is one of the problems "adversarial machine learning" is concerned with.
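To make the 50% question above concrete, here's a toy simulation I put together (my own construction, not from any paper): clients take one gradient step on a simple quadratic, and some fraction of them sends the negated update:

```python
import numpy as np

# Toy poisoning simulation: honest clients push w towards the minimum of
# f(w) = (w - 1)^2, malicious clients send the negated update.

def simulate(frac_malicious, rounds=200, n_clients=100, lr=0.1):
    w = 5.0
    n_bad = int(frac_malicious * n_clients)
    for _ in range(rounds):
        grad = 2.0 * (w - 1.0)                  # gradient of (w - 1)^2
        honest_update = -lr * grad
        updates = [honest_update] * (n_clients - n_bad) + [-honest_update] * n_bad
        w += float(np.mean(updates))            # plain (unclipped) averaging
    return w

for frac in (0.0, 0.4, 0.5, 0.6):
    print(frac, round(simulate(frac), 3))
# 0.0 and 0.4 still converge to w = 1, 0.5 stalls, 0.6 actively diverges.
```

Of course, this only shows that this particular sign-flipping attack needs a majority under plain, unclipped averaging; a single client submitting one enormous update could already wreck such an average on its own, so the worry about much smaller fractions seems justified for naive aggregation.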
> In short, the algorithm is adapted such that the influence of a single user on the model is limited, and noise is added.
But in any case (added noise or not), the user-provided weight updates improve the model in a certain way. So I suppose that, based on this fact, they inevitably leak information about the user. For example, assume we are training on cat and dog images. Run a test with 1000 validation images of cats and see how many the network gets right. Then add the user-provided updates, and see how many the network gets right. The difference tells us something about the user's images. This doesn't necessarily work in every case, but statistically it could paint a picture.
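Here's a self-contained toy version of that test, using synthetic 'cat' feature vectors and a linear model instead of real images and a network (all names and numbers are made up for illustration):

```python
import numpy as np

# Toy version of the accuracy-difference test described above: measure
# validation accuracy on cats before and after applying one user's update.

rng = np.random.default_rng(0)
w = np.zeros(2)                                    # global linear model: w @ x > 0 means 'cat'
val_cats = rng.normal([2.0, 0.0], 1.0, (1000, 2))  # 1000 validation 'cat' samples

def cat_accuracy(weights):
    return float(np.mean(val_cats @ weights > 0))

# One user's update, computed from their private data (here: all cats).
user_cats = rng.normal([2.0, 0.0], 1.0, (50, 2))
user_update = user_cats.mean(axis=0)               # pushes w towards the 'cat' direction

accuracy_before = cat_accuracy(w)
w = w + 0.1 * user_update
accuracy_after = cat_accuracy(w)
# A large jump in cat accuracy leaks that the user's data contained cats.
print(accuracy_after - accuracy_before)
```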
Your customer service should know how to deal with product warranty. Or should be able to handle a request to cancel an online purchase within 14 days of the order ([1]). How is a request to access your personal data any different?
It's interesting to see how the GDPR seems to clash with some popular data models. For example, git.
Rewriting the history of a shared branch is disastrous, but it's currently the only way to redact, say, an e-mail address someone committed with a couple of years ago. I'm curious how the various code hosting sites plan to handle that. Perhaps we'll see an extension of the data model that links commits to committer UUIDs, with the actual personal information being linked to that, making removal easier.
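Something along these lines, to sketch what I mean (entirely hypothetical, git has no such indirection today):

```python
# Hypothetical data model (not a real git feature): commits carry only an
# opaque committer UUID; a separate, mutable table holds the personal data.

commits = {
    "a1b2c3d": {
        "message": "Fix typo",
        # Only this opaque ID is hashed into the commit object.
        "committer": "123e4567-e89b-12d3-a456-426614174000",
    },
}

# Side table, stored outside the object database, so NOT part of any hash.
identities = {
    "123e4567-e89b-12d3-a456-426614174000": {
        "name": "Jane Doe",
        "email": "jane@example.com",
    },
}

def redact(uuid):
    # An erasure request rewrites the side table; commit hashes stay stable
    # and no shared history needs to be rewritten.
    identities[uuid] = {"name": "[redacted]", "email": "[redacted]"}

redact("123e4567-e89b-12d3-a456-426614174000")
```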
Apparently Git is fine under the GDPR, as data subjects do not have the right to erasure if the information is meant for archiving purposes in the public interest [1].
> Paragraphs 1 and 2 shall not apply to the extent that processing is necessary:
> [...]
> (d) for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes in accordance with Article 89(1) in so far as the right referred to in paragraph 1 is likely to render impossible or *seriously impair* the achievement of the objectives of that processing;
(emphasis mine)
I wouldn't say redacting a git repository 'seriously impairs' processing for archiving purposes. All the data (with the exception of the redacted e-mail address) is still there, after all.
Still, the hashes will have changed, making the repo less useful for current users. But that has nothing to do with archival.
> [...] where processing is necessary for the performance of a task carried out in the public interest [...] the processing should have a basis in Union or Member State law.
I don't think that purpose of archiving has a basis in law.
That said, I do remember my law professor calling the 'right to be forgotten' one of the weaker parts of the GDPR, and I'm not an expert, so it's possible I'm missing something.
And it's also protected by the right of freedom of speech: the entity operating the git server has the right to inform the public of who committed which changes. The GDPR explicitly recognizes "exercising the right of freedom of expression and information", although I'm not sure how European courts would interpret this provision. But for an American entity without a physical presence or assets in Europe, any EU judgment would be quickly quashed by American courts.
And Facebook has the right to 'inform' another company about all data it collected from its users (only in exchange for a nice sum of money!)
Except in the EU, freedom of speech and privacy are both considered human rights, which need to be weighed against each other. Freedom of speech will win when someone uses the GDPR to try to censor e.g. an online news article with some personal facts. But it won't for my Facebook tongue-in-cheek example, and I doubt it will for the redacted committer example either.
How would the judgment be quashed by American courts? No American court has jurisdiction over European courts. For an entity without presence or business in Europe, enforcing a European court decision might be a problem, but that's a different matter. I'm sure the EU will find a way if the sum is sufficiently high.
Sorry, I was sloppy in saying that the judgment itself would be quashed. What I meant is that any attempt to enforce the judgment would be quashed. Since (by assumption) the defendant doesn't have any assets in Europe to pay the fine, enforcing the judgment would require going after the defendant's assets located in the US. American courts will typically enforce foreign judgments from 'friendly' jurisdictions, but if the judgment is incompatible with American law, American courts will quash any attempt to enforce it in the US.
Of course, Facebook and other large American corporations can be expected to comply with GDPR, since the cost of compliance is much less than the opportunity cost of being excluded from the EU market.
Sure, but these are two entirely different matters. Having an open, unenforced judgment against you might lead to complications, for example if the defendant happens to travel to the EU. It's unlikely that a minor fine will be enforced by snatching the defendant at the airport, but it could at least legally be an option.
Or the defendant may later open up a German subsidiary or plan on selling to a company with a German subsidiary. Things would get complicated in those cases.
So it's important to be somewhat precise here: no enforcement doesn't equal a quashed judgment.
There's an exception 'for the establishment, exercise or defence of legal claims', but there are situations where that would not apply. E.g. commits fixing a single spelling mistake are probably not copyrightable.
Also, I doubt you can just keep a copy of all data you ever process, just because it might some day be useful as legal evidence.
Why would you say that? If you can get sued for a piece of code written 30 years ago, then it seems legitimate to me to store legal evidence for at least 30 years. As far as I know there is no time limit to being sued over something.
That makes sense for repository users keeping a private copy.
But, I was thinking more about companies like Github. If they can hide behind that clause for every single repo they host, the GDPR as a whole becomes useless. Pretty much everything could serve as evidence one day. As far as I know, judges don't like 'hacks' like that.
Also, code hosting platforms argue they are service providers and should not be liable for copyright infringement as long as they apply notice and takedown.
There's a middle road, you know: being bilingual. To run with the Icelandic example, you'll be hard pressed to find someone who speaks it but not English.
Where I live, there's also a local language (Frisian) spoken beside the 'bigger' language (Dutch). And here too, lots of people speak English. Or if not, you might get lucky with German.
A generation ago, Dutch minority dialects & languages were really discouraged and some were nearly killed. Kids were disciplined in school for using them. A generation before that, British accents/dialects^ were actively suppressed to give way to standard (King's) English.
I think there were almost no advantages to this dialect killing.
Small minority dialects are in danger of dying out, not of crowding out the bigger languages. This is how all of Europe worked for hundreds of years. My grandfather grew up speaking Yiddish (a German dialect) at home, Czech in town & Hungarian at school. These are completely unrelated and individually difficult languages. He later went to college in German & Latin, and later on in English. He spoke 10 languages in total, most fluently. I knew him in a language that he learned in his 40s. This was normal in his day. They weren't afraid of languages then.
Anyway... if your "home" language is a tiny, local one, there is no danger that you will be monolingual in a commercially useless language. You will speak a big language too. Speaking 2 makes the 3rd one easier to learn.
^more on the dialect end of the spectrum than most people realize.
The US culture/society also tends to kill off non-English languages. Spanish is large enough and with a 'renewable' resource of Spanish speakers from other parts of the Americas that it manages to survive, but other languages do not fare as well.
I don't think that's quite right. Chinese is a notable counterexample in a lot of places, as well as Vietnamese and Tagalog and some others. But Spanish definitely has a more universal geographic distribution in the U.S. than any other non-English language.
There's enough immigration that other languages can get 'renewed' too, but the general pattern is for languages to die off by the second generation. This is in stark contrast to what happens, say, in India, where speakers maintain ancestral languages much better even when they have been resident for many generations in an area which employs a different language. It seems to come down to the cultural handling of mono- vs multi-lingualism.
But is it a sustainable road? French-Canadians have been complaining for many years that Montreal's "bilingual" neighborhoods become English-speaking in a decade or so, and "French-speaking" neighborhoods become bilingual. And this is despite extensive government efforts to encourage or even enforce the use of French. You can see the same trends in Catalonia, Wales, Ireland, etc.
Where the government intervenes in the opposite direction, the transition can be much more rapid. Visiting Strasbourg (in Alsace, France), people's surnames, street and place names, and the local cuisine are all German, but nobody speaks a word of it. It was amazing (and slightly depressing) to see how in 2-3 generations a city could forget the language it spoke for nigh-on 1,500 years.
The way things are going, I wouldn't be surprised if Dutch were considered a dying language 50 years from now.
Strasbourg was French from 1681, then German in 1871, then French in 1918. Then only briefly German in WWII.
So it wasn't German very long, only 50 years.
As for Montreal, it will be interesting to see how the city evolves. I notice more English in my neighbourhood than when I moved in (8 years ago). But, there is also more French in the old anglo neighbourhoods of the west.
One factor is that a lot of "allophones" are perfectly fluent in both English and French. When you add in the francophone tendency to switch to English when dealing with anyone who shows even a whiff of not being a native francophone, a lot of francophone majority neighbourhoods may see English conversation.
(I'm perfectly fluent and speak French in public. But for the life of me I can't get my francophone friends to speak French. I think they all want to practice their English. Also, the Quebecois that care about Anglicization probably don't move to urban Montreal.)
“Germany” hasn’t been a country very long, only since 1871, but an identifiable German culture has existed since Roman times. German (or precursors) was spoken in Alsace for well over a millennium before declining and dying out in the 19th and 20th centuries.
>Visiting Strasbourg (in Alsace, France), people's surnames, street and place names, and the local cuisine are all German, but nobody speaks a word of it.
Maybe they don't speak it normally, but I'm pretty sure a lot of people there can speak German. You can literally take a city bus (or walk) over the border into the German town of Kehl. And everyone speaks German there. There are people who work in Strasbourg and live in Kehl, or vice versa.
The Alsatian dialect was always more of a rural thing. Both the French and the Nazis suppressed it, so unfortunately it's pretty rare these days. Still, according to Wikipedia, 43% of adults in Alsace could speak it in 1999. [1]
Sure, there's more interest in Welsh in recent decades, but I strongly suspect it will resume declining if it hasn't already.
I used Welsh, Irish, Catalan and French-in-Quebec as examples because they're all languages that declined (to varying degrees) over several centuries under (varying degrees of) government suppression, experienced a partial rebound in interest and popularity in the 20th century after government policy was changed to encourage their use, but ultimately returned to the same trend of declining usage.
I don't think so, or the language would have died out already. I'd argue it's more about whether children learn it as their primary language.
Also, because of concerns about linguistic extinction, similar to the ones mentioned for Icelandic in the article, you see somewhat of a counter-movement as well. This has led to e.g. a special legal status for the language (see e.g. [1]), and also:
* it's a mandatory school subject
* you are entitled to using it in government interactions, e.g. in court
* a small part of the public television is in Frisian, and there's also a regional tv/radio channel using it exclusively
* place name signs are often at least bilingual, and sometimes Frisian-only.
> I'd argue it's more about whether children learn it as their primary language.
I'd say even that can't accomplish much by itself, which is why the other things you listed are so important. A lot of Mexican-American kids in my hometown spoke exclusively Spanish at home, and now as adults, if you ask them if they can speak Spanish, most of them say "not really." They're embarrassed if they have to speak Spanish with someone from Mexico who has an adult vocabulary and a sophisticated grasp of the language. When your education, media, and social life are entirely in English, you end up so much more capable in English that that's all you want to speak. Even with their friends who also spoke Spanish at home, if the context of the conversation was their English-language schoolwork or an American band or an American TV show, it was more natural to talk about it in English; if there was somebody present they didn't know, it was safe to assume they spoke English; et cetera.
The local language is likely to have a lot of associations and nuance. Common languages can lead to cultural collapse; the world becomes less rich. "Chinglish" (bad translations) is one extreme outcome of this, but it's not always funny or trivial.
> In order to get the release of 4.0 out as fast as possible, we will be porting over many of our less used interface elements and dialogs directly from MuseScore 3. The plan is to gradually replace these with redesigned versions (built in QML) in subsequent releases (4.1, 4.2, etc.).
That doesn't sound unreasonable to me.