Let me tell you the story of how I got involved in the Perl 6 community.
I was trying to find out how Perl 6 was progressing, and found a broken link on a related website (pugscode.org, now defunct). So I went into the #perl6 IRC channel to report it.
Within three minutes, Audrey Tang (who would become a minister without portfolio in Taiwan that October) had committed a fix, somehow found an email address of mine, and sent me a commit bit to the central SVN repo that contained the Pugs project (the Perl 6 implementation in Haskell she was working on), the source to various websites, the Perl 6 test suite, and a lot of other stuff around the ecosystem. Just because I reported a broken link.
It was the most inclusive community I've ever experienced.
They had written a custom web app to manage email-based invites to an SVN repo, where everybody with commit access could invite somebody else.
Today, we try to carry on her inclusiveness, and give just about everybody who wants write access to all repos of the perl6 organization on github. We have a github team with more than 220 members and about 40 repos, including the Perl 6 design documents, the official test suite, the perl6.org website and the official docs. I haven't heard of a single instance of vandalism or other misuse of the trust.
Want to contribute to Perl 6? Just tell me your github username!
I think the most interesting part of your story is that Taiwan now has a minister "placed in charge of helping government agencies communicate policy goals and managing information published by the government, both via digital means" who actually knows about computers! Something that seems as if it would never happen here.
Yeah, I'd be very surprised if anyone in congress could even spell IRC, let alone be an active contributing member of a programming community. It's a shame that the people who are making the biggest decisions about the legality and future of all this stuff largely have no idea what they're dealing with or have any clue how any of it works.
But that's true for literally every industry. Congress passes laws about healthcare even though very few of them are doctors. They pass laws about oil drilling and I doubt many of them have training in that. Etc, etc.
Congress is incompetent at everything. It's just that you only notice when they're talking about something you know about.
I remember you suddenly showing up with huge PRs containing massive improvements to the OTRS code base. Although I don't use or contribute to OTRS that much anymore due to the increasing "Open Core" business model around it, your contributions taught me a lot and significantly improved my Perl and OTRS experience.
Similar here: At Nextcloud we are also radically inclusive and have never seen a case of vandalism.
Granted, there were a few occurrences of accidental pushes to master here and there. But that's where git itself helps, and the recent addition of protected branches on Github is also very useful. :)
If the community is open and these artificial barriers to entry are removed, then people are way more likely to get involved.
I don't want to sound too negative, but doesn't this look like they were so in need of volunteers that they risked the repo's integrity by giving total strangers commit access, out of a lack of commitment from the real core team?
It almost sounds like perl6 is a publicly writeable repo.
What do you think the risk is? That some sociopath gains access to the perl6 repo and replaces all the code with a shellscript that "sudo rm -rf /"'s?
Sure, that's possible, but good contributors will vastly outnumber bad contributors, and such a person's commit access would be revoked instantly. There's still code review for everything, and such a change would not go unnoticed.
Just look at Wikipedia. That's the equivalent of a publicly writeable repo, and it hasn't turned into a total shitshow as a result.
As the Perl 6 ecosystem demonstrates you get way more out of inclusiveness than being paranoid about some hypothetical bad actor getting access to your repo.
I don't think there is nearly as much code review in open source projects as people would like to think.
Then there is the problem of churn if the new committers get confident and rewrite minor things, which is quite taxing on the existing committers.
Wikipedia is a good example that a small number of people either have to guard their territory ferociously or else have everything rewritten.
Now, sometimes a rewrite is in order, but in general churn is a major time sink for longer term committers and in fact one of the reasons they ultimately leave.
> That some sociopath gains access to the perl6 repo and replaces all the code with a shellscript that "sudo rm -rf /"'s?
No, that some sociopath gains access to the perl6 repo and introduces subtle, plausibly deniable backdoors.
I have looked at Wikipedia. I saw that obvious vandalism, like inserting "HAHA PENIS" in the middle of an article about — I don't know — US presidential candidates, is detected quickly; sometimes it is even caught by filters before it can be saved. Fabricating citations about a fictitious Bhutanese author[1] or Aboriginal deity[2], on the other hand, can go unnoticed for months. This is exacerbated by the fact that your typical "counter-vandalism patrol" usually looks only for the former; even though most hoaxes aren't even especially sophisticated and fall apart after 15 minutes of research. But who can blame those people? It's the easiest way to score virtue points.
I don't know why some people are so okay with the most popular reference work of today being developed by a process basically equivalent to Twitch Plays Pokémon. It doesn't even work for the latter... and yet here we are.
The thing that your post misses, and that most people who criticize Wikipedia miss, is that all stuff written anywhere could be wrong or malicious.
Any book, any blog, anything could be wrong by accident or on purpose. Wikipedia is open enough that we can investigate it and readily determine who the malicious or incorrect actors are. Which is far more than any book from antiquity, most blogs, and even much professional journalism provide.
Should we trust wikipedia implicitly? Of course not.
Should we trust any single source implicitly? Of course not.
When you view any page's history, all you get is a list of pseudonyms and IP addresses. Sure, you can probably get the country of origin and sometimes maybe even the employer from the latter, but that's basically it. I don't think that's very much, given that nobody on Wikipedia is required to disclose who they are in real life (there's even a policy specifically against posting such information about others[1]). Well, okay, in theory users with a financial conflict of interest in a topic are required to disclose it, but in practice PR agencies can often get away with doing anything to articles about their clients.
And openness in itself doesn't guarantee reliability anyway. OpenSSL being open-source didn't prevent Heartbleed from happening. The so-called "Linus's law" is often a mere vacuous truth; there are rarely enough eyeballs to make even the most obvious bugs shallow. And eyeballs themselves don't help if they keep looking in the wrong places.
The problem with Wikipedia isn't merely that it may be sometimes inaccurate. It is that its sources of inaccuracies are much more numerous, most rather predictable, and yet next to nothing is done to address that. A book written by a single author may also be inaccurate; but I'd be more inclined to believe a book written by someone 0) whose good reputation is at stake and who therefore has a disincentive to actively misinform (sometimes at least), and 1) who has an editor above them watching out for inconsistencies; rather than a bunch of pseudonyms and numbers that can immediately publish anything they think up with basically no oversight. Trustworthiness is a spectrum, and Wikipedia does significantly worse on it than many other sources. Review processes on Wikipedia are overworked, and even those focus mostly on style rather than substance. There are some relatively sensible policies enacted, but they are quite vague (and therefore bendable to the ever-changing whims of particular users) and selectively enforced. And don't get me started on the dispute resolution processes...
There is no systematic, consistently applied process to ensure Wikipedia's accuracy. It is addressed just like the gun problem is in the US; it proceeds from one outrageous incident to another[2], with everyone hastily applying ad-hoc measures after the fact, while the people who do have the power to make a real difference are either in complete denial that this is a problem at all or are too attached to the idea that the foundational laws can never ever be wrong to actually enact meaningful reforms.
There are new ways to make inaccuracies but there are also new ways to weed them out.
There are enforced rules about citing sources, the history of any page can be checked, there is discussion for just about any page, and more controls that don't exist on any book or news site. Errors are often fixed as soon as they are discovered, which is impossible for books and less common than it ought to be for web pages.
As for anonymity, the author of most content may as well be anonymous. Except for the most famous authors or the most learned readers, the author of any given piece of content is generally not of interest for practical use. For the domain expert this is of course not true, but the domain expert is generally publishing to a peer reviewed journal and reading from similar sources, with an extreme eye on things like bias and sample size.
There are articles with no citations that have been lying there accumulating dust for almost ten years.[0] I wouldn't call that "enforced". And when you bring up the verifiability policy at deletion debates, sooner or later someone will inevitably say "the policy says the article has to be verifiABLE, not verifiED" or other such nonsense. Bring in enough users like that and the only sort-of-working cleanup process on Wikipedia grinds to a halt.
And let's not forget that while verifiability policies require sources to be "reliable", this word isn't defined anywhere. This makes the definition bendable to the whims of the users at hand, as long as nobody else notices. The result is predictable.[1]
> the history of any page can be checked
Giving very little useful information, as I wrote before. And what does it matter if it can be checked? Who actually does it?
> there is discussion for just about any page
Oh yeah. Like the over 40 thousand words written over whether to capitalise a preposition, which only stopped after Randall Munroe drew a comic calling it out as an embarrassment to the project.[2] (And I don't even like the guy.) And no, it's not just an isolated example. Disputes like this arise all the time; Wikipedians have dutifully collected them into a list[3], from which of course they refuse to draw any conclusions.
The discussion pages are often a liability rather than an advantage. It's easy to understand why those prolonged discussions happen: one, because they tend to concentrate on colour-of-the-bike-shed issues like this, and two, because nobody has a final say on any issue. You win discussions on Wikipedia usually 0) by inviting your friends to support you, 1) by making passive-aggressive remarks just stinging enough to annoy your opponents, but not enough to constitute overt, actionable personal attacks, and 2) by playing silly word-games with policies to twist them to your will. (Of course, rules do exist against all three, but see point 2.) This shall render your opponents outnumbered and/or too tired to argue any further, at which point you just wait for an administrator to declare consensus in your favour. Of course, it is easy to imagine what kind of people can afford to employ these tactics effectively: PR agencies and OCD sufferers with lots of free time.
> Errors are often fixed as soon as they are discovered
Which can be as late as 10 years after being introduced, and after careless writers have incorporated those errors into their own work, as happened with Jar'Edo Wens. Counter-vandalism patrol only looks for overt hooligans, not fraudsters. Traditional publishing, meanwhile, makes errors harder to introduce in the first place.
> As for anonymity the author of most content may as well be anonymous. Other than the most famous authors or most learned readers the author of any given data is generally not of interest for practical use.
So page histories are useless after all? Is it not true that "we can investigate it and readily determine who the malicious or incorrect actors are"?
> It cites 240 sources, including peer reviewed journal Nature and panel discussions of specific domain experts.
I think you've mentioned sample sizes somewhere in your comment... didn't you? What does one article being ostensibly well-referenced prove? Can you trust the article to be actually supported by those sources? Never mind that the single Nature study you're no doubt referring to used rather questionable methodology and is seriously outdated at this point.
Well, we've been doing this since 2007 at least, likely longer, and haven't detected a single instance of such behavior. That is a data point.
Also, if you don't want to sound too negative, maybe just leave out "from lack of commitment from the real core team". Your question makes sense without it, and that's what comprises much of the negativity in your comment.
Ha, same thing happened to me with the SAME contributor!! Benjamin Bach is an awesome dude. He helped me maintain django-dbbackup for quite some time, then we found a third contributor, Anthony Monthe, who is also very interested in the project and I would say owns it now. It's been maintained by those two for quite some time. I wish there was a way for me to buy them both a bunch of rounds of beer. :D
He's a personal friend, so I can buy him a beer on your behalf ;). I think he makes it work because he works part time and spends the rest of his time on open source projects. He could easily be earning far more if he worked full time, but this is what he prioritises.
Benjamin Bach seems to be awesome; however, there are few people like him. Chances are your contributor won't be a Benjamin Bach. If the majority were like him, I'm sure we would have far fewer abandoned open source projects, but the majority isn't like him. Expect the worst, with only a slight possibility of things turning out like what the author described.
Haha, I'm so glad you shared that, that makes the story even better! I don't know Benjamin beyond our short interaction dealing with moving the repo around, but yeah he seems like an awesome dude.
I did the same thing..... and then the contributor took contributions from others that added security holes, removed cross platform support, and had what I consider to be low code quality. I've now disowned that project.
Therefore, I'm extremely hesitant to hand out "commit bit"s again. To make up for it I try and review PRs the day of submission; even then I don't get a lot of contributions.
There seems to be an inherent trade-off between security+quality and how welcoming you are to new contributors.
On the other hand, I have taken over maintainership of some projects, but only after months of steady contributions.
It's always a delicate balance, as there are obviously a fair number of people who don't know what they're doing (yet, if they're learning), as well as some malicious people.
There are ways to deal with that, e.g. force small patches that solve well-identified problems; use CI to test; involve the project's users in testing; and above all, have other maintainers in the project who can catch such behavior and help you fix it. By yourself, you're rather vulnerable.
Ideally you have a core of 2-4 people you trust from previous projects, and then you can bring unknown people in little by little.
There is no tradeoff as such, but you do need to be more sophisticated than "here are full commit rights."
I'm not the parent, but I've been burned in the same way. In an ideal world, yes - that would be the right way to handle the situation. In a company you'd want to do that, and probably give the programmer a mentor who can do code reviews and whatnot.
Unfortunately:
- That's a complicated conversation to have at the best of times. With volunteers over the internet it's much harder to organise 1-on-1s.
- It's a huge time sink. Usually I could make the changes myself in less time than it would take me to explain them to a junior developer. It's especially hard to justify for projects I have in maintenance mode, where I don't want to spend time working on project features at all. Long term the time investment will hopefully pay off, but they'll probably disappear long before then.
- Honestly, it's also not that much fun. I contribute to open source projects because I like making things. Managing people doesn't scratch that itch.
I'm not saying I have a better solution. But faced with a stream of mediocre PRs from an enthusiastic new contributor that you don't have time to review, what do you do? Abandoning the project to the new contributor sounds pretty reasonable to me.
It was a project that I didn't give a lot of love, and hence I just gave them commit rights. It was about a year before I looked again, by which time they had done far too much work on top of their non-cross-platform bits for me to revert.
I still didn't use the project myself; and didn't care enough to fix it, so I just removed myself.
Same thing happened to me: I wrote the module that does Microsoft Office files in Python. At the time, the solutions for doing it in Python were 'talk to some Java library' or 'talk to some .net library'. I read the openxml spec and made something which did basic creation, extracting text, modifying docs.
I totally didn't have time to maintain it, then Scanny took over. It's now hugely popular, the code handles non-Word OpenXML docs, and most of the code has been added or refactored under Scanny.
> I don’t recall the author now, but the gist of the argument made was that we’re too protective of our code - if you give someone responsibility, show that you trust them, more often than not, your intuition about people abusing their freedom is way off.
Maybe it was some of Pieter Hintjens' writing and/or the C4 process:
Felix and I met some time before he wrote that blog post, and I explained our C4 process to him. His pull request hack, which I merged into C4 not long after, is neat, though github doesn't provide the tools to do it easily.
Was going to suggest the same thing :) I haven't had the sort of success that the OP did, but that PR Hack post def inspired my way of operating! Yay Felix
I did something similar with a project I lost interest in several years ago (https://github.com/dbfit/dbfit - table-oriented database unit testing). Just gave over commit rights, and there's a whole bunch of people writing code for it and maintaining it now, still going strong. I occasionally look at the github repo, and always end up amazed how much it's moved on.
We use DbFit every day at our shop. It's really changed how we work (as DB developers). Was funny that I thought of this project right after reading this article, and how I should probably try and contribute, then I see this in the comments. :)
nice :) I built DbFit at a time when I was getting a lot of work helping companies with a huge investment in oracle pl-sql, but my interests moved on, and for a while it just felt I was holding the project back.
People wanted to implement new things and support new databases; I felt overprotective of the design, but didn't have enough time to bring new contributors onboard. So I just kind of gave up, and that allowed me to think about it from a completely different perspective. If I had just deleted it, I wouldn't have cared too much about it any more. Giving away the keys was kind of the same, but without preventing others from contributing. And it worked out great.
Similar story - I wrote KDocker (https://git.girish.in/projects/kdocker). It was stuck to Qt3 and I had no motivation to move it to Qt4 since it was a lot of work. Out of blue, John comes along and does a full Qt4 port. I decided to take a chance and transferred ownership of the project to him. I trusted him only because he had already written a lot of code to improve the project. It turned out to be a great decision (or luck). I just checked how it's doing today and he is still keeping it alive after all these years! (https://github.com/user-none/KDocker). Thanks John!
I was introduced to open source software in 1996 and have been a huge advocate since. About five years ago, I decided to quit my job and, shortly thereafter, went out in search of a project I could contribute to in my spare time (which was now much more abundant).
I quickly discovered that a particular piece of software I used daily wasn't really being maintained all that well, despite the fact that there were several contributors listed on its home page. A few dozen bug reports (some even including patches!) had come in, yet only a few had even been acknowledged.
This seemed like exactly the type of project I was looking for -- something that I used every day and could help improve. I sent an e-mail to the developers stating that I was going to be devoting some time to the project, sending patches in for the bugs, etc., and asking if there was "anything I should know" before jumping in. I spent a lot of time reviewing the previous discussions on the mailing lists, examining their previous commits to see how they did things, and so on.
One of the developers finally replied -- about a week later -- and asked for my SSH key. I sent it to him and he quickly gave me commit access, thanked me for the assistance, yadda yadda.
I closed out umpteen bugs, responded to previously unanswered messages on the mailing lists, and sent patches to the developers list asking for feedback before I committed anything. Having received no responses, eventually I committed all the changes and pushed them into the master branch. Questions that I had to the other developers went unanswered, just like all the previous bug reports and user-submitted patches had.
After a while, I just quit spending any time on it. Perhaps a month after my latest fix, I looked at my e-mail one day to see a flood of notification e-mails about commits. The lead developer had reverted every single commit I had made. I was looking for -- expecting -- some explanation, but never received one, even after I finally asked directly, "WTF did I do wrong!?".
Nothing. No response, to this day. I eventually said "fuck it", deleted my private key, unsubscribed from the lists, and moved on. I just looked and there are still bugs open today that were open back then.
The sad part is that this piece of software is fairly widespread and a part of all of the major Linux distributions, installed by default on many of them. I think there's been one, maybe two, "releases" in the last five years.
You really should tell us what the software is, so we can know to avoid it, or perhaps find your changes, fork the project, reapply them, and then push to have distros switch to the new version.
That's worked well for a number of projects where the developers were jerks and didn't allow other talented and dedicated contributors to participate.
I want to know it too. Maybe one of us knows the dev and could ask why?
Or then someone would at least have the option to fork it and maintain it well.
(Sorry for the bad English, written on a phone. Be patient :-))
Wow, talk about some passive-aggressive behavior on the devs' part! I think there are hazards on projects where nothing seems to be happening, like you described, and others that are fairly active. At least with the active projects, you can usually get a feel for how you might be received by reading the mailing lists. I hope it didn't put you off looking for another project to help.
Yes! Rod Vagg (technical chair of Node.js) is one of the pioneers of this way of developing software. He was recently on the podcast "Request For Commits" discussing this very topic!
It was unmaintained on SourceForge for years. Some guys put the code on GitHub and started fixing its problems. I became involved after them (to help on the OSX side initially), then contacted the original author. He was happy to see some people continuing its development, so he gave us admin on the SourceForge project. That let us cleanly redirect people to the new GitHub project, and it's been growing decently from there (~4,600 stars on GitHub, 3.5Mill+ downloads, etc).
What a small world, I worked with Greg at CashStar where he started the fork of django-money. I still remember the conversation we had in a meeting if we should fork the project or push our PR upstream to fix the issues we found. I'm glad our upstream PR started a trend to keep the library going. This is why I like open source, when you no longer need something you can hand it off to someone else who can take over, and let the code live on.
I did this for a (not very popular) Eclipse IDE autocomplete set for a certain Lua-based game engine.
I updated to 0.9.2d and forgot about it (stopped using the engine), and someone came in with a PR for 0.10.0. I made a couple of notes on the PR, and gave the guy commit access.
He cleaned it up, and has been rebuilding occasionally when new minor versions of the API definitions come in.
The vast majority of people are pretty nice. They aren't assholes. They're just people who have the same issues that led you to build the project in the first place, and who want to help.
Yep, something similar happened to me - I released what is now https://github.com/punker76/gong-wpf-dragdrop on google code in 2009 (I think) then pretty much lost interest. Forward on a few years and @punker76 had uploaded it to github, and continued working on it. It was a strange feeling when I first became aware of the fork - like seeing an inanimate object come to life.
Great story. My similar experience was in the mid 1990s. I had written a tool called PicWeb Viewer that would recursively walk directories and subdirectories containing photos, create thumbnail images, and generate simple HTML to navigate everything. My code was very simple but worked fine. I released the source code, and over the next few years several people contributed modifications that made my simple bit of code awesome.
I've been using the Ratpoison window manager for more than ten years now, and I love it a lot.
Back in like 2003, I wanted to check out the CVS repository, probably because of some new feature, but the read-only CVS server at Savannah or whatever was down... so I asked in #ratpoison if someone could send me a tarball.
Instead, one of the authors and maintainers just gave me an account on the commit server, mostly out of laziness I think.
But it felt nice to be trusted. Loyal user ever since.
I offered to take over a library that provides a Pythonic interface for Redis a little while ago. The original author accepted, and I've really enjoyed bringing the project back to life.
I'll probably do something like that again - it would be great if there were a good list of projects that are looking for maintainers.
The Jazzband is a Python-focused group based around similar ideas: People can transfer their projects to the group, and anyone in the group can commit stuff and make releases.
I had submitted several PRs to a Zip streaming library. The author gave me direct commit access after this. I've helped maintain it ever since. I think it can really work out in some cases.
Amazing story! I maintain a network library on iOS where we have automated this process: a merged pull request earns the author an automatic invitation to join the project. The code is at https://github.com/Moya/Aeryn if anyone is interested.
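(The core of such automation is a single REST call. This is not Aeryn's actual code -- that's linked above -- just an illustrative libcurl sketch against GitHub's add-collaborator endpoint; the function name and user-agent string are made up.)

    #include <curl/curl.h>
    #include <string>

    // Hypothetical sketch: after a PR is merged, invite its author as a
    // collaborator. On GitHub, PUT /repos/{owner}/{repo}/collaborators/{user}
    // sends a repository invitation.
    bool InvitePrAuthor(const std::string& repo, const std::string& user,
                        const std::string& token)
    {
        CURL* curl = curl_easy_init();
        if (!curl)
            return false;
        const std::string url =
            "https://api.github.com/repos/" + repo + "/collaborators/" + user;
        curl_slist* headers = nullptr;
        headers = curl_slist_append(headers,
                                    ("Authorization: token " + token).c_str());
        curl_easy_setopt(curl, CURLOPT_URL, url.c_str());
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "PUT");
        curl_easy_setopt(curl, CURLOPT_HTTPHEADER, headers);
        curl_easy_setopt(curl, CURLOPT_USERAGENT, "invite-bot");  // GitHub wants a UA
        const CURLcode res = curl_easy_perform(curl);  // HTTP status check omitted
        curl_slist_free_all(headers);
        curl_easy_cleanup(curl);
        return res == CURLE_OK;
    }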
I have an EMF+ library I started the barebones of in LibreOffice. [1]
Largely due to my depression and desperate need to get a job, I never got much past an initial read of the EMF+ files, but I'd love to see someone take my branch on git, split it into its own library and then work with someone to integrate it into the Drawing Layer. [2]
Oh, and to be honest it shouldn't be that hard to work on.
It basically works via a Command pattern. That's because an EMF is a binary file that consists of a bunch of sequential records. The way it works is that you get an EMF+ file, and you read it from start to finish. Each record you read has a consistent header, and each different type of record does something - or stores an object (such as a pen or font).
My big idea is to use a base EmfAction class with a Read() function that each type of record implements. You use EmfAction::ReadEmfAction() to read the next record from an ifstream; it moves the file pointer to the start of the next record, and you just loop through the file like this:
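Something roughly like this (a sketch from memory rather than actual code from the branch; the smart pointers, the wrapper function, and the error handling are illustrative assumptions):

    #include <fstream>
    #include <memory>
    #include <vector>

    class EmfAction {
    public:
        virtual ~EmfAction() = default;
        // Reads one record's header, dispatches on the record type, and
        // leaves the stream positioned at the start of the next record.
        static std::unique_ptr<EmfAction> ReadEmfAction(std::ifstream& in);
    };

    void ReadEmfPlusFile(const char* path)  // illustrative wrapper
    {
        std::ifstream in(path, std::ios::binary);
        std::vector<std::unique_ptr<EmfAction>> actions;
        while (in)
        {
            auto action = EmfAction::ReadEmfAction(in);
            if (!action)
                break;  // truncated or unrecognised record
            actions.push_back(std::move(action));
        }
        // The collected actions can later be replayed against a device context.
    }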
The only snag was how to set state on the DeviceContext - I want the commands to read and store the state, and the DeviceContext should be passed into the command object and be manipulated by it via an Execute(DeviceContext&). But at the time I concentrated on how to read the objects.
Of course, EMF+ has a way of saving a DC's state, allowing the DC to be modified by subsequent records and then restored, so a stack of DCs needs to be established; but how to do this I'm not rightly sure yet.
I'm pretty certain though that I'm going to need to change the EmfRecord derived command records to start using a constructor where the DeviceContext object is passed in as a pointer.
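One shape this could take (purely a sketch; SaveDcAction, RestoreDcAction, and the State struct are names I'm inventing here, not anything from the branch) is to let the DeviceContext own the stack of saved states, so the Execute(DeviceContext&) signature stays unchanged:

    #include <stack>

    class DeviceContext {
    public:
        void Save() { m_saved.push(m_state); }  // "save DC" record
        void Restore()                          // "restore DC" record
        {
            if (!m_saved.empty()) {
                m_state = m_saved.top();
                m_saved.pop();
            }
        }
    private:
        struct State { /* pen, brush, font, transform, ... */ };
        State m_state;
        std::stack<State> m_saved;
    };

    class EmfAction {
    public:
        virtual ~EmfAction() = default;
        virtual void Execute(DeviceContext& dc) = 0;
    };

    class SaveDcAction : public EmfAction {
    public:
        void Execute(DeviceContext& dc) override { dc.Save(); }
    };

    class RestoreDcAction : public EmfAction {
    public:
        void Execute(DeviceContext& dc) override { dc.Restore(); }
    };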
To offer a different perspective: I did this with a library that's mostly used for short lived hacks, so while there are contributions from time to time, no one sticks with it for long enough to become a real maintainer. So I'm still the one pressing the merge button on new PRs.
A long time ago I wrote PyMouse, which someone added keyboard support to, which became PyUserInput. But then I ended up being the maintainer of the new combined project. So lately I've just been giving contributors commit access. So far the results are OK, but it's not like anyone took over the maintenance.
That's not how it works, though. It's just called luck.
If 90 out of 100 people will make correct commits and not fuck things up in various ways (from inserting a backdoor to just breaking code), then you have a 90% chance that the one person you give commit access to won't do anything wrong.
Are you willing to bet on that 10% bad luck, though? What if it's not 10%? And what do the odds look like when 1 contributor becomes 1,000?
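A quick back-of-the-envelope, assuming for the sake of argument that each contributor independently has a 10% chance of being bad:

    #include <cmath>
    #include <cstdio>

    int main()
    {
        // P(at least one bad committer among N) = 1 - 0.9^N
        const int counts[] = {1, 10, 100, 1000};
        for (int n : counts)
            std::printf("N = %4d -> P(at least one bad) = %.4f\n",
                        n, 1.0 - std::pow(0.9, n));
        return 0;
    }

At N = 10 the chance of at least one bad actor is already about 65%; at N = 1,000 it's a near certainty.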
Whilst this is an interesting story and shows how open source software can work, it leaves me wondering how large corps handle their increasing use of open source software.
Traditional corporate development is very controlled with everyone getting background checks on hiring and code reviews and change control and a load of other stuff around managing what software runs their business.
Into that mix corps are adding loads of software that's developed by people who they have no control over and no knowledge of their motivations, affiliations or skill level. Even if they did a one-time check of all the developers who had contributed to the libraries they use (which would be a huge task), there's nothing to stop any of the maintainers from handing over to someone else the next day.
It's even a change from using things like RHEL where there's at least a contract with another company that you could point at if things go wrong...
I've been involved in the Plone community for a number of years. All you need to do to get commit rights is to sign the copyright assignment and ask. We currently have 400 people with rights, I suspect we lost a few when we migrated from self-hosted SVN to GitHub.
In over ten years of this policy, we've only had one committer act in bad faith.