I send money to 'the third world' a lot and it's extremely easy without having to resort to crypto nonsense. Hearing these implausible points repeated every time this question comes up just seems desperate now.
Seems like at that point exposing each query as an OpenAPI endpoint would achieve pretty much the same thing.
Then again, having GraphQL as the definition for them is probably still not bad; I'll just have to write something that converts them to SQL::Abstract 2 trees once I get around to porting it to TS.
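Something like this is what I have in mind (hypothetical sketch against the classic SQL::Abstract select() API - the GraphQL field, columns, and where clause are made-up examples):

    use SQL::Abstract;

    # Hypothetical mapping: a parsed GraphQL field like
    #   users(where: { status: "active" }) { id name }
    # becomes a select() call.
    my $sqla = SQL::Abstract->new;

    my ($stmt, @bind) = $sqla->select(
        'users',                  # table, from the field name
        [ 'id', 'name' ],         # columns, from the selection set
        { status => 'active' },   # where clause, from the arguments
    );

    # $stmt is "SELECT id, name FROM users WHERE ( status = ? )"
    # @bind is ('active')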
It would be the same thing, except with Benje's approach you're basically using GraphQL as a developer tool to create those endpoints instead of writing code to do it.
And you don't have to write something to convert them to SQL if you're using PostgreSQL, because Benje's already written it for you.
PostGraphile does look like it'll handle basic cases pretty nicely, but I've gone through the docs and didn't find anything like an explanation of what SQL queries it ends up mapping to. Do you happen to know if there's one I missed, or a list of examples of GraphQL + corresponding SQL, or something?
PostGraphile compiles GraphQL directly to SQL. The SQL is "custom" in that it's specific to the GraphQL operation, though naturally it does follow rules. Hasura does the same thing; for example, one of the rules it follows is that it uses `LEFT JOIN LATERAL` between tables (at least on PostgreSQL). Full disclosure: I work for Hasura, so I'm not super familiar with the style of SQL PostGraphile generates, but one thing you can do is just have PostGraphile report back the generated SQL for inspection:
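If I remember the docs right, the SQL logging is driven by the standard DEBUG environment variable; something along these lines (the connection string is just a placeholder):

    # assumption: "postgraphile:postgres" is the debug namespace that logs
    # generated SQL -- double-check the PostGraphile debugging docs
    DEBUG="postgraphile:postgres" postgraphile -c postgres://localhost/mydb

Then every GraphQL operation you run against it should print the SQL it compiled to.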
> PostGraphile compiles GraphQL directly to SQL. The SQL is "custom" in that it's specific to the GraphQL operation
Yes, that's why I said GraphQL and -corresponding- SQL; I was hoping to find something that showed me the SQL for each of half a dozen or a dozen examples ... though the debug option there will let me point the out-of-the-box CLI at a pre-existing database and have a look at as many examples as I like, so that's pretty close to what I was looking for.
Would also be interested to see a bunch of examples of what Hasura generates if you have those to hand (I'm going to poke through the Hasura Community Edition docs but if you have the specific FM to R handy ... :)
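To give a rough sense of the shape while you're digging (illustrative only - this is not the literal SQL either tool emits), a nested query tends to compile to a single statement built from JSON aggregation plus a lateral join:

    -- GraphQL:  { authors { name  posts { title } } }
    SELECT json_agg(json_build_object(
             'name',  a.name,
             'posts', p.posts
           )) AS root
    FROM authors a
    LEFT JOIN LATERAL (
      SELECT coalesce(json_agg(json_build_object('title', t.title)), '[]') AS posts
      FROM posts t
      WHERE t.author_id = a.id
    ) p ON true;

The whole selection set comes back as one JSON document in one round trip, which is the main trick both PostGraphile and Hasura rely on.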
no, those are completely different… a facebook account is an account on a social network that requires you to use your real name or else you get banned. a facebook account links you to your real-life identity and your real-life social circles
a meta account is an account that you can optionally link to their social media accounts, but it isn't required
Because these aren’t dumb headsets that just plug into your PC. These are standalone headsets that handle the whole thing — they aren’t just a display and controllers, they take the role of the PC too.
So, needing an account for these is more comparable to needing an account to use Steam
My device is not signed into any Apple services right now or any Apple IDs. Anything my company needs to deploy gets pushed by MDM or via enterprise-signed applications.
I'm living in Uganda at the moment and have a couple of things to add:
1. VPNs still work fine (I'm using one right now, though it's one I installed on a DigitalOcean instance - it's possible those receiving lots of traffic have now been blocked).
2. This tax is unaffordable for many Ugandans (it would be about 5% of the average wage here). WhatsApp is very popular here and this has impacted everybody I know. Most people I've talked to are using a free VPN (they aren't aware of the risks that carries).
I haven't done much analysis of the blocking yet but I will be doing a more in-depth post about this whole debacle in a few days.
Correct me if I'm wrong, but Uganda has a somewhat reasonable diversity of domestic ISP ASes that are connected to other East African ASes for international connectivity outbound via Kenya and Tanzania. It's not a single government-controlled bottleneck situation (as in Iran, where all ISPs have to be downstream of the government AS, which operates all of the international L2 transport and L3 transit/peering connections).
Have you tried Lantern? It's free up to 500 MB per month and works well in many censoring countries around the world. It has lots of features that make it fast, such as automatically optimizing server selection and using BBR, and it does many things to stay unblocked, including the use of pluggable transports.
Full disclosure: I’m part of the team that builds it.
This is good to know about (and a great idea) - I'll take a look later. The other issue with using a VPN is that people can no longer use the 'social pack', which is a cheap social-media-only data package available on all the networks.
You can also get a real quick exposure from the Modern::Perl module[1], which is by the author of the book mentioned in a sibling reply.
The basic idea is simple though. Use strict. Use warnings. Use new features and best-practices modules (Task::Kensho[2] can help here) when they make sense (e.g. Moose/Moo). Be aware at least of what PBP[3] is, and if you feel so inclined, use Perl::Tidy and/or Perl::Critic (even if only with your own defined subset of rules) to give yourself an idea of what is considered good practice - and when to throw it out for that elegant line or two; just don't forget to comment it so you don't confuse yourself when you see it months/years later.

Basically, if you care about the code you write, you should find yourself gravitating towards writing Modern Perl anyway, and the speed at which it happens is largely determined by your exposure to the Perl community at large.
My own personal set of best practices means almost always using something like Function::Parameters or Kavorka, and possibly Moops.[4]
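For a concrete taste, this is roughly what that stack looks like in practice (a minimal sketch - the add function is a made-up example, assuming Modern::Perl and Function::Parameters straight from CPAN):

    use Modern::Perl;           # strict, warnings, and the feature bundle (say, etc.)
    use Function::Parameters;   # proper signatures via the 'fun' keyword

    # 'fun' gives us real named parameters,
    # instead of unpacking @_ by hand
    fun add ($x, $y) {
        return $x + $y;
    }

    say add(2, 3);    # prints 5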
Not so much out of grace as (I think) that it's fairly heavy for what it is and somewhat convoluted to configure and use.
Mojolicious and Dancer are the more modern equivalents, and both are frameworks in the vein of Ruby's Sinatra or Rails (probably somewhere in between). While you can run them as CGI/FastCGI, nowadays I think most people just run them as standalone pure-Perl web servers (which makes it really easy to access and deal with every part of the request and routing), optionally with a proxying front-end such as Nginx or Apache to speed up static requests or handle SSL (but that's not needed; both are entirely capable of serving SSL-encrypted traffic in pure Perl with a few configuration params set).
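The proxying bit is just the standard reverse-proxy dance; e.g. with Nginx, something like this (a sketch - the app listening on 127.0.0.1:3000 and the server name are assumptions):

    server {
        listen 80;
        server_name example.com;

        location / {
            # forward everything to the standalone Perl web server
            proxy_pass http://127.0.0.1:3000;
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }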
Like the sibling comment, I prefer Mojolicious, and it's simple enough that the landing page for it[1] shows not just one but two fully functional standalone web applications with embedded webservers, one of 5 lines and one of 25 lines (both counts including blank lines). That may not help much with existing Catalyst apps, but I would be hard pressed to recommend Catalyst given how far the state of the art has advanced, but then again I've never really been a Catalyst user, so I may be unaware of a lot of its benefits.
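From memory, the small one is essentially this shape (treat as approximate; the route and text are placeholders):

    # A complete Mojolicious::Lite app with a built-in web server.
    use Mojolicious::Lite;

    get '/' => {text => 'Hello World!'};

    app->start;

Save it as hello.pl and `morbo hello.pl` (or `perl hello.pl daemon`) gives you the standalone server mentioned above.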
Big monolithic web applications, which Catalyst is optimised for, have fallen out of favour. Catalyst helped solve many Perl tooling problems, and it's still a totally viable dev target. However, as the author of the good Catalyst book, I'd probably reach for Mojolicious first in 2018.
The thing I like about Catalyst is the same as the thing I like about Git: it works well for the smallest use case, and it scales well to the largest reasonable case.
In addition to the other comments made here ... Task::Catalyst is right there after Plack and before Template (TT2).
Rather than try to guess at the proper set of modules recommended by a subproject, if there was already a recommendation (i.e. a Task:: module), we tried to re-use that.