The Railsification of SaaS (keithwhor.medium.com)
149 points by keithwhor on March 9, 2021 | 60 comments



IMO the way we currently do APIs is backwards.

APIs should offer a way to run your own code in a sandbox, close to where the data is. That way, you don't have to worry about fine-grained or coarse-grained requests. All clients make just one request, supplying the code needed to retrieve the data.

Of course, the untrusted code should only be allowed to use a restricted interface, depending on the scopes/permissions.

It would be even better if the code could be statically analyzed, to determine the best SQL queries to perform.

This would require a pretty radical rethinking of our stacks, but would bring great benefits. APIs would be simpler to make, and webpages would load much faster. We could even give third parties access to very sensitive information, as long as their code would be constrained in what it could return. Think of a service that requires KYC/AML verification. Your government could give it access to all your private information, but allow the code to return a single true/false response to preserve privacy. Much better than the current method of sending ID scans everywhere and praying they never get leaked.
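A quick sketch of what I mean in Node.js (every name here is hypothetical, and note that the built-in vm module is not a real security boundary; a serious version would need a proper sandbox, e.g. a WASM runtime):

    const vm = require('node:vm');

    // The only capabilities exposed to untrusted code, gated by the
    // caller's scopes. getUser is a stand-in for a real data accessor.
    function makeRestrictedApi(scopes) {
      return {
        getUser: scopes.includes('user:read')
          ? (id) => ({ id, name: 'Ada', verified: true })
          : () => { throw new Error('missing scope: user:read'); }
      };
    }

    // One request: the client ships code, the server runs it near the
    // data, and only the declared `result` ever leaves the sandbox.
    function handleRequest(clientCode, scopes) {
      const sandbox = { api: makeRestrictedApi(scopes), result: null };
      vm.createContext(sandbox);
      vm.runInContext(clientCode, sandbox, { timeout: 50 }); // cap runtime
      return sandbox.result;
    }

    // The KYC example: the code sees private data but returns only a boolean.
    console.log(handleRequest(
      'result = api.getUser(42).verified === true;',
      ['user:read']
    ));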


What you are describing used to be called "mobile information agents" in the late 90s ;)

Sandboxing technology wasn't really quite ready for that at the time... (not sure if it really is now, though arguably WASM is closer than the JVM was)

https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.54...


I completely forgot about information agent technology. It was supposed to be the Next Really Big Thing but seemed to fizzle out completely once the hype died down. Was it ever used for anything significant?


Nothing I’ve come across, though I feel one could make the case that many of the ideas live on in some shape (Web APIs, serverless/lambda, Ethereum Solidity, service-oriented architectures...).


Should it be symmetric? Code + Data... either on the client or in the cloud. You'd just check which is bigger!


In production apps, the code would probably get transferred to the server at compilation/bundling/release time. The client would only need to send some kind of hash that uniquely identifies the function to call, as well as supply the arguments.
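Something like this, as a sketch (all names hypothetical):

    const crypto = require('node:crypto');

    const registry = new Map(); // hash -> server-side function

    // At compilation/bundling/release time: hash each function and upload it.
    function register(fnSource) {
      const hash = crypto.createHash('sha256').update(fnSource).digest('hex');
      registry.set(hash, new Function('args', fnSource));
      return hash; // the client bundles this hash instead of the code
    }

    // At runtime: the client sends only { hash, args }.
    function call(hash, args) {
      const fn = registry.get(hash);
      if (!fn) throw new Error('unknown function hash');
      return fn(args);
    }

    const h = register('return args.a + args.b;');
    console.log(call(h, { a: 1, b: 2 })); // 3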


I find this trend really interesting, and agree that it's one of the potential futures of personal computing.

I've heard, on HN, people lamenting that an early vision of computing –– people creating their own applications –– has not arisen. Most people, outside of CS, have no idea how computers work.

I think this post, the no-code movement, and products like Zapier show that there is a future for this. People don't need to know how computers work to create their own services – SaaS applications abstract over the implementation details, and APIs provide the interfaces for prosumers to connect them and build their own solutions.

I'd be curious to hear your thoughts on Zapier. How do you think we can go beyond "API connector" products?


API connector products are pretty much ubiquitous and that's the biggest sign, to me, that there needs to be centralization and consolidation in the market. Zapier, Mulesoft, Integromat, IFTTT, Automate.io, [...]. And then every at-scale SaaS product ends up building their own automation platform internally. The list goes on and on. In a lot of ways it's an extremely fragmented market.

Every single one of these products starts out with the same epiphany we had, which is some variation of, "there needs to be an API for APIs...". People keep putting band-aids on the problem when what's really needed is an architectural refactor of how we deliver and connect APIs, full stop. The problem is there's very little market incentive to standardize how API integration works, because it requires the complex coordination of thousands of companies and millions of developers. The barrier to transformation is extremely high, so the catalyzing agent has to be spectacular. A product (or set of products) needs to come along that completely changes the game and creates a real incentive to standardize and adopt a common integration scheme and format.

The worst-case scenario for Autocode, as I see it, is becoming just another integration tool for some specific vertical. What we're striving to deliver is an open ecosystem for integrations that's accessible to non-developers, who are going to derive the most immediate value from it as an introduction to development, but attractive to professionals as a development target as well. That's an extremely difficult balance to achieve, but the rewards are just as large as the problem: technically, the entire industry could be made more efficient at scale.


Hi Keith, I still remember when Autocode was called Stdlib. I liked the old name a lot better :)

For us at Hypermachine, we are tackling the problem by upgrading an existing low-code platform: VB6/VBA. Our hypothesis is that pure no-code platforms are too limiting and inflexible for anything outside of the predefined connectors/widgets/use cases. There is still a large gap between "call any arbitrary API and program" and "build a CRUD tool quickly". It is also a lot easier to convince users to adopt a platform that has been tried and tested for having a low barrier to entry than to try to teach full-stack web development from scratch.


My guess is the devil is in the details. And the more powerful and flexible it is, the more consensus would be needed. Which can slow innovation.

Still, it kind of echoes back to the semantic web. That could be a foundation to empower users and their agents to consume and produce in a more tool-agnostic way.


Check out OpenAPI. It's more than just automatic documentation and mocking/testing.

Clients can write code to the OpenAPI spec and not necessarily worry about underlying changes to the target API unless they are significant (in which case they are likely versioned anyway).
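For example, even a hand-rolled client only needs a spec fragment to resolve a call (sketch; the path and operationId here are made up):

    // A fragment of an OpenAPI spec, inlined for illustration.
    const spec = {
      servers: [{ url: 'https://api.example.com/v1' }],
      paths: { '/users/{id}': { get: { operationId: 'getUser' } } }
    };

    // Resolve an operationId to a concrete request; the client keeps
    // working across non-breaking changes to the spec.
    async function callOperation(operationId, params) {
      for (const [path, methods] of Object.entries(spec.paths)) {
        for (const [method, op] of Object.entries(methods)) {
          if (op.operationId !== operationId) continue;
          const url = spec.servers[0].url +
            path.replace(/\{(\w+)\}/g, (_, k) => params[k]);
          const res = await fetch(url, { method: method.toUpperCase() });
          return res.json();
        }
      }
      throw new Error('unknown operation: ' + operationId);
    }

    // callOperation('getUser', { id: 42 });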


There are a lot of problems with OpenAPI that are more pragmatic than ideological. “If we build to this spec, we get all this for free!” is the dream. It’s not the reality.

You’re a large tech company. Your front-end engineers want to build with GraphQL. Your legacy API is 80% REST with some SOAP endpoints strewn about. Who gets saddled with the job of building to the OpenAPI spec, and why? Where is it creating customer value? You introduce it and then what? What value did you actually unlock from your customer ecosystem? I’m not saying that it doesn’t create value — but which stakeholders can you explain that to, and how long does it take to convince them?

On the other hand, if you start a new company, you’ll find that your API evolves organically. Locking yourself into a specification ahead of time is like writing all the unit and integration tests before a product ever gets in customer hands. Does anybody even want this? Are we even going to expose this API? What if we need to change it?

OpenAPI isn’t bad — quite the opposite, it’s fantastic. When we encounter an API that adheres to the OpenAPI spec it makes our lives a lot easier.

The problem is what I outlined above: for OpenAPI to permeate the market there has to be some sort of incentive so powerful that it spontaneously aligns thousands of companies and millions of developers to accept it as the One True Specification. That’s just never going to happen.

So “standardization” can only come from something that creates an economic incentive so great that everybody agrees to it without hesitation. Which is a Herculean effort; it requires something like a product that’s never existed before, or an entirely new class of web developer. When you start thinking down that path you start seeing through our eyes at Autocode. I’m not saying we have the exact right answer today, but I’m saying this is how we think about the space.


It is hard to make anything significant without that sort of background though. If your business actually is software, and not just a physical business built on top of software, it's hard to see how you're going to make much money by just gluing canned functions together. Either your own ignorance will do you in, or you'll have to hire someone who actually does understand computer science.


To add to that: if your entire company is "an idea + a week of learning and combining Zapier and IFTTT", it has no sustainable business model.

It may be a neat way to get an MVP or demo out. It may even be the seed to grow from. But you will need to grow beyond this really fast.

Everyone can steal your idea now. And if you can learn+build something in zapier in a week, so can anyone: there is no competitive advantage there at all.

Which does not mean that your product is no good, just that there is no business model: the product is basically "free as in beer".

The only business model that I can think of to serve this is one that makes the "marketing/brand" the competitive advantage. Sell a brand, instead of a product.

I, personally, very much dislike these kinds of "products", because I believe they hardly qualify as one.

Ninja-edit: business model, not development model.


Building something customers want is a competitive advantage, whether you do that with Python or JavaScript or a low-code tool is irrelevant unless the problem domain specifically requires said tool.


I believe your statement misses a crucial condition:

> Building something customers want, but cannot or should not build themselves, is a competitive advantage,...

Which is to stress the important part of my comment: it certainly makes sense to build something as quickly as possible to test the market. But if you cannot move beyond that, fast, your business model is risky.

I think in early stages, for example, an "online bike sharing service" that consists of only an Excel sheet and a mailbox is by far preferable to a web app with Stripe integration and a blockchain, microservices, a Kubernetes cluster, or whatever overengineered paradigm is hot this month.

But if your entire "bike sharing service" is as easy as "getting a mailserver and filling an Excel sheet", your business is extremely easy to copy. Anyone with a bigger marketing budget will overtake you in days; anyone with a larger reach can push you out of business just as fast.

At some point, you'll need a competitive advantage that cannot be copied in hours or bought with any of your competitors' spare change. Whether that is a full software suite, Kubernetes-hosted microservices with blockchain integration (I hope not), or just a large, happy customer base matters little.

I'm not saying "don't do no/low-code", I'm only putting forward how business might change when anyone can build an online business in mere hours. I'm not a historian, but I'm certain there are numerous examples of fields and niches changing or disappearing because innovation removed the barrier to entry, so that a business that once took months or huge loans to get off the ground could be copied in hours.


Most of the internet is hastily glued together WordPress pages. People make a lot of money from hastily glued together solutions. In fact, people are making more money from hastily glued together solutions than ever before thanks to companies like Shopify and Stripe, that’s kind of the beauty of it.


A toast to that. Consumers generally don't care about how bad our code is as long as it solves their problem. If there's a price, all the better for it.


Seems like the technical side of vertically integrated SaaS opportunities, which is mostly where things are going. We've more or less built the parts bins. Now the remaining growth areas are in highly specialized, specific verticals. The issue I see is actually more to do with moving the data vs. moving the computation. We currently perform all these integrations by moving the data around, then taking 1% of it to do something useful. I wonder if offering serverless in-situ compute and assembling the results would be easier.


It would be neat to use a sort of reverse homomorphic encryption, where the client could send the server a computation to perform on some data the client has access to, then return the results, but the server cannot see exactly what the computation is, stepwise, so as to protect the client's internal business logic.


There were a bunch of CS theses starting in the late 90s along those lines, some of the general labels used were 'trustless computing' and 'fog computing' or something similar. Other keywords are 'enclave' and 'opaque'. David Brin also promoted 'translucent databases' as a way to allow computations on aggregate data without the possibility of revealing individual rows. Intel has a related technology platform called SGX (Software Guard Extensions) in this space that is aimed at being able to run your own trusted code on infra owned and operated by potentially malicious providers.

On the activist front, Doc Searls has been pushing a marketing/commerce twist on the idea of user-owned and controlled data under the labels 'Project VRM' (Vendor Relationship Management) and 'The Intention Economy' but without much traction.


Fascinating! Do you have any links you could suggest?


Thanks for something truly thought provoking this evening.


My issue with these no-code/low-code things is bootstrapping. I can work on code for free on my computer, and then host it for $5/month on a VPS.

With Autocode or Retool I'm now dealing with $20 or $10/user, or lots more if I need any real usage. That's not throw-away money anymore...


Assuming a single free development account, I'd happily pay $10 or $20 per additional user if it means I can find a concept with decent product-market fit. I can then build it more cheaply myself.

Iteration speed is incredibly valuable. You often only need <10 users to get an idea about how useful your MVP is.


Are you interested in something that lets you host the application yourself once it's built, yet also offers the option to host it on your behalf?


I'm not quite sure what you mean. A hosted option is great - it's just that the free tier seems so limited to me.

I get these guys want to make money - need to make money to support the service. As a "hacker" I want to mess around and try things, and these seem to cater to that... until you see the price.


:)


> Railsification of SaaS

> SaaSification of Rails

> RubyGems of SaaS APIs

If all you have is Rails, the world looks like a Rails analogy.


Hey all, I wrote this to talk a little about how I think about the future of software development in the next decade; I'm happy to chat about it if anybody has any questions or additional thoughts. :)


Do you have a way to migrate Zapier zaps? I feel like since that is the dominant ecosystem, having a way to easily port Zaps would be very beneficial. Building one of these things correctly is significant work, and it's hard to justify the investment in new platforms.


Not currently. We don't see a lot of migration from Zapier, but we do see a lot of net-new development. It's definitely something we've thought about, but we have just as many users using us for general-purpose scripting and webhooks as for API automation.


How do you square this:

> You can’t share a Zapier integration universally.

With then plugging Autocode? How is an Autocode integration any more universal than a Zapier one?

Also, how are you building the code for the various integrations Autocode supports? Is there some underlying interlingua that you codegen from?


I'll show you. Here's an example of an Autocode "integration," or something in our standard library. It's a Halo API I added for fun. You can use the gamertag "citizen kwho" to find me.

https://autocode.com/lib/halo/mcc/

As you can see from the code examples, you just talk to it via an API key and a standard calling convention. It's completely portable. Autocode makes it easier to manage your tokens, code, etc. but you could port this to any Node.js codebase or make an HTTP request from anywhere. We have universal Ruby and Python libraries, too, but they're a little outdated. (Small team, have had to stay focused on Node.js for now.)
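For a flavor of the calling convention in Node.js (the exact method path below is illustrative, not the real Halo API surface; see the link above for that):

    // Illustrative only -- an API key plus a namespaced function call.
    const lib = require('lib')({ token: process.env.STDLIB_SECRET_TOKEN });

    (async () => {
      // Namespaced call: service ("halo"), API ("mcc"), then the method.
      const stats = await lib.halo.mcc.players.stats({
        gamertag: 'citizen kwho'
      });
      console.log(stats);
    })();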

Edit: I'll mention that technically anybody can add to our stdlib. We do a lot of the work ourselves right now, but it's fundamentally an open platform. Also see other comment re: FunctionScript specification and how we add APIs to the stdlib. :)


Is there a place we can see the source for that Halo API binding?


Not specifically for that API. I'd have to spend some time cleaning it up to open source it. But we have a really simple API connector example you can fork [0]. :)

[0] https://autocode.com/app/keith/connector-pokefusion/


Unfortunately Autocode doesn't use GraphQL


You're half-right - we have a specification called FunctionScript [0] that can act as what's basically a universal translation layer for any API. You just generate a proxy to it using FunctionScript. So it's possible to add GraphQL APIs to our stdlib, it's just not convenient right now. All of Shopify is a GQL API, for example.

The reason we don't support GQL APIs natively is because we designed Autocode to follow the best practices of the top API companies we knew: Twilio and Stripe. Both companies designed their SDKs in a way that feels natural: a call to their API is achieved via a namespaced function call -- it's very Rails-like. Everything available in our stdlib follows that convention, as we felt it was the most intuitive approach to API / SDK design.
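To give a flavor, a FunctionScript endpoint is just an exported Node.js function whose comment block becomes a typed HTTP interface (simplified sketch; see the repo below for the real details):

    // functions/stats.js -- a simplified FunctionScript-style endpoint.
    // The comment block is parsed into a type-checked HTTP interface.
    /**
    * Retrieve stats for a player
    * @param {string} gamertag The player's gamertag
    * @returns {object} stats
    */
    module.exports = async (gamertag = 'citizen kwho', context) => {
      // `context` carries request metadata (auth, headers, etc.)
      return { gamertag, wins: 0 };
    };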

[0] https://github.com/FunctionScript/FunctionScript


I think we should try to encode types in a common notation and create a global package manager for them. Then get people to actually use the types, and you'll get much closer to automating API connectivity. I think Tree Notation and a yet-to-be-developed package manager based on distributed principles would be a good start.
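As a strawman, picture a versioned, shareable type definition that such a package manager could distribute (everything here is hypothetical):

    // user@1.0.0 -- a shared type both sides of an integration agree on.
    module.exports = {
      name: 'user',
      version: '1.0.0',
      fields: {
        id:        { type: 'string', required: true },
        email:     { type: 'string', format: 'email' },
        createdAt: { type: 'string', format: 'date-time' }
      }
    };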


If you’re interested in this we should chat. Google and Facebook effectively pulled this off for the web (OpenGraph and Semantic Search), but it has yet to be done for APIs.

FunctionScript has a basic type system that’s a superset of JSON but there should be an overarching semantic system to it all. Feel free to e-mail me, keith@ (company domain) and introduce yourself if you’d like to chat about it. :)

https://github.com/FunctionScript/FunctionScript/


I think to truly get the interoperability you see in sci-fi you're gonna need to upend the large stack we currently build on and reconsider what we want to do with our software and computers. Adding APIs to everything is great for today's world, but all you're doing is still gluing things together. As we get less from physical chips, we'll have to get more efficient in computation, which means more standardized data formats from the start, fewer security concerns by using capabilities and a linked permissions system, remote computation rather than data retrieval, and things like that.

I think you have to take a several-pronged approach, because even then you can't solve a problem like this with tech alone; you need other things like education, collaboration, freedom, and protection from malign interests. That's really hard to do. I think maybe SpaceX did something on a similar level for space and, soon, internet connections, but they don't have free and open tech due to profit motives, so it's not advancing the state of space tech overall, just the tech at SpaceX.

Probably, you need a large nonprofit foundation with billions of dollars to just buy its way into the market to get something grand like that adopted.

I might email you; not sure I have the motivation for API building and chatting about APIs that you seem to have, but I do like generalizing problems into their simplest forms.


As a common notation, GraphQL is a good candidate and my favorite one, with a small set of conventions on top for pagination, filtering, and ordering.
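e.g. Relay-style connections already give you most of the pagination convention (sketch; the field names are illustrative):

    // A query using connection-style pagination plus filter/orderBy args.
    const query = `
      query Users($first: Int!, $after: String, $orderBy: UserOrder) {
        users(first: $first, after: $after, orderBy: $orderBy,
              filter: { active: true }) {
          edges { node { id email } cursor }
          pageInfo { hasNextPage endCursor }
        }
      }
    `;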


> I think we should try to encode types in a common notation...

This sounds similar to linked data formats like JSON-LD


> In the 1990s we saw the mass industrialization of web development with PHP and Apache HTTP Server.

This isn't really true. PHP 4 wasn't released until 2000. Sure, there were people using PHP 3 in the '90s, but it was nowhere near ubiquitous to the point of "mass industrialization". PHP 3 wasn't much of a language to begin with, and if anything, Java/J2EE was the stack used for "mass industrialization" (Java's role is still dominant in most domains today). Perl was also a much more popular scripting language at the time.


I had originally written this as a much longer piece — probably double the length — where I covered the history including CGI / Perl but edited for brevity.

I never had experience with J2EE personally which is why it’s missing. I was 12 in the year 2000 and had just started web development — because of that PHP retains a sense of primacy in my mind. Forgive the bias :).


would love to read the original piece if you had no objection to sharing


It's lost to the void of lack of version control on Medium. I'd consider writing a new piece in the future, will let you know when I get to it!


ASP and ColdFusion, too. We had clients switch to PHP 3 because it was so much faster and less buggy (!) but that was definitely seen as a bit of a gamble.


Completely agree. When I started in the late '90s and early 2000s, it was mainly CGI/Perl. PHP was seen as very amateurish, and Python didn't have the massive ecosystem it has now (Zope sucked). And that wasn't really industrialized; mainly lots of ad-hoc internal frameworks.


He was probably thinking of Perl.


Other than nitpicking the terminology/analogies used, I find myself in broad agreement with this post, although I would have liked a bit more analysis (even just backward-looking) in terms of the market dynamics (e.g. why PaaS lost its early momentum and we backslid to assembling custom platforms out of IaaS components and services, the cautionary tale that OpenStack presents to efforts like OpenAPI, etc.), but I understand why not revealing one's thoughts on strategy might be the smarter choice.

OTOH, I really would have expected Heroku to be mentioned in the 'SaaSification of Rails' section (since Heroku is literally 'Rails as a Service'), if only to state that it wasn't the intended meaning of the phrase (although Heroku's add-on marketplace can be seen as a stab in that direction).


PaaS lost because big corps are usually complex. Heroku is not enough for them. AWS got big because it provided the services/components/building blocks for big corps.

Why talk only about big corps? Because that's where the money is unfortunately.


Is this an old post? I thought this happened about 10 years ago.


Why the switch from stdlib.com to autocode.com? The first name was much cooler


stdlib means something to you and me, but to a broader audience it may suggest that it’s where you might go to learn more about chlamydia.


sharelatex.com (a LaTeX collaboration site) shared the same issue.


Oof, I didn't even think of that


... this too, though most people we met were polite enough to not argue this point too aggressively. :)


I mean, it's too bad: I like the name stdlib.com.


We couldn't find a commercial model for stdlib.com alone, which meant we couldn't support a team to build it. We learned that we needed to provide significant value upfront instead of asking developers to rely on an unproven standardization scheme.



