Next.js – A small framework for server-rendered universal JavaScript apps (zeit.co)
618 points by montogeek on Oct 25, 2016 | 162 comments



Interesting! I really like the architecture here. I think the next major opportunity for abstraction is all the server/client detection you still have to do. Do I want `request.headers['Cookie']` (server), or `document.cookie` (client)? Do I want to create a fresh Redux store (server), or hook into a global one (client)?

It's definitely not hard for community members to build these abstractions themselves (`cookies = (req) => req ? req.headers['Cookie'] : document.cookie`), but some of these are going to play into major use cases like authentication, so, as Next matures, it'll start to make sense to provide these especially common abstractions out of the box.
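
For example, a tiny sketch of the kind of helper I mean (purely illustrative, not anything Next provides today; note that Node lowercases incoming header names):

    // hypothetical helper, not part of Next.js
    const getCookies = (req) =>
      req ? (req.headers.cookie || '') : document.cookie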

That said, these are next steps; the first release is all about showcasing the fundamental architecture, and it's looking gooood :D


That is something that react-server[0] is doing reasonably well, though some of the abstractions I would like OOTB are missing currently

[0] http://react-server.io/


and Electrode.io gives you exactly what you need OOTB.


Maybe I'm misunderstanding but I don't think universal apps will share so much code with the client. It's just the view layer that is shared (React components in this case). You wouldn't make changes on the server side by piping them through Redux, you'd just use your regular code paths to read the JSON data from the client and send an update query to the database.


You may be interested in Opa (http://opalang.org). I'm unsure how well it abstracts away client/server detection, but from what I understand about it, you have one codebase for both your client and server. I've never used it however, so I can't vouch for it entirely, but it certainly looks interesting.


If someone wants to solve this problem and wants inspiration, look into how Unreal Engine's scripting/networking architecture works. Gamedev and webdev can be very similar at times.


Interesting! Might you be able to elaborate?


One thing that is never explained in these universal/isomorphic react frameworks/boilerplates is how to make a fetch() against your own server (e.g. a relative route). There's this https://zeit.co/blog/next#data-fetching-is-up-to-the-develop...

But what if my component looks like this:

  import React from 'react'
  export default class extends React.Component {
    static async getInitialProps () {
      // this relative URL is exactly the problem when rendering on the server
      const res = await fetch('/api/user/123')
      const data = await res.json()
      return { username: data.profile.username }
    }
    render () {
      return <p>{this.props.username}</p>
    }
  }


I like how react-server[0] deals with it; by having the `when` and `listen` callbacks expecting a promise, it becomes simple to `fetch()` any data you want server-side, including relative routes

[0] http://react-server.io/


is this your project or something? you seem to be name dropping it a few times in this thread.


Nope! It's just been an excellent tool that I'm a pretty big fan of, and relevant to a couple comments in this thread


Electrode.io, by WalmartLabs, is also new; I have not tried it yet.


Don't fetch within a component. Write a helper function: getUser(). It should decide which method to use to retrieve data.
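
Roughly like this (a sketch only; `db` stands in for whatever your server-side data layer is, and the environment check is just one way to decide):

    // sketch: decide between a direct data-layer call and an HTTP call
    const isServer = typeof window === 'undefined'

    async function getUser (id) {
      if (isServer) {
        // hypothetical data layer: hit the database/service directly, no HTTP hop
        return db.users.findById(id)
      }
      const res = await fetch(`/api/user/${id}`)
      return res.json()
    }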


How would it decide? What does the server-side version do since it won't have access to the req or res objects at this level of the code? If the /api/user/:id route is restricted based on the cookies that get sent, how/where do you validate that the user is allowed to get data from that route when rendering server-side?


That would be the responsibility of the route code, not the code doing the fetch().


Why would the route code need to know what data a deeply nested component needs? Making that the responsibility of the route code entirely defeats the purpose of an isomorphic/universal React application.


Yes, exactly! I've basically done only IT for the past 10 years and what little experience I have in web dev was with PHP in the good ol' days.

I want to get started with vue.js but there is NO mention of how to get data from a database. It is assumed the reader knows exactly what to do.

What you described is basically how I imagined it would work but I've just not been in this world for so long that I haven't been able to find the answer. (How to get data from a database with a system like vue.js.)


You're using the wrong tool for the job. Front-end JS frameworks do not mention how to get data from a database, because it's completely out of scope. Take a look at server frameworks/libraries like Rails, Phoenix, Express etc.


Will do thanks.


Super simple answer, so apologies if it sounds condescending or I've glossed over aspects entirely.

Libs like vue & react are essentially ui renderers, they create the html representation of app state, and handle all the DOM event binding and whatnot for you.

You will likely have some kind of client state management to take care of telling your ui to update, but essentially you will push json into that 'store' and that will then be used by your framework of choice to populate the page with content.

How that json gets there is largely up to you, and will be influenced by what you use to manage state; however, you will need to handle getting data from the db server-side. PHP or Rails are popular, with reams of great (and some not so great) resources out there, but realistically you could use whatever you wanted.

Node.js is nice and could be useful if you want to focus on 1 language initially.


It wasn't condescending at all. On the contrary.

Partly I was tripped up because I have used Meteor and they handle everything for you. Before that I was using PHP and making SQL queries to MySQL directly.

In both of those cases I was able to get directly at the database. What you're saying is that I need a middle-tier that acts as the layer above the database but below the client (obviously.)

You mentioned Node.js. I know what that is and I know it's not a database. So it needs to connect to a database and then pass off data to the client. This is the piece of the puzzle I'm missing in my mental visualization.

I've learned that Laravel works well with Vue so I'll start there.

Thanks!


You can just wrap `fetch` with something that knows where your server is. In `Este.js` there's something like:

    function ensureServerUrl(url) {
      // parse the url and, on the server, make it absolute
      // (in Este it's built from a SERVER_URL environment variable)
    }

    // wrap the global fetch instead of shadowing it,
    // so callers can keep using relative routes
    function serverFetch(url, options) {
      return fetch(ensureServerUrl(url), options);
    }


If the route requires a cookie to work how is that set when performing the fetch server-side?


Grab all the request cookies from the user and pass them along.
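
Something like this, assuming the original request object is still in scope where you do the server-side fetch (the helper name is made up):

    // sketch: forward the incoming request's cookies on the outgoing call
    function fetchAsUser (req, url, options) {
      const headers = Object.assign({}, (options || {}).headers, {
        cookie: req.headers.cookie || ''
      })
      return fetch(url, Object.assign({}, options, { headers: headers }))
    }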


That sounds like a recipe for a confused deputy (https://en.wikipedia.org/wiki/Confused_deputy_problem).


All I know about confused deputy is from the Wikipedia article, but it seems like it wouldn't apply in this case. From Wikipedia: "The confused deputy problem occurs when the designation of an object is passed from one program to another, and the associated permission changes unintentionally, without any explicit action by either party. It is insidious because neither party did anything explicit to change the authority."

In the case the GP describes, the proxying web server has no additional authority on the API server: if the API route requires a cookie from the user, it doesn't matter whether that's passed directly or proxied.

That being said, feel free to correct me if I'm missing something. Also, thank you very much for giving me the name for this problem, it will come in very handy.


Inside the React component, (wherever this fetch() is being called from), where exactly are the user's cookies accessible from at this point in the code?


You can use a full URL? Also, typically you want to query a different API server. We query microservices, specifically.


I'd guess you need to use an absolute route (using a variable passed to the component props?). But yeah in practice you'd probably not want to slow down the rendering by doing an extra HTTP request so you'd probably send the data to the component in a more synchronous way.


> But yeah in practice you'd probably not want to slow down the rendering by doing an extra HTTP request so you'd probably send the data to the component in a more synchronous way.

How would you do that without breaking the whole universal/isomorphic aspect of this?


Check my demo using Redux: I populate the "initialState" on the server side [1] before React comes into the picture, and then populate it again on the client side [2], then client-side Redux can do its own thing.

Some explanation at [3] although that writeup (and this demo code) needs updating.

[1] https://github.com/firasd/react-server-tutorial/blob/master/...

[2] https://github.com/firasd/react-server-tutorial/blob/master/...

[3] https://medium.com/@firasd/quick-start-tutorial-universal-re...
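
The general shape of what I describe above is roughly this (a sketch, not the demo's exact code; `reducer`, `initialState`, `App` and `window.__INITIAL_STATE__` are placeholder names):

    // server.js (sketch) -- reducer, initialState and App are whatever your app defines
    import { createStore } from 'redux'
    import { renderToString } from 'react-dom/server'
    import { Provider } from 'react-redux'

    const store = createStore(reducer, initialState)
    const html = renderToString(<Provider store={store}><App /></Provider>)
    const stateJson = JSON.stringify(store.getState())
    // respond with `html` plus a tag like:
    //   <script>window.__INITIAL_STATE__ = ${stateJson}</script>

    // client.js (sketch): rehydrate from the embedded state
    import { render } from 'react-dom'

    const clientStore = createStore(reducer, window.__INITIAL_STATE__)
    render(
      <Provider store={clientStore}><App /></Provider>,
      document.getElementById('root')
    )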


But that means that I need to know which express routes are going to need what data. The react components should be able to declare the data they need and the server should be able to fetch that data before attempting to render server-side. There's not much of a point if I have to know ahead of time what data any react component rendered by a route will need.


I'm not sure 'components declaring the data they need' is a necessary React idiom especially if you're using Flux. But I do understand that any large application will need 'partial' state and you can't fill out the entire app data for every request like I do in my smaller demo apps.


You populate redux with all the data your app needs (if you're using setState to store state fetched from APIs on components, stop). Then you simply render your app in one go.


Okay, let's say I put the actual fetch() inside a redux "action creator", how am I fetching that data server-side? How does some arbitrary route know that a nested component needs to fetch() and then populate your store before rendering? And remember, the fetch() in the "action creator" is still trying to hit a relative route.


you call the action creator and dispatch its results to your redux store before rendering. For "arbitrary" routes, you design your application in a way that just by the pathname, query parameters, etc, you know exactly what data to get for the whole component tree. Don't use a relative route.
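
For example, roughly this shape in an express handler (a sketch: `app`, `fetchUser`, `reducer`, `App` and `renderPage` are made up, and it assumes a thunk-style action creator so `dispatch` returns a promise):

    import { createStore, applyMiddleware } from 'redux'
    import thunk from 'redux-thunk'
    import { renderToString } from 'react-dom/server'
    import { Provider } from 'react-redux'

    // sketch: the route alone tells us what data the tree needs
    app.get('/user/:id', async (req, res) => {
      const store = createStore(reducer, applyMiddleware(thunk))
      await store.dispatch(fetchUser(req.params.id))
      const html = renderToString(<Provider store={store}><App /></Provider>)
      res.send(renderPage(html, store.getState()))
    })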


> For "arbitrary" routes, you design your application in a way that just by the pathname, query parameters, etc, you know exactly what data to get for the whole component tree.

That entirely defeats the purpose of an isomorphic/universal React application.


The split screen demo on the linked page does an outstanding job of showing how this works. Nice work.


Yeah, it's like the best 30-second demo I have ever seen.


I like how it's all live typed (or seems to be) with the occasional back-space and everything =)


I'm a little confused about the benefits of server side rendering. I thought the point of these js UI frameworks was to make the client do a bunch of the work? Can anyone give me some of the upsides? Thanks!


Two simple practical benefits: (1) Performance for first page load. (2) SEO.

Also because of network latency sometimes it's faster to just reload the whole page than to send a few async requests and wait for them to return while the user sits there wondering what's going on.


classic SPA: (client request) -> (retrieve empty divs + js) -> (js kicks off) -> (API results used to fill initial page content) -> (further page interaction calls APIs to mutate client side page)

SSR: (client request) -> (server retrieves empty divs + js, API results used to fill initial page content) -> (fully rendered first page view + js delivered to client) -> (further page interaction calls APIs to mutate client side page)


this sounds great, but isn't the point of a js web app that the data resides or is cached on the client!? Making it possible to use offline. And have real time push via websocket when online!? Server side rendering frameworks are as old as it gets.


The point of a js web app is that you can run it in a web browser. Hammers and nails are even older than server side rendering, yet somehow they still function perfectly.


There are advantages to not coupling the client and the server. Like for example e-mail and IRC chat. You can have a browser client, but also a native mobile app client, both running against the same server API. Decoupled software almost always means more productivity.

The hammer-and-nails equivalent is CGI/Perl/PHP/ASP, and they still do a fine job. You can use the screw machine to manually punch down screws, but then why not use the hammer and nails instead, as you do not see the advantage of the screw machine.


Yeah those would just be separate endpoints/requests. text/html vs application/json. Instead of serving a static html file that bootstraps your react app, you're serving a rendered component.

Not really any different from building say a PHP app that serves an API to a mobile app.


If the user has JavaScript turned off, the whole point of a JavaScript app is lost. You could just render all the HTML on the server like a PHP app, which would also make React pointless.


So from what I understand, the benefit of these single-page-app frameworks is not that it makes the client do all the work, but that the user experience is more fluid (aka no full page loading and blinking). Doing 'work' on the server side will always be faster resource-wise. This project tries to straddle both benefits by injecting server-rendered sub-components into a single page app. P cool if you ask me!

Keep reading past: "## Automatic server rendering and code splitting" & "## Anticipation is the key to performance"

Each sub-component is loaded dynamically which speeds up the initial load time but still allows for the 'flow' of single page apps.

- "For www.zeit.co we've implemented a technique on top of Next.js that brings us the best of both worlds: every single <Link /> tag pre-fetches the component's JSON representation on the background, via a ServiceWorker."


server side rendering only on the first load / if the client has disabled js. If the client has js enabled, after the first load, the app works as a js spa


One benefit is fewer round trips to actually provide meaningful content to the user.

As another comment noted, a typical SPA just delivers a bunch of script, link and empty div tags to be populated after JS loads and runs on the client. With SSR you can at least provide some meaningful content for the user to experience while the rest of the JS loads and improves the existing experience instead of providing the entire experience.


Server-side rendering generally has quite a few upsides:

* Simplicity. You don't need a fancy bunch of tools to cross-compile things, or any big frameworks, because a bunch of problems (browser compatibility especially) disappear. No more random people's browsers. You control every variable, and you can far more easily accurately test the end result.

* Shared data between requests. You can easily cache requests to backend services for multiple users, or even cache the entire rendered outputs of pages or parts of pages. Instead of every one of your users hitting a backend service from their browser and waiting for the result, you can hit it once every X, and then every other page load becomes just a cache read. In many simple cases you can have dynamically generated page content that's served up and readable after a single cache read, and update the cache totally async on the server too. Super fast (see the sketch just after this list).

* Data transfer, sometimes. If your end page is small but you use a lot of JS to render it, and your users load new pages more often than they reload content inside an already open page, then you might well end up transferring less data with a server-side approach (although this is very case specific).

* Page rendering times. If you use JS frameworks badly browsers will really struggle, especially on mobile. Even with React you can really shoot yourself in the foot: https://github.com/reddit/reddit-mobile/issues/247. Obviously you shouldn't use JS frameworks badly, but people do. Static HTML + CSS reliably renders very, very fast, everywhere.

* SEO. Google now has a degree of ability to run JavaScript (not unlimited, but fairly good), but nobody else seems to. Google is about 80% of US search traffic, so any content you render client-side only will get about 20% less search traffic than it would otherwise.

* Accessibility. Some screen reading tools etc aren't great at JavaScript (although this is improving). More generally all sorts of other tools (site scrapers, sentiment analysis bots, whatever) have to put in way more effort to read your content.

* Network resilience - if you serve up only half an HTML page, or a couple of your external resources get dropped, your end user probably gets something sensible-ish. If you drop a JS file you depend on for rendering, it's all over. Offline with service workers is a good argument for progressive enhancement on top of this though: JS can also improve later network resilience drastically.
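
Re the "shared data between requests" point: the crude version of caching rendered output looks something like this (a sketch; `renderPage` is a stand-in for whatever does your rendering, and a real setup would expire entries and cache backend responses too):

    // sketch: in-memory cache of fully rendered pages, keyed by URL
    const express = require('express')
    const app = express()
    const cache = new Map()

    app.get('/some-page', (req, res) => {
      if (cache.has(req.url)) return res.send(cache.get(req.url))
      const html = renderPage(req)   // hypothetical render function
      cache.set(req.url, html)       // real code would also expire/refresh entries
      res.send(html)
    })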

That's server-side rendering generally - the comparison gets more complicated and you lose some of this (simplicity especially) if you do isomorphic rendering (as here I think), where you render the page on the server and the client too.


The second point "Shared data between requests" would actually turn into a negative when it comes to dynamic web apps; if you had to re-render a component in realtime and send the HTML for that component over the wire for every permutation and change of data (which will be different for each user), caching would be futile. Also, this will basically move cache consumption from the client to the server; which is worse for the company which has to pay for hosting.

Performance-wise, full server-side rendering is a huge step back. For dynamic web apps, next.js is going to drive up hosting costs massively and open you up to DDoS attacks because of server-side cache issues. I suppose it's OK for static websites.


Also, if the JS fails to run on the client (which you have no control over, and which is pretty common), the page still loads.


> and which is pretty common

s/common/uncommon


Happens all the time if you're travelling and your mobile data connection is going in and out.


i'm not sure i follow your logic. how does a poor data connection affect js execution?

if you have to do additional template fetching via follow-up async requests, sure. but if you serve the js & js-parsed/executed template in the initial request, then the only difference would be a synchronous reflow/paint for js (white page flash) vs streamed/incremental reflow/repaint.


If you can barely get 1 request through, it might take a long time before the JS will load and run, as it has to load completely before running at all.


A poor data connection would affect the js download, not the execution per se.


noscript is not uncommon


maybe amongst HN (i swear by uMatrix & uBlock Origin), but the general public prefers their favorite websites not to be largely broken.

completely disabling scripting [rather that just disabling third-party script injection] is not a pleasant experience, even for the technical crowd. i already reluctantly have to whitelist CDNs which can track me across the internet.


We will eventually go full server rendering and just stream every frame to a thin client. We have a good enough network now with GbE consumer bandwidth and less than 1 ms latency up to 50 km. But we need more powerful, smaller and cheaper servers.


>We will eventually go full server rendering and just stream every frame to a thin client.

You just described every single non-SPA...


No, I mean more like a remote desktop experience.


How does that make economical sense?

I can pay to render everything for my user, essentially streaming their webpage to them. Or for the potential cost of a slightly slower page load I can offload all that rendering to their xGHz multi-core CPU that's likely sitting idle.

Not to mention the cacheability of the data.


Because bloating your client's experience has negative effects too: lower battery life, lower performance, higher memory usage, ...


It would make digital rights management (DRM) easier. Companies already spend a lot of money on DRM. There are already services today that lets you use a remote desktop for computer-aided design (CAD) work or gaming. But as you said, it's not economical yet.


Latency is certainly not 1 ms for a video frame in desktop resolution. It also means lower visual quality due to the fact that video compression would be needed to make it even slightly possible. Also, how many people actually have a GbE connection to the internet?


The 1ms is network overhead, which is nothing considering how long it takes for the image to render on the screen. The CPU and the GPU could be several miles apart.


> The 1ms is network overhead, which is nothing considering how long it takes for the image to render on the screen.

Yes, latency for a single bit of payload to ever hit the client. The rest of the data then needs to be transferred as well. Transferring that amount of data constantly would be terrible with regards to battery life.

> The CPU and the GPU could be several miles apart.

How exactly? Would you send draw commands over the internet?


More data sent via carriers means they will be able to negotiate better deals and peering. So when 10GbE hits the consumer market, they will let their users have 10GbE downstream bandwidth, like an HDMI cable. With some compression you could stream 60 fps + sound. Sure it would use a lot of power, but the usage would foremost not be mobile devices, more like TVs for gaming, and workstations for other programs, BC to PC full circle. You could probably do a lot of optimizations like using a vector format, to drastically decrease bandwidth. I do not know exactly how it can be implemented technically, please give me a break ... Sometimes things can be turned upside down and it's really hard to change your way of thinking, like HDDs getting faster than RAM or networks being almost as fast as internal buses. Not saying it is like that today, but it might be soon and it will be hard to break out of old design patterns.




I like how they use the tags <details> and <summary> in their FAQ!

https://github.com/zeit/next.js#faq


What do you like about it? It wasn't immediately obvious to me that those were even clickable, and the <details> text does not differ from the <summary> text, so it all kind of rolls together in a weird way as you click around.


I think they mean the unexpected usage of semantic markup in the wild. It's an ongoing battle to get it used more broadly, so it's nice to see it popping up unannounced and working so nicely. Your (valid) gripes are mostly out of the repo's control, as all styling appears to be GitHub.


Ah, that's fair. But GitHub does provide quite a few styling options[0].

[0] http://primercss.io/


Those aren't 'options for styling Github', that's just a framework/lib they provide for your own projects.


Oh yeah, limited options with Markdown. Hence the original comment (:


Very cool. I can't wait to try it after five years in Node land. The people behind zeit.co are great minds in the community.

It is funny how concepts come and go in circles. ASP.NET offered unified client and server development, though mostly in C#. It had the NuGet package manager and VS store or something, but it was never as amazing and packed as npm. Partial page postbacks and page state in encrypted strings... yikes. Now we have that in redux I suppose. It is all so familiar, yet so much better now.


I love how ambitious Zeit is being, but I've found that their apps are not quite as polished as their shiny-packaging makes out. I certainly wish them lots of luck though!


Does anything specific come to mind? We are constantly iterating and improving.


I'm very impressed that middle-click/ctrl-click works flawlessly. That's increasingly becoming a rare thing on the web these days.

Forward and backward navigation only works up to one level. Is that a limitation of the framework or an intentional design choice?


I tried the demo here, and middle click or ctrl+click is not working, quite the contrary, it is treated like a regular click :/

https://nextgram.now.sh/


in the event handler you have to manually check if ctrl is being held or if it was actually the left mouse button that was pressed. Most people forget this, which is a shame.
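
i.e. something along these lines in the link's click handler (a generic sketch, not Next.js internals):

    function onClickLink (e) {
      // let the browser handle middle clicks and ctrl/cmd/shift/alt-clicks
      if (e.defaultPrevented || e.button !== 0 ||
          e.metaKey || e.ctrlKey || e.shiftKey || e.altKey) {
        return
      }
      e.preventDefault()
      // ...do the client-side route transition instead
    }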


Surprised to see after 10 hours no one has mentioned intercooler.js which is a stepping-stone in the direction developers focused on the "server" part of "server-rendered" might head without going as far as Next.js.

http://intercoolerjs.org/

https://news.ycombinator.com/item?id=12657565


This looks very nice indeed. I've also just discovered glamor[1] via the Readme, which looks similarly nice.

https://github.com/threepointone/glamor


This is amazing! totally reminds me of PHP days.


Oh you're still rendering JavaScript client side, how mid-2016.


It's almost November 2016 man, you can't do that anymore. Just Redux your Flux and make sure you transpile and compile your TypeScript before you deploy that micro service with a Makefile


Awesome, took me like less than 10 minutes to create a basic server side newsreader app with React. Simplicity of PHP and power of React combined and brought to Node. I also like how the console.log statements are shown in the dev console.


Having to put your program in a string makes it hard to edit. Syntax highlighting, static analysis and tooling in general that helps you filter problems might not work there.

To me, tooling is very important since software is more consistent/reliable/productive at simple repetitive tasks like matching parenthesis, braces, quotes... and in my case I take it further like documenting types via documentation tags and verifying that function signatures and return types match. That alone helps me save a lot of time once the code has grown over 1 kSLOC.

I think it should be replaced with just a filename that gets required.


Can someone explain to me why this page uses 2 GB of memory on my machine?


This looks so incredibly well thought out and designed. I can't wait to start using it!


Very intriguing concepts.

Especially for quickly prototyping an idea.

Getting a React project in place (webpack config, code structure, all the boilerplate, redux,...) swallows quite some time. And I haven't found a bootstrapper yet that I liked.


Have you looked into electrode.io? It's the total package with testing, server side rendering, optional above-the-fold rendering, profiling, etc.

It took about half a day to get used to it, but I enjoyed not having to make the decisions over and over again, and it handles the basics as well as advanced use cases.


Thanks for the support! I'm glad you like it! :D


It's been really great, and reporting issues and small pull requests has been super easy so far. Not really what I expected from a company like Walmart, so I was pleasantly surprised.

The only fundamental disagreement I have with it is how the "client" folder is organized by default. I think it's a mistake to organize by the type of file (component, reducers, etc). Instead, the organization should be centered around the real use (pages, resources, etc.) Explained in more detail here, https://medium.com/@alexmngn/how-to-better-organize-your-rea...

I understand that's personal preference, and my preference is born out of seeing more than one react app become a tangled mess because isolation was hard to understand based upon file structure.


Have you checked out https://github.com/facebookincubator/create-react-app

Currently using it on a project and it works seamlessly.


I'm using it too and works nicely. It's the only ReactJS boilerplate that's officially sanctioned by Facebook right now.


[Placeholder for complaint about JS fatigue]


I am really not into new frameworks, quite the opposite, but this is cool and I can see the immediate value of this approach thanks to the gif movie on the homepage.

Thank you for making this.


How is this different from Meatier https://github.com/mattkrick/meatier?

Meatier also uses Babel, React and Node.js except that it has been around for almost a year and is already stable. They've already solved all the difficult issues like realtime pagination, authentication, GraphQL, etc...


I haven't really looked into meatier in great depth, so correct me if anything I say is a false assumption.

Meatier, as the name suggests, seems to be fundamentally "meatier" than next.js. It's coming from the same monolithic mindset as Meteor, and despite leading with the intro "... but without the monolithic structure", the general architectural thinking still appears to reflect that.

It makes a lot of decisions for you (express, rethinkdb, graphql, redux, etc. - its production dependency list is gargantuan). You've phrased this as "they've already solved all the difficult issues", but many may see it as "they've removed a lot of choice and flexibility". It's a matter of perspective: the meatier docs do admit it's "overkill" for certain applications.

Next.js, apart from the inclusion of glamor, seems to pretty much largely leave you to your own architectural and library preferences.


Every component in Meatier is replaceable. The defaults are for convenience. It sounds like Next.js also makes a lot of decisions for you.


This sounds great for static websites but I'm not sure if it's a good idea for a dynamic web app where data needs to update on the screen in realtime. Some questions which come to mind:

What if you had a 'chatbox' component which updated every time a user wrote a message; would Next.js have to resend the entire HTML for the 'chatbox' component (containing the full message log) every time a message is added to that chatbox? Am I right to assume that only the affected component will be rerendered? Or does the entire page have to be re-rendered (and the entire HTML of the page resent over the wire) for every change in data?

It sounds like a nightmare for caching: If data is updated dynamically and you constantly have to rerender components in realtime on the serverside; you can't really cache every permutation of every component's HTML for every data change and for every user... That's insane.

Regarding CPU, it sounds like it's going to eat up server-side performance and increase hosting costs massively! What, like 10 times? 100 times? Are there any benchmarks done on performance for a typical single page app built with Next.js?

Then there is the latency issue...

Finally; if we move back to full server rendering and get rid of the need for client-side code; why would we want to stick to JavaScript?

I haven't used it yet so please correct me if I'm misunderstanding something.

Next.js sounds great for building static websites... But so does PHP!


The way it would work with your chat example is that Next.js would initially render the chatbox component on the server, then hand it over to the user's browser running the React component. Then the user's browser would subscribe to data updates, and would create/update/destroy components as needed as chat messages come in.

The server only sends rendered components once, then the client handles the re-rendering afterwards.

It's still a javascript client-side SPA, except the initial rendering is done by the server. You don't see a loading screen, google can crawl your page, and users can see an initial page with JS turned off.
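
As a sketch (the `subscribeToMessages` call and the API URL are made up; Next.js doesn't prescribe how you get realtime data):

    // sketch of a chatbox page: rendered once on the server, live afterwards
    import React from 'react'

    export default class Chatbox extends React.Component {
      static async getInitialProps () {
        // runs on the server for the first request, on the client afterwards
        // (assumes a fetch polyfill like isomorphic-fetch on the server)
        const res = await fetch('https://example.com/api/messages')
        return { messages: await res.json() }
      }
      constructor (props) {
        super(props)
        this.state = { messages: props.messages }
      }
      componentDidMount () {
        // browser only: subscribe however you like (websocket, polling, ...)
        this.unsubscribe = subscribeToMessages(msg =>
          this.setState({ messages: this.state.messages.concat(msg) }))
      }
      componentWillUnmount () {
        this.unsubscribe()
      }
      render () {
        return <ul>{this.state.messages.map(m => <li key={m.id}>{m.text}</li>)}</ul>
      }
    }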

I hope that cleared things up a bit.


I love the addition of async `getInitialProps` (more for being async than getting props, would be as fine for me as getInitialState).

The logic for rendering a loading screen in a component can quickly get tedious and annoying; such a pattern helps with having a global loading screen while still allowing the component to be responsible for how it fetches its data.


Am I wrong in saying the appeal of this tool is it allows you to process your React app (with little or no changes to accommodate it) on the server?

I've been reading a lot about SSR lately. Correct me if I'm wrong, but wasn't one of the points of thick clients to offload processing to the client?


I don't believe so, though I guess some do. The purpose of (well-written) "thick clients" is to provide real-time interactivity and reduce bandwidth. Beyond this, if something can be done on the server without degrading the app experience, it probably should be.

Added bonus if the site works fine without client-side JS at all (which Next.js does)


Airbnb blog post from 2013 how they share code between client and server: http://nerds.airbnb.com/isomorphic-javascript-future-web-app...


While that article may be relevant, I think the OP is noteworthy because it is a framework, rather than a novel idea.


Sure. I just wanted to share this real-world example of the "isomorphic js" benefits.


Using the filesystem as the API has been done before, and there is a reason we stopped doing it.


Your comment would be more useful if you gave us that reason.

"Filesystem as the API" in this case looks more like "convention over configuration" (which has worked fine for other frameworks like Rails), it doesn't actually seem to be using the filesystem as the only storage backend.


Don't really agree - what they're doing is very different from what, for example, Ember does (which they call 'convention over configuration').

Next's routing is much more similar to Jekyll's which quite literally is based on your filesystem file-organisation.


Ah, I see what you're saying. I didn't consider deeper/complex routes with parameters.

This scheme seems to force route parameters into query parameters. Disadvantage: It's not RESTful. Advantage: Your route parameters are now named query parameters. You only handle one dict of parameters...

I'm not sure how I feel about this.


It's really not:

- You can't have beautiful URLs

- you can't regroup small related routes in one file

- you can't remap your urls without changing your project structure

- changing your project structure means changing your urls

- now you have part of your url definition in the file structure and part of it in your code logic

- of course it's harder to do anything using virtual routes: load balancing, aliases and redirections, generating urls on the fly, etc


When was that, and why did we stop?


PHP + Apache. And we stopped because it's hard to evolve and maintain.


Is there a reason the files are ".js" rather than ".jsx"?

As it stands, my text editor defaults to the JS syntax highlighting. I suppose one could make JSX their default JS syntax, but then JSX would incorrectly appear to be correct in non-React files.


Some developers (including FB) are using .js for React JSX files now, instead of the .jsx extension. I think the best argument for this is that, while JSX features aren't in the ES2016/7 spec, they may be eventually. And when you're transpiling your code to JS in the end anyway (potentially using JSX, Flow, and other tools that aren't actually valid ES2016/7 code), it's simpler to call it .js than think about renaming files when you add more steps to your babel process (file.flow.jsx.xyz...).

It hasn't taken off completely, and personally I like having the ".jsx" extension indicate that file will export a React component instead of plain javascript.

There are lots of arguments online about this: https://github.com/facebook/react/issues/3582#issuecomment-8... https://www.reddit.com/r/reactjs/comments/4kkrwg/ask_js_or_j...


I don't think many have embraced the .jsx extension.

And arguably it doesn't make a lot of sense, because when working with most modern frameworks/libs, JSX is not the only non-standard-js element in the file. Should we call it .es6, .es2015, .es2015+jsx ?

So yeah, sticking with .js is common.


ES2015 is standard JS. Maybe not widely deployed, but it's standard.

JSX is not.


If your editor is Visual Studio Code:

    Preferences > Workspace Settings (to target your React project) >

    {
      "files.associations": {
        "*.js": "javascriptreact"
      }
    }


I remember I started out using `.jsx`, but switched to `.js` solely because I could use `require()` without specifying the extension, WITHOUT modifying the module resolution algorithm.
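
(The alternative being a webpack tweak along these lines, which is what I wanted to avoid - webpack 1.x syntax from around that time:)

    // webpack.config.js (webpack 1.x style)
    module.exports = {
      resolve: {
        // the empty string was needed back then to keep exact matches working
        extensions: ['', '.js', '.jsx']
      }
    }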



This looks absolutely great! Looking forward to trying it out soon.


I like that this uses the filesystem for the website structure. It reminds me a bit of dokuwiki. Very usable + easy to understand compared to configuring routes.


Very nice, definitely excited to try this out!


I don't understand why you need any client side framework. Couldn't this all be accomplished server side, with the HTML pre-fetched on the client if needed for performance? There isn't anything dynamic about the website so it could run with zero javascript, and then things like the back button after scrolling through the blog would work.

Why try to outsmart the browser?


Sure it could. But this is also about providing a better user experience. Being able to automatically split the bundle into parts, the most needed of which is loaded at first and the rest then loaded afterwards in the background using a service worker, means that navigation will be far more snappy and provide better feedback to the user. Rather than waiting for the HTML to be downloaded when a link is clicked, most of the UI can be displayed while a smaller payload is downloaded. Obviously if your content is 100% static and your target group is people who only have 2G available, this might not be the best choice.

> then things like the back button after scrolling through the blog would work.

What do you mean?

> Why try to outsmart the browser?

Not sure how this is outsmarting the browser.


If you prefetch the HTML of the page, the browser will have it in its local cache, so clicking the link will be instantaneous (304 Not Modified).

Regarding the back button, you can see the mis-behavior here: 1) Go here: https://zeit.co/blog 2) Scroll all the way down to the bottom 3) Click a link 4) Click back

Expected: the page returns to exactly where you left off. Actual: you are scrolled to a random blog post.

My overarching question is why any type of SPA is needed for a site like zeit.co?


>But this is also about providing a better user experience.

Breaking the behavior of my back button is one of the worst user experiences I get these days. Usually it makes me so angry that I just leave the site right away.

> Rather than waiting for the HTML to be downloaded when a link is clicked, most of the UI can be displayed while a smaller payload is downloaded.

Browsers can do that on their own since forever, given a carefully crafted HTML page. Of course there are use cases where that is not enough, needing some load-on-scroll mechanism or something like that, but it gets abused all the time for things a browser could handle so much better (with proper HTML of course) - blogs, newspaper articles etc.


> Breaking the behavior of my back button is one of the worst user experiences I get these days. Usually it makes me so angry that I just leave the site right away.

Obviously that is not the wanted user experience. Disregarding SPAs as a solution because of an unintended behaviour is kind of throwing the baby out with the bath water.

> Browsers can do that on their own since forever, given a carefully crafted HTML page.

How are you rendering parts of something from a server before receiving anything from the server using only HTML? I'm talking about the following flow:

1. User clicks link

2. Instantly the layout etc. is rendered using JS, with fancy loading elements (maybe ala how FB does their news feed) for the missing items

3. The data is loaded and put into the layout

Whereas for the non-SPA flow the user would have to wait for data to be received before anything is rendered. Of course pre-fetching alleviates this a bit, but it still requires the whole page to re-render.


How do you leave the site if the back button doesn't work??


I press home, of course. At least sites can't break that button.


So if I understand correctly, this would 'transform' Node into a web framework à la Django? Please correct me if I misunderstood. If that's the case, how Node-Server-Render will compare to Django,Flask and other Python web frameworks?

Is performance better on Node? Feedback from the trenches would be appreciated.


It loads pretty fast but the source code is not search engine friendly.


huh?


view-source:https://zeit.co/blog/next

Google bots do not view this rubbish favourably.


https://search.google.com/structured-data/testing-tool/u/0/#...

The Structured Data Testing Tool doesn't complain.

We'd need someone to try it out and get the results of the other Google Search Console tools to know for sure, however.


Try this: view-source:https://www.google.com

Minifying HTML, JS, CSS is a common practice


Which part of it would be considered unfavorable? Perhaps it's harder for a human to read and understand, but all of the content necessary for Google to understand it is present in that HTML source.

You may have missed it if you hadn't scrolled to the right.


Can you back up this claim? Have you seen google.com's own page source?


glamor looks interesting. Has anyone used it?


Off-topic, but what process are people using to make these animated demos? The command-line and browser demo on this page is so clean and crisp. Is it just a screen cap with a ton of post-processing, or is there more to it?


We released DemoKit last week to do precisely this (be able to script demos and record them, that way you can easily re-record them if your product changes, or if there's a typo, etc): http://blog.runkit.com/2016/10/18/introducing-demokit.html


I am using licecap to create the animated gifs - http://www.cockos.com/licecap/


No idea how they've done this one, but https://asciinema.org/ does this for command line demos at least really nicely and easily.


Screen2Gif is the best tool I've used for gif screencasts, hands down. Wish there was a Linux version.

http://screentogif.codeplex.com


The command line in the demo is Hyper (made by the same people as Next):

https://hyper.is/

Stylizing is done on the terminal itself, and then anything (like Quicktime) could do a recording.


The best I've used so far is Kap: https://getkap.co/ (Open source; currently only available for macOS)


You could use a white desktop background and GifCam[1] / WebMCam[2].

[1] http://blog.bahraniapps.com/gifcam/

[2] https://github.com/thetarkus/WebMCam



I left https://zeit.co/blog/next open in Chrome over lunch and when I came back the tab was out of memory .. so you guys will probably want to look into that. But otherwise A++


Same here. 3 times in a row now, it has crashed Chrome if I left it open long enough for it to do so. Looks like a fun framework, but crashing browsers is kind of a problem.


I wonder if it's related to the animated gif? I've had it open for many hours as well, and it looks like this: http://i.imgur.com/E05vChB.png


Process of elimination and a dozen restarts.

It's not the gif OR the svg (individually) but somehow both together are causing it. I'm on Win7 with 2 gig ram and scrolling this (see pastebin) up from the bottom causes cpu to jump and memory near 1 gig with hdd looping endlessly, so i have to hard power off just to recover any function.

http://pastebin.com/AMdhpWcn



Same here (Chromium 50): it totally crashed (black screen, ctrl-alt-del fails; same with an offline copy). Specifically, it happens when I scroll up towards the animated gifs.


Same happened to me (Chrome 54)


Javascript development on the web has become such a mess... Web apps are totally bloated, tons of javascript loaded, server side rendering for search engines. Mixing CSS, HTML and javascript together to have a component framework that actually runs within javascript... not within the browser engine... It's library on top of library on top of library... Really..? W3C should come up with alternatives, that also work as mobile apps... The web has become an overengineered mess.



