Yes, it's super slow, but many of the criticisms are missing the point. This enables browsing over SSH when you have low bandwidth. The heavy internetting is done on the remote machine (a cloud VM, for example) using blazing-fast data-center internet. A personal example may explain why I love this.
For several years I lived in rural Alaska where the fastest internet one could buy was $120 a month (I think) and a blazing 512 Kbps. I was a developer (working remotely) whose shop had adopted Docker, and it sometimes literally took more than 24 hours to download a Docker image. By necessity I switched to having my whole development environment on a cloud VM. The cloud VM had a gigabit connection, so Docker downloads were blazing fast. All I needed to send across the wire was a tiny bit of text. Mosh was an absolute lifesaver, by the way. I once flew from Anchorage to Salt Lake and had the same mosh session pick up like nothing had happened, thanks to its roaming abilities.
Browsing heavy (i.e. modern) websites was often very difficult too. With high latency and a lot of JavaScript-heavy sites requiring 10 MB or more, it was a nightmare. I occasionally went up to Eagle, Alaska, where the internet was even worse. The nearest cell tower was a 4-hour drive away, and the only internet was at the "library" or a crappy satellite link (that far north, satellites get less useful). A tool like Browsh is a lifeline to people in situations like that.
In related news, when people talk about the merits of developing with just Vim vs. an IDE, I also recount the same story.
> This enables browsing over SSH when you have low bandwidth.
Their demo is down now, but I tried it in 2018, and unfortunately it was pretty bandwidth-heavy (about 100 kB/s while displaying a static webpage): it was constantly refreshing the entire terminal contents. Maybe they have fixed it since then, or it will be better with Mosh (as they suggest on their homepage).
I think that rendering server-side and transferring text as simple text blocks and images as heavily compressed HEIF/WebP in a special graphics client (or even a standard web browser) would be better. I was using something called Ziproxy over GPRS back in 2009 -- it was compressing HTML/CSS on the fly and recompressing images to JPEG with terrible quality.
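For the curious, here's a minimal sketch of that recompression idea in TypeScript/Node (assuming Node 18+ for the global fetch and the third-party "sharp" image library; this is the gist, not production code):

```ts
import http from "node:http";
import sharp from "sharp"; // third-party image library (assumed available)

http.createServer(async (req, res) => {
  try {
    // In forward-proxy mode the request line carries the absolute URL.
    const upstream = await fetch(req.url ?? "");
    const type = upstream.headers.get("content-type") ?? "";
    const body = Buffer.from(await upstream.arrayBuffer());
    if (type.startsWith("image/")) {
      // Recompress every image to low-quality WebP before it crosses the slow link.
      const small = await sharp(body).webp({ quality: 25 }).toBuffer();
      res.writeHead(200, { "content-type": "image/webp" });
      res.end(small);
    } else {
      res.writeHead(upstream.status, { "content-type": type });
      res.end(body);
    }
  } catch {
    res.writeHead(502);
    res.end();
  }
}).listen(8080);
```

Point the browser's HTTP proxy at localhost:8080 and every image arrives pre-shrunk, which is essentially what Ziproxy was doing with JPEG.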
> I think that rendering server-side and transferring text as simple text blocks and images as heavily compressed HEIF/WebP in a special graphics client...
That is almost achieved when you run a real web browser inside VNC, and connect to it using a VNC client that does the appropriate compression. (Tunnelled through ssh, obviously.)
> In related news, when people talk about the merits of developing with just Vim vs. an IDE, I also recount the same story.
I've been in similar situations, and while I've honestly tried to give Vim a go, there just seemed to be too much setup and configuration, so I went back to an IDE. In my case sshfs (https://en.wikipedia.org/wiki/SSHFS) was the lifesaver: code locally, execute remotely (mount the remote tree with something like `sshfs user@host:/project ~/project`). 99% of the time, I'd finish making a code change in the IDE, and by the time I'd switched to the terminal on the remote server, it had already synced.
> I've honestly tried to give Vim a go, there just seemed to be too much setup and configuration, so I went back to an IDE.
I'd like to point out something to avoid scaring away potential Vim newcomers here.
While it's true that Vim can get as complex as you want via plugins and configuration, I think one of the beauties of developing with Vim is not doing this, not spending a lot of effort configuring it to behave like a complex IDE, but rather changing your working paradigm a bit to rely on just the basics and, at most, a few tweaks in a custom .vimrc which you can quickly scp over (or wget) to the host you are working on.
Here I will totally agree that at the beginning it may take some time to get used to. But once you get there, the level of freedom it gives you is totally worth it. And not only because any shell then feels like home, but also because not having all the IDE helpers available ends up forcing you to have some additional awareness of what you are doing in the code. You need to remember what the function signatures are, where the different pieces of code are, take care of the coding style while typing, etc. And while it's truly a much more spartan experience, in the end your efficiency stops depending on your working environment and becomes just a part of you.
I've been using Vim for 6ish years now, and one of the biggest issues I have is all the plugins randomly breaking. That's especially true for IDE-like features, such as auto-indent, auto-bracket handling, etc.
Another issue I run into, also due to an overly complex config, is that my hotkeys don't work across OSes. I mostly work on a Mac and use, for instance, ctrl+shift and the arrow keys to move between tabs in Vim. Those keys do different things on my Linux machine, so I can't use them.
This could be implemented at the site level (unfortunately, with bloated frameworks, few care). One possible implementation could have the browser measure how long assets take to load and switch to requesting lightweight ones if loading takes too long.
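A rough sketch of what that heuristic could look like client-side, using the standard Resource Timing API (the data-src-lite attribute is a made-up convention for naming a lightweight variant, not an existing standard):

```ts
// Measure how long the assets loaded so far actually took.
const entries = performance.getEntriesByType("resource") as PerformanceResourceTiming[];
const slow = entries.some((e) => e.duration > 3000); // any asset slower than 3 s?

if (slow) {
  // Swap in the lightweight variant for every image that offers one.
  document.querySelectorAll<HTMLImageElement>("img[data-src-lite]").forEach((img) => {
    img.src = img.dataset.srcLite!; // data-src-lite maps to dataset.srcLite
  });
}
```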
Oftentimes, the real killer is not so much bandwidth as latency. If a page requires many requests to complete sequentially (i.e. because one asset needs to be loaded in order to know how to request the next one), that all adds up real quick and produces a poor user experience. For example, ten dependent round trips at 600 ms each is six seconds of pure waiting before bandwidth even enters the picture.
'...roaming abilities': can you explain?
As a European, 'roaming' is travelling to a different country and using a foreign network while paying your home provider. Do the US states consider Colorado different from California? I always understood 'not different', so I may have misunderstood a tech element.
Regular SSH won't work if your IP address changes, since it uses TCP, where sessions are tied to (IP address, port) tuples. However, Mosh uses UDP and its own session-management scheme, so you can "roam" between IP addresses and your session will stay alive: you can continue typing and will receive screen updates as if nothing had happened.
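A toy illustration of the principle (emphatically not Mosh's actual protocol): key the session on a token carried inside each datagram instead of on the sender's address, and always reply to wherever the client spoke from last:

```ts
import dgram from "node:dgram";

// Sessions live independently of any (IP, port) tuple.
const sessions = new Map<string, dgram.RemoteInfo>();
const server = dgram.createSocket("udp4");

server.on("message", (msg, rinfo) => {
  const token = msg.subarray(0, 8).toString("hex"); // first 8 bytes identify the session
  sessions.set(token, rinfo); // remember wherever the client is *now*
  // Reply to the session's latest known address -- roaming falls out for free.
  const peer = sessions.get(token)!;
  server.send(`echo: ${msg.subarray(8)}`, peer.port, peer.address);
});

server.bind(60001);
```

A client that switches from Wi-Fi to tethering keeps sending the same token, so the server just updates its notion of where the client lives.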
"roaming" as a technical term just refers to any case where you move between networks and something maintains a connection. mobile phone network roaming is just one case of that, but e.g. moving to a different WLAN access point is the same. In the case of mosh, you can change your IP but it will still continue the existing session with the server (unlike SSH, which will fail the connection)
Normal SSH connections drop when you switch WiFi networks.
Mosh helps you keep your connection alive when roaming, e.g. you start a connection on a café's Wi-Fi while having breakfast, connect to your VM while tethering to your phone, then go home and switch to your home Wi-Fi.
While I did enjoy the fun, my primary reason for moving up there was to escape the heat. I have a medical condition that flares up in the heat. There are good temperate places in the US Pacific Northwest, but the cost of living there was so high that there was no way I could have done it on Salt Lake City-based wages. Even just the higher CoL in Alaska was pretty painful.
> - also w3m.
Hadn't seen this, will check it out :-)
> - why don't you git clone everything to your laptop when you have a chance? then keeping it up to date won't be expensive.
I definitely did, but we had a ton of services, with new ones springing up all the time, so even though I would grab what I could when I could, there was still a frequent need for more. Also, git wasn't the worst part; the worst part was downloading Docker images (base images and prod images). Since they change regularly, sometimes daily, there was no real way around it.
> There are good temperate places in the US Pacific Northwest, but the cost of living there was so high that there was no way I could have done it on Salt Lake City-based wages. Even just the higher CoL in Alaska was pretty painful.
Have you looked into Oregon outside of Portland? Say about 30 to 40 minutes out? It's not Trump country, and it's still cool and not that expensive. I would say cheaper than Alaska. Same for Seattle/WA.
Also, you could consider ZFS or Btrfs as a filesystem, since they can offer block-level sync. Not always possible, but this can change things for you big time.
Have you tried using links? Most sites are broken in links and the other classic text browsers. This renders modern Firefox (before textifying), so you don't have that problem.
The creator of Browsh here. I have such mixed feelings about seeing Browsh here again. I poured soooo much geeky passion into it, but I've just not had the opportunity to take it to the next step since its initial rise to stardom.
There have been a couple of contributors recently, and I haven't even been able to get the CI to run because I've lost The Knowledge. I even had to take the demo services (`ssh brow.sh` and https://html.brow.sh) down just because there was a bug and I couldn't remember how everything worked.
The plan for the next step is to write a dedicated text-based UDP protocol, maybe with some video-compression tricks, so there will then be no dependency on Mosh (whose development also seems to have stalled, BTW). That way the client will be extremely lightweight and work in either a normal browser or a tiny CLI application.
As others have faithfully recounted, the entire raison d'être of Browsh is to fight against the increasing bloat (and bandwidth costs) of the web. I travel a lot outside the Western world and am often surprised by just how many MBs I need to download to consume the wealth of text on the internet.
I hope the next time Browsh arrives on the frontpage is because of a new version.
Heya, we (Mosh) are fans of Browsh and are happy to talk if we can be helpful! It's definitely doable to expose (and directly link with) the Mosh networking and terminal libraries if you're unhappy depending on the executables. You're not wrong that our release cadence has also fallen off a cliff, probably for similar reasons to you (this has been a tough year for our active maintainer).
Honestly I think of Mosh as mostly "done" at this point (with the exception of 24-bit color support which everybody wants and which we have in Git), and I'm wary of stepping back into the ring myself as the original maintainer who hasn't looked at the code in a long time, and screwing up a release that ends up botching our so-far-good (knock on wood) security record. Which will just take up way MORE time...
I think my terminal's font is breaking the v2 graphs, because the unfilled braille dots aren't blank but rather unfilled circles. The v3 version doesn't work, as my terminal doesn't support images.
On my Windows box it actually does work surprisingly well with the official curl binary, and the graph looks great, but the weather emoji mess up the alignment towards the end.
I think these kinds of inconsistencies are the reason something like the way brow.sh does it makes more sense: if it's not plain text, the only reliable way to represent it is via colored fixed-width block characters, as terminal UIs have relied on those for ages, so they are more widely supported.
A font that renders the empty braille pattern with unfilled dots seems very inaccessible to the visually impaired. Empty braille pattern sections should arguably always be blank.
No, unless it's getting info from somewhere I'm not thinking of, it has no way of knowing what terminal you're using. In principle, I think the "accept" header is the right place to put that info?
But there's no good way for a server to tell if it's being requested by a text-based client. wttr.in hard-codes a list of text clients' User-Agent strings [0], which is not so pretty IMO. And if you look for "Accept: text/plain" in the request header, curl, the most popular client, won't set it.
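Something in this spirit is about the best a server can do today (a hedged sketch, not wttr.in's actual code; the User-Agent list is assumed and incomplete):

```ts
import http from "node:http";

const TEXT_CLIENTS = /curl|wget|httpie|lynx|w3m|links/i; // assumed, incomplete list

http.createServer((req, res) => {
  const accept = req.headers["accept"] ?? "";
  const ua = req.headers["user-agent"] ?? "";
  // Prefer an explicit Accept header, fall back to User-Agent sniffing.
  const wantsText = accept.includes("text/plain") || TEXT_CLIENTS.test(ua);
  res.writeHead(200, { "content-type": wantsText ? "text/plain" : "text/html" });
  res.end(wantsText ? "plain-text version\n" : "<html><body>full page</body></html>");
}).listen(8080);
```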
It’s good to see terminal-based browsers are still being developed.
I used to use w3m and lynx back in the day, and found them to be very useful at times (such as when only a GPRS signal was available, which makes browsing in a “full fat” browser impossible). Paired with screen (though I guess mosh would be a good option these days) to enable a resumable session in case the network dropped, of course.
Note that this isn't a "true" terminal-based browser. It's built on a WebExtension running in Firefox that renders out the DOM as text to a Go-based CLI client. It's not a browser engine running in your terminal, it's more of a VNC over TTY.
"On 17 March 2017, OpenBSD removed ELinks from its ports tree, citing concerns with security issues and lack of responsiveness from the developers.[5]"
The last stable release is from 2009, the last development release from 2012. ELinks is dead. Note that it uses an old version of SpiderMonkey (JS). ELinks should not be used in a production environment, period.
Not just browsing (the web): the idea of an HTML-based application UI can be extended to text mode instead of getting stuck with the choice between a GUI and a command-line (or a custom curses-based) interface. (Imagine VSCode running in a terminal!)
Looks like the author had to take some of the services temporarily offline. If you use Brow.sh, consider donating!
" Browsh is currently maintained and funded by one person. If you'd like to see Browsh continue to help those with slow and/or expensive Internet, please consider donating. "
This is really interesting; I'll have to play around with it. I've been browsing news sites with "links" a lot more lately, as it usually just gets me the content I want, and does so very fast. No ads, no popups, no JS -- just the content. I could achieve the same thing with NoScript and a few other extensions, but those generally still aren't as fast and simple as links.
I could have really used this a while back while doing a web-scraping job. While using Puppeteer (with Chromium), I found that memory usage was quite a huge problem, so the best I could do was write scripts explicitly preventing the loading of images and other multimedia assets. I found it still wasn't always enough.
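For anyone in the same spot, this is the kind of script I mean, using Puppeteer's request-interception API (the URL is just a placeholder):

```ts
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.setRequestInterception(true);
  page.on("request", (req) => {
    const type = req.resourceType();
    // Abort heavy asset types so they are never downloaded at all.
    if (type === "image" || type === "media" || type === "font") {
      req.abort();
    } else {
      req.continue();
    }
  });
  await page.goto("https://example.com"); // placeholder URL
  console.log(await page.title());
  await browser.close();
})();
```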
I've often thought that ad blockers get it all wrong, we shouldn't be identifying the ads to block them, we should be identifying the desired content and serving it up to the exclusion of everything else.
The run-a-server-elsewhere-and-connect model of Brow.sh seems ideal for building something that did that:
- render the page server side
- check for crowd-sourced filters
- apply them
- pass page to user
I'd happily make cryptocurrency micropayments to whoever contributed the filter that de-bloated that page for me.
It would be slow at first, but with a big enough cache and enough users...
Glad to know about this project. I may try and adapt it to man-in-the-middle my own web browsing one day.
This is almost exactly what I've had in the back of my mind for years now.
I've been trying to prototype a browser with the aim of extracting semantic content from webpages and presenting it in a uniform manner (with the user able to choose/modify the templates). I try to explain it to less technical people as a "permanent Safari Reader mode for all types of web content: articles, ecommerce pages, recipes, etc."
The current model is similar to brow.sh (or Opera's VPN, etc.): the browser makes the request via the server, which extracts the content and serves back a minimal representation. The client then has its own templates saved, meaning all layout happens client-side, so only the content has to go over the wire.
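To make that concrete, the over-the-wire payload in my prototype is roughly shaped like this (field names here are illustrative, not the exact schema):

```ts
// Only semantic content crosses the network; layout lives client-side.
interface ExtractedPage {
  kind: "article" | "listing" | "recipe"; // selects one of the client's templates
  title: string;
  blocks: { type: "text" | "image"; value: string }[]; // image value = lazy-load URL
}

// The client renders with a locally saved template; none of the
// page's own CSS/JS is ever transferred.
function render(page: ExtractedPage): string {
  return [
    `<h1>${page.title}</h1>`,
    ...page.blocks.map((b) =>
      b.type === "text" ? `<p>${b.value}</p>` : `<img loading="lazy" src="${b.value}">`
    ),
  ].join("\n");
}
```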
The user then has control over how they view the content. All images are forced to be lazy-loaded (currently with LQIP previews), and I'm interested in how to take it further, such as extracting the structure of a website's navigation and always presenting it in a uniform fashion.
My prototype kinda works for browsing hn/reddit/lobsters and articles and is pretty snappy but content extraction is the hard part that's kept me from starting for a long time.
I've always held off on sharing this because
1) I didn't have an HN account (you can't get sucked into an argument about JavaScript if you can't comment)
2) I was never sure how to present the idea and how exactly to frame it compared to reader mode, adblockers, AMP, Brave, surfing without JS and a ton of different alternatives
If anyone's interested, I'd love to discuss it and maybe get my prototype live somewhere if there's interest (email address in my about).
Back in the 90s I developed a new HR-related web application for a customer in China. Then we found out that they still had a significant number of users on serial terminals, basically a VT-100 with support for Chinese characters.
So I made the app work with the Lynx text-mode browser. Magic, getting the Internet onto a serial terminal.
It seems to be incredibly slow, which kind of defeats the purpose of having a text-based browser to begin with. I'm not sure, but it looks like it's taking the output of a headless Firefox and turning it into ASCII. If that's the case, I'd rather stick with something like Lynx, etc.
The point is for when you have a connection with limited bandwidth. In such cases you can host Browsh in the cloud and connect to it with ssh/mosh. You get a much better experience than text-based browsers at limited bandwidth.
It's great for such cases - I use it when hiking / working away from good internet.
Thank you, I just learned about Mosh, and it's absolutely awesome! I have to switch VPNs regularly in my job, and it's a real pain in the ass working with multiple ongoing ssh sessions. No more!
I run into this a lot travelling in places with bad internet, and when hiking. I just want to check a route or some place, and travel bloggers and their bloated, ad-riddled, popup-covered sites take forever to load.
I mentioned in another thread an alternative browser I've been prototyping to solve this problem. I'd love if you could tell me more about what you need from something like this. All my prototyping so far has been on desktop but the idea in my head has always been for a mobile app since that's often all I have when travelling, but there's no reason for it not to work on desktop too.
As this comment in the thread says, it looks like it uses a lot of constant bandwidth, as it's constantly querying for the latest visuals from the headless FF instance [0].
ELinks is a hauntingly good piece of software. I'm a bit sad that it's not actively developed, but I find solace in the thought that it is already a nearly perfect crystal that cannot be substantially improved upon.
Last time I tried this, it was great on my laptop, but because it uses Firefox in the background for rendering, it brought my t2.small instance to its knees.
I wonder if they've done anything clever to make it more efficient since then?
Hmm, just tried it... It's OK, but I'm not too happy with it overall.
In the Mac terminal the blocks don't align properly, and in xterm it starts rolling weirdly, like a CRT TV with the scan lines missing.
Also, the controls don't use the normal menus and keys we know from lynx/links/elinks. And the donation nag is annoying. First convince me it's great, then ask for donations :)
When I first saw this a while back, I got the idea to build a terminal-based email client that supported rendering HTML messages using some of the technology behind this.
This is great for remote work where ssh -X lags with high latency. Although I sometimes get stuck (a page loading forever; I think it has something to do with the connection to Firefox).
All I really need is showing images in the console, which w3m-img can do along with w3m, but you need an xterm instead of the modern consoles (gnome-terminal, konsole, lxterminal: none of them can show images).
The Docker image of brow.sh needs Firefox, and the whole pull is a few hundred MB. Can someone list the advantages over lynx/w3m/links2, etc.?
> Its main purpose is to be run on a remote server and accessed via SSH/Mosh or the in-browser HTML service in order to significantly reduce bandwidth and thus both increase browsing speeds and decrease bandwidth costs.
I'm glad stuff like this resurfaces, because I often forget about it, or somehow missed it before.
That said I do suspect a lot of the repeats are because there are young people on HN for whom this may be the first time since reaching an age of reason that this has hit the front page.
Ya, that's fine -- stuff resurfaces every once in a while -- but mark it as such (2018) so we know it's not something new and that there are years of discussion already.
It says it's "reducing bandwidth", but then it's downloading all the graphics assets so it can render them in very crappy Unicode? Why? Wouldn't it make more sense not to download the graphics at all?
As others have said, the point is that you run it on a server that has ample bandwidth, and your client connects via the terminal. So you only end up transmitting the "crappy Unicode" over the narrow pipe to your thin client.