I access the web through the internet, the same as I have done since 1993. I use software that can make TCP connections, send HTTP requests and receive HTTP responses.
Original netcat, yes, all the time. A variety of other TCP clients. A customised Links browser. Several proxies. A customised tnftp. curl when I post examples to HN.
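For example, a hand-written HTTP/1.1 request piped straight into nc; example.com and port 80 are just stand-ins here, and plain nc of course cannot speak TLS:

    printf 'GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n' \
      | nc example.com 80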
Consider that each web user might visit different websites and seek different information and data. Each user might have different preferences. What is good for one user might not be good for another. Me, I prefer textmode. I am not interested in X11-style terminal emulators. I have not used a mouse outside of the office for over 30 years.
I write simple UNIX programs that extract the information/data I want from HTML or JSON on stdin. I usually output this info/data as plain text, CSV or SQL. For recreational web use I am not interested in "interactive" websites. If I get a response body with the info/data I want, then the website is "working" as desired.
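Roughly the shape of such a pipeline, with jq standing in for one of those small programs; the URL and the .items/.title/.url field names are made up for illustration:

    # fetch a JSON listing, keep only the fields of interest, emit CSV
    curl -s 'https://example.com/listing.json' \
      | jq -r '.items[] | [.title, .url] | @csv'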
As it happens, the most challenging step for me is deciding what info/data I want to keep. Websites routinely contain so much garbage and so many marginally useful bits of info/data that processing it all is an exercise in diminishing returns. (And processing it all is precisely what people trying to write "modern" browsers will inevitably attempt.)
Thanks, that's useful. It might make an interesting write-up, or a Tell HN / Show HN type submission.
I do something similar in some cases. I've written scripts to parse down NOAA weather reports into something that renders well on a terminal, and have scripts and shell functions similar to surfraw (itself badly dated by now) for doing quick lookups from a shell.
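Something in the spirit of those lookup functions, as a sketch only; lynx and the DuckDuckGo HTML endpoint are stand-ins for whatever browser and search endpoint one prefers:

    # quick search from the shell: percent-encode the query, open the
    # results page in a text-mode browser
    ddg() {
        q=$(printf '%s' "$*" | jq -sRr @uri)
        lynx "https://html.duckduckgo.com/html/?q=${q}"
    }

Used as, e.g., ddg surfraw successor.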
I share your view that most website payloads are rubbish (in one case I stripped over 90% of the Washington Post's payload to deliver just the article text), and that selective acceptance and presentation make a huge difference.
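A rough illustration of that kind of stripping, not my actual filter, and it assumes the page's <article> tags land on their own lines, which many pages will not oblige:

    # fetch a page, keep the <article> element, drop tags and blank lines
    strip_article() {
        curl -s "$1" \
          | sed -n '/<article/,/<\/article>/p' \
          | sed 's/<[^>]*>//g' \
          | grep -v '^[[:space:]]*$'
    }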
As much as I use shells and terminals, I still find GUIs useful, including Web browsers, though the situation there is increasingly frustrating.