VRML (wikipedia.org)
150 points by vmoore on April 16, 2022 | 144 comments



Surprisingly little previous discussion:

VRML was a standard in 1995 but now everyone's excited about The Metaverse - https://news.ycombinator.com/item?id=29237519 - Nov 2021 (4 comments)

Ask HN: Is VRML (Virtual Reality Modeling Language) Relevant to Metaverse? - https://news.ycombinator.com/item?id=29111474 - Nov 2021 (2 comments)

VRML – Virtual Reality Markup Language - https://news.ycombinator.com/item?id=29046541 - Oct 2021 (3 comments)

VRML (1997) - https://news.ycombinator.com/item?id=28374558 - Aug 2021 (1 comment)

A free Java VRML Viewer - https://news.ycombinator.com/item?id=24805346 - Oct 2020 (1 comment)


One of those is my submission. I posted it in response to all the buzz over figuring out VR markup.

It seems to me no one wants to actually use existing stuff. People are just having fun reinventing it.


During my limited time at FRL (i.e. Meta), I tried to be a proponent of X3D (the successor to VRML), and the pushback I got was tremendous.

You're right that people want to reinvent it.


Well, thanks for fighting for standards adoption, even if it didn't work out. It's always good to have interoperability represented on the inside.


What did that pushback look like? Too old? Failed technology?

I saw a lot of excitement around VRML at the university in the late 90s.

The problem was that average computers just didn't have the capacity for 3D rendering back then. At the university we had high-end SGI workstations, and of course these were used for impressive demonstrations of what VRML could do. But I guess it didn't catch on because nobody outside had computers capable enough.

The current generation is not aware of what we did back then and may not realize that lack of performance was the primary problem we were facing.


There's a lot I can't really say, but I think much of it stems from bad incentives that don't align with any long-term goals.

My opinion is that the core idea of the metaverse requires a standard like HTML that people can just hack on with text files. X3D is, in my opinion, the closest to achieving that, and I argued we should make X3D a first-class citizen to overcome the performance challenges of the browser/device.

Even today, pick whatever web framework, and it just boils down to HTML. There really should be a common 3D substrate that is that easy.
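For reference, a minimal X3D scene in its XML encoding looks something like this (a hand-written sketch, not anything from actual FRL work):

  <X3D profile="Interchange" version="3.3">
    <Scene>
      <Shape>
        <Appearance>
          <Material diffuseColor="0.2 0.6 0.9"/>
        </Appearance>
        <Box size="2 2 2"/>
      </Shape>
    </Scene>
  </X3D>

You can hand-edit it, diff it, and template it server-side just like HTML, which is the point.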


Let’s stipulate for the purpose of this discussion that VRML is satisfactory, though IIRC it isn’t for various reasons. The real problem in the 90’s and today is that we haven’t figured out user interface hardware that people can/want to actually use for extended periods of time.

Even today, literally every player in this space thinks it’s going to be a head mounted display/goggles/glasses. Until someone produces 3D/holo displays that don’t require gadgets affixed to your head and body in one way or another, I think AR/VR is going to be a failure for general consumer use no matter how slick the underlying rendering technology.

Edited to add: something like Tilt5 where it’s very inexpensive and you aren’t expected to wear the device forever might be an exception.


I agree that VR is still far from taking off.

I expect the solution is going to be something like CAVE, which we already had working in the 90s. Today it is feasible to set up projectors in a room so that a 3D environment is projected onto multiple walls, giving you the feeling of being inside it. 3D glasses like those used in movie theaters today are light enough to help with the 3D effect too.

But VRML wasn't just for VR. We have been playing 3D games without VR for quite some time now, and there is no reason we couldn't use VRML or X3D today to create interesting experiences.

In the end it comes down to how easy it is to create and distribute the content.

What we need is another push for browser support, where we can embed a 3D object easily, and then maybe it will catch on.


Gamers often bought 3D cards for PCs by the late nineties.


I'm guessing that since this is a text-based markup language, a lot of people wanted a format that had a lot less friction in a compute-constrained mobile environment, preferably something that can be piped straight to a GPU.

HTML is fine for the browser because it is mostly delivering text and JS, which are both processed on the CPU. Text files are inherently GPU-hostile, and that makes them a weird choice for a graphical markup language.


> Text files are inherently GPU-hostile, and that makes them a weird choice for a graphical markup language.

The point of a standard is to be device agnostic. Tying it to present-day technical implementations would limit its adaptability to future tools, as well as creative misuse of the technology.


Then why tie the format to present-day technical tools like text editors? Why not invest in a format that makes sense and the tools to understand and work with it?

Also, GPUs and their architecture aren't a fad; they have been the same for the last 30 years. If you want your format to be truly device agnostic, text is a bad format because it precludes many types of devices (GPUs and hardware accelerators) from being able to use it.


Text has been the same for the past 5,000 years or so, predating text editors by an enormous margin.


As far as I know, HTML and JS are also text formats that must be interpreted. So I do not see any difference between parsing this text format and parsing HTML or JS.


How are text files GPU-hostile?


Text files are very hard to parse efficiently in parallel because there are so many variable-length and conditionally present fields. That necessarily leads to branches in your code, which GPUs are not designed for.


It depends on the format of the text, the parser for that format, the language of the parser, and many other factors. Such a broad affirmative statement that text files are inefficient is tenuous at best.


We need some technology that turns text files into binary. To the startupmobile!


Hmm sounds like a compiler


Then pad fields and require them, even if the value is Null/None/Nil, and the field names too.

  USERNAME:dotancohen\0\0\0\0\0\0
  FOOBAR\0\0:\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0
This might look ugly if you open it in Notepad, but so do XML and Org mode files. However, an Emacs/Vim/VSCode plugin could make this seamless for the user.
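A quick sketch of what reading that padded layout could look like (TypeScript, using the 8-byte name / 16-byte value widths implied by the example above; purely illustrative):

  // Fixed-width records: 8-byte NUL-padded name, ":", 16-byte NUL-padded value.
  const NAME_W = 8;
  const VAL_W = 16;

  function parseLine(line: string): [string, string] | null {
    if (line.length < NAME_W + 1 + VAL_W) return null;
    // Every field starts at a known offset, so there is no scanning for
    // delimiters - which is the whole appeal of padding the fields.
    const name = line.slice(0, NAME_W).replace(/\0+$/, "");
    const value = line.slice(NAME_W + 1, NAME_W + 1 + VAL_W).replace(/\0+$/, "");
    return [name, value];
  }

  // parseLine("USERNAME:dotancohen\0\0\0\0\0\0") -> ["USERNAME", "dotancohen"]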


They might not look as pleasant as Markdown, but Org mode files still look good in a plain text editor like Notepad.


>> people want to reinvent it.

Maybe with good reason: if it didn't take off last time, maybe there are hidden stumbling blocks in the specification. Starting fresh might unconsciously bring the right ingredients this time. There is the faint possibility that the specification and implementations were correct, just too far ahead of their time and infrastructure; it's usually worth taking this risk though. And of course, reinventing means owning and deeply understanding.


Nah, VRML by v2 was pretty complete.

We are literally talking about a kind of amnesia and generational gap in knowledge transfer here. I was there for VRML.

You could almost make a 2022 business model redoing already solved problems and pretending you are inventing new solutions.

Technology doesn't always progress linearly; sometimes later solutions are technically worse than older ones due to "don't care" optimization values, lost knowledge, or purely social or political reasons. There are also trade-offs like portability versus performance, I'm not denying that, but IMO much web "innovation" is useless generational churn and busywork… to say nothing of human trends and groupthink.


Maybe because the hardware wasn't ready.

Starting fresh could easily be the reason it's failing again because much time is wasted in reinventing the wheel.

The problem is still the use case not the tooling.


One problem is that no format would solve the "who's in the room with you" problem. You still need some kind of backend for that.

You could see a future though where you go into one portal and you're in Minecraft, and you go into another and you're playing tennis with your buddy in Europe. Games would just be new portals -- the key here is that it's hosted by different companies, not Meta.


Part of the problem was that I couldn't even get people to dig into the specification.

I came away with the distinct impression that people really don't want to admit that XML is good enough.


Thank Java and overcooked configuration files - everyone hates XML because it did too much for what it was being used for. And yet HTML is essentially XML with attached rendering rules and a dialect, but it doesn't garner nearly as much hatred as XML.

For complex object description it excels - as anyone who has used a complex UI description language knows, the 'lite' formats (JSON, YAML, insert hipster conf format here) face a myriad of problems in description that they only somewhat make up for in simplicity.

It is the flaw of misattributing Occam's razor to everything - simplicity is indeed efficient, but it cannot account for complexity, and the world ends up being a complex place.


And they can't admit that the specification was fine - that would force them to effectively admit that there is a very small audience for this.


A little overwrought...

"3d in a browser" sounds like a fairly broad use case. Three.js is fairly popular I hear.


No. It's pure 'not invented here.' If your 3d format can do NURBS it can do anything.


  > If your 3d format can do NURBS it can do anything.
If your 3d format can do NURBS, it can theoretically - given unlimited CPU and memory - _display_ anything. But that doesn't mean it can communicate with other devices over the network with low latency. Or effectively transmit new objects - or people - in your vicinity over the network fast enough to interact with them, especially if arbitrary detail is important; think of a doctor looking into a patient's ear. Or represent moods, emotions, feelings, or even sounds. We don't even know if smells or tastes are next!


Sounds like a solution in search of a problem. But, you're free to add your own extensions on to x3d, if you want to give each part of your sub-assembly a "mood" or "taste" and still have your artifacts be interoperable with the rest of the world.


The 3D format is not responsible for those items. If my proximity to an object triggers things it’s really no different than passing a call to another layer. Think web resources and hyperlink agnosticism


The common piece I see being talked around here is ownership. I bet that, more than anything else, is what drives the pushback. I was there too. People do not want open 3D, because of the "land grab" for the next really big, big thing.

"If only WWW would have charged to make a link...."


People have been floating the idea of 'micro-transactions' since the first days of the commercial world-wide-web. People just don't want to do it. There's a reason why average users made the choice to use the wider internet, instead of things like Minitel.


Totally!

But that won't stop developers and founders from doing gatekeeping management math.


Reuse requires reading. I always figured Meta for a wheel reinventor. I always ignore their outreach.


All this Metaverse buzz also reminds me of Second Life [1][2], which was quite popular for a short period of time, then quickly vanished from the horizon.

[1] https://en.m.wikipedia.org/wiki/Second_Life

[2] https://secondlife.com/


Exactly my thoughts. They even had their own currency, Linden Dollars!


Not 3d, but a big percentage of my career has been in platform groups and leadership, and it still surprises me that most people who are drawn to this type of work seemingly only do so to make a platform or standard from scratch, not actually to align around existing shared solutions.


VRML is a relic from the 90s. Complaining that nobody wants to use it now for modern VR is like complaining that Netflix doesn't use MPEG1. Netflix is just having fun reinventing video compression!


And the reason VRML even exists is that somebody just wanted to have fun reinventing 3D graphics in the 90's, so it doesn't deserve to be used just for the sake of not reinventing the wheel. It's an old shitty wheel that was a reinvention of another invention(*), done just to be fun to implement, but not fun to use.

(*) It's funny because it's true that VRML is literally a reinvention of Inventor, and Open Inventor is a reinvention of VRML.

http://www.verycomputer.com/288_b61771df97de6635_1.htm

https://en.wikipedia.org/wiki/VRML#Standardization

https://en.wikipedia.org/wiki/Open_Inventor


You realize that h.264 was started in the 90s, right?


What's that got to do with anything?


The video compression that Netflix did not invent is H.264, which was started in the 90s. Just like VRML.


I wasn't saying that everything invented in the 90s is now a relic. Lots of technology from then has stood the test of time and is still useful. But VRML is not one of them! It wasn't any good even at the time.


VRML is still used today. Some 3D printers accept VRML as an input format because, unlike STL, it can specify different colors and materials. [0][1] It's the only open multi-material format those machines accept.

[0]https://www.smg3d.co.uk/design_series/objet260_connex1 [1]https://prostir3d.com/en/equipment/43-photopolymers/multi-ph...


I'm surprised 3MF doesn't have it.

The history of 3D formats is littered with weird stuff that doesn't really describe everything and needs separate texture files and metadata.

Meanwhile other things like text documents have embedded thumbnails and all kinds of stuff.

I'd expect 3D to have multi-material plus texture support (so the same file works in VR and 3D printing) by now.


I'm pretty certain that 3MF does have that support. I had ordered STLs for miniatures from Hero Forge before, which I converted to 3MF and was able to paint and texture with Microsoft's 3D Builder. So there is definitely colour and texture support. I'm absolutely certain that physical materials are also supported in 3MF based on examinations and experimentation with the format around the time I was digitally painting my minis.


I remember the hype around VRML when it first became a thing. It really was the same kind of hype surrounding the metaverse today. Turns out that doing most things in virtual 3D space is less efficient than just presenting them in flat 2D. The false belief is that more dimensions = better. Now this belief has materialized again as Zuck’s new baby, and I’m confident it will find the same end.


There is a reason we are still using the mouse and keyboard almost 60 years later (first prototype 1964!).

It just works.


I was going to mention command lines and how they mimic teletypes printing on reels of paper, but you beat me to it by a couple hours.


Or was it because the equipment available at the time was large, bulky and unaffordable?

I find 3D much better when sim-racing in VR, for example. Games are more immersive, and even meetings in VR are less fatiguing than via video conference.


Does the presence of this post indicate that VRML is going to be relevant for ...anything... ever again?

I have an old t-shirt around somewhere from a VRML event in the mid-90's (back when I was tinkering on making dumb little scenes with a cracked AutoCAD)... yay, it's relevant again :)

Personally, I've felt for a long time that as video cards and GPUs have made rendering a buzzillion polygons per second tenable, operating system developers should rethink their attachment to the two-dimensional desktop metaphor that's been the interface for over three decades now. Whenever I suggest this, people tend to knee-jerk on how silly the Jurassic Park scene is with the SGI filesystem navigator (ya, the "It's unix! I know this" scene). Yep, that was the extent of our imagination working within the constraints thirty years ago, but I do believe we can and should do better. But I have little confidence in the capacity of Meta or Microsoft to drive that kind of innovation; the creativity and incentives within those organizations will thwart any breakthroughs.


I feel there are a lot of compelling UI opportunities for non-gaming uses without headsets, eye tracking, holographic hardware, etc. Back when VRML was a thing, navigating a 3D space with a mouse was painful. However, with the graphics rendering, touch displays, and gestures that are commodity hardware now, I feel like we can break out of the icons/desktop/menus paradigm for interfacing with applications and data. In fact, I question whether anything of great utility can come of the fancy new AR, holograms, etc. if we can't even think outside the old 2D box using the rendering, touch, and gesture capabilities that are already cheap and ubiquitous.

The dismissal of 3D as an interface for applications and data is emblematic of the absence of creativity cited in my original comment. Yes, we're accustomed to navigating folder hierarchies and invoking discrete functionality, but outside of that paradigm data can be organized and retrieved more effectively. IOW, when presented with steam and internal combustion capabilities, still wanting only a faster horse is a lapse of the imagination.


Personally, I don't see VR/AR/holograms as an automatic improvement over 2D screens when it comes to UIs. Those techs' primary contribution is making things convincingly 3D and pretty, and non-gaming UIs don't (primarily) exist to be 3D and pretty; they exist to present the actions you can take on a system, display the effects of those actions, and show the overall current state of the system. When you put it like that, it's much less clear why making things 3D is necessarily an improvement.

Exceptions are many. I remember seeing Iron Man and being blown away by the sheer ergonomics of Stark's hologram interfaces. That's because design and engineering of the kind that Stark does are inherently tactile: he grabs holographic models of gadgets in his hand and rotates them as if they are really there. That's insanely powerful. Any kind of design, exploratory work, or education where tactile, haptic, and even thermo-pressure interfaces are superior to traditional interfaces would vastly benefit from AR and the associated technologies.

Another thing is practical crafts where looking back and forth at a screen is impossible or infeasible. Imagine building a circuit while wearing glasses/contact lenses showing a HUD where an annotated schematic of the circuit is displayed, or repairing a car following a graphical step-by-step tutorial (the kind shown in games) overlaid on top of the car: relevant parts of the car glow to get your attention and text appears at exactly the right time to remind you of steps. This is a whole new way of encoding knowledge. YouTube is already a small revolution in the instruction of practical crafts and cooking; imagine how much more procedural, muscle-memory knowledge we can encode with this.

The two examples above are just to point out that I'm immensely excited for cheap and ubiquitous VR and AR, and I have no shortage of fantasies about them. I'm also not saying that making boring UIs with VR and AR wouldn't make them more entertaining; maybe making tweets float around you would really make Twitter users more amused and engaged (a bad thing), although it adds no real functional value. All I'm saying is that last claim: most current UIs wouldn't benefit substantially from VR and AR except for entertainment value. They wouldn't make anything easier or more efficient for the user. What's Excel with 3D tables? A slightly more confusing Excel.

The real general revolution in UI and interfaces is neural interfaces. Even the most basic and primitive neural interface - essentially a vim-like system for composing a few basic mental gestures into re-bindable commands - would be a massive productivity boost. Every time you move the mouse or press the keyboard, your thoughts start and end before your hands have done anything; imagine the raw speed if your thoughts alone were driving the computer.

Forget graphical tutorials; neural interfaces would conceivably allow us to download mental models and physical skills, Matrix-style. I don't think it would be easy and I'm not even sure it's possible, but boy oh boy, is that a fun thought to imagine. It would obsolete VR and AR entirely because you could just reach into people's brains and plant images, audio, haptic sensations, and emotions at will, bypassing the senses and the body completely, and possibly inventing new senses. (E.g., zap your brain with a certain pattern of electricity representing the earth's magnetic field or stock market dynamics; after a while it will fade into the unconscious and you will have a constant gut feeling for the system that the patterns represent.)

Low level access to neurons will be a gateway to wonders and horrors beyond our imagination.


I disagree that there is no practical benefit when you add the creative elements of 3D to the display of traditional 2D content. Reading a line of text is fundamentally the same in whatever format you consume it - but there are serious interactions with that line of text which only become possible in an animated free space. Consider learning to read in a foreign language where HMD eye tracking is used to infer difficulty with a certain word, which triggers additional supporting materials. There are thousands of examples yet to be explored, and the impact on well-established existing 2D information systems will be dramatic. Implementing 3D layers will offer a double win of increased functional utility combined with a nicer, more human-fitting, artfully expressed interface.


You are making points about potential creative applications. BTW, I thoroughly disagree on your language example; comp-learning people are constantly making this mistake. We don't need better visualization to learn foreign languages, it is really all about practice, which unfortunately has nothing to do with your scenario - the 'triggering additional supporting materials' has nothing to do with the user actually practicing. At best you have an overly complex hyper-micro-optimization that will bombard the user with more unhelpful material...

The issue in the parent comment had to do with the user interface, e.g. for an OS - I think the concept is doomed in the general sense. People live in houses, which are mostly reduced to a set of 2D interfaces; having houses in your house is not helpful, it's literally just more confusing.

This is the general issue: if the assumption is that 3D is better than 2D, then we should aspire to do everything in 4D, 5D, etc. Apart from obvious physical limitations, there is a good reason we don't do this. Our computer systems already are n-dimensional - dimensions are useful for storing complexity. We crave simplicity, though; this is why 2D is so popular, we reduce complex n-dimensional models to two-dimensional ones.

As programmers, we even reduce it to a single, textual dimension - being able to follow a single thread is often all we can easily reason about. Many, many people prefer reading or listening to audio over watching pictures - TV shows can be nice to veg out to, but they are much harder and more complex to dig into and really engage with.

That's why there is no good use case for a 3D OS shell: for the majority of people it doesn't provide enough visualization value for the added complexity. To a systems engineer, there could be some value in viewing OS components as parts of a car engine, perhaps; indeed, a lot of useful tooling seeks to visualize this type of stuff as much as possible. But your average Joe just needs email and maybe pictures and video - sticking them in a 3D environment just makes them more difficult to use.


You may have been interested in the Meta glasses back in the day (1). I had high hopes for them - especially the hand tracking and being able to manipulate virtual objects. Alas, it wasn't to be...

1) https://www.youtube.com/watch?v=b7I7JuQXttw


Sounds like GP would be more into the OpenBCI project Galea in partnership with Valve and Tobii [1]

[1] https://www.roadtovr.com/valve-openbci-immersive-vr-games/


The way I learnt about VRML was the FAS website.

As a young teenager, I used to obsess over fighter planes. Then I stumbled upon the weapons & equipment pages on Federation of American Scientists (FAS) website. See [1] as an example. They had WRL files (a VRML format) for most planes in their inventory.

15 years ago that was cool.

[1] https://nuke.fas.org/guide/russia/airdef/mig-23.htm


I vividly remember watching Mir space station deorbiting in VRML in real-time in 2001.

It seemed normal back then, but looking back it is unbelievable how smooth the whole experience was given the state of the internet in 2001.

Edit: Wow, the site still exists! http://www.parallelgraphics.com/vrml/mir2/


Looks wonderful in 2022:

"Each scene is 100KB in size. It may take up to a minute for the scene to load, so please be patient."


"Sorry, this platform is not supported by Cortona VRML browser which is needed to display VRML scenes. "

No recommendations on what is supported though. I'm guessing best viewed in IE at 800x600


I worked for an industry-specific 3D cad company at the time. We wanted to create some kind of web-enabled version of our software circa 2003-2005 (totally guessing at the time frame but seems right). It feels like every choice we made was wrong and the product was never viable. VRML sort of worked with a plug-in but there was insufficient control over navigation (let alone object selection) so you couldn't really build an app "on top of" it. We also had 2D drawings, displaying them as SVG via a plug-in wasn't super good and no way could you do things like allow the user to add a fresh annotation to the SVG interactively. UI? Well, someone had the grand idea to write a mock version of Tk that would output HTML instead of X Windows API calls. That worked about as well as you'd think. Ouch.

I don't know what someone operating at a more expert level would have been able to do; I was entirely self-educated in everything about browsers and had no peers to learn from. As it was, it was an exercise in frustration that was mostly shown at one trade show and then better forgotten.


Don't blame yourself. In 2003 there really were no good options for what you wanted to do.

Java Applets did exist. I think you could do 3D[0] at that time. But given that you were working with a CAD app you were almost certainly dealing with a legacy C/C++ and OpenGL codebase. There was no way to get that into a JVM in 2003 and you would either be rewriting your whole app to fit into a browser, or doing some weird server/client thing where your app just sends bits of Java to run in the browser.

The preferred way to get 2D vectors in a browser in 2003 actually wasn't SVG, though - the go-to was Flash. You absolutely could have written a bunch of ActionScript to drive a user interface and draw out vector graphics. But it couldn't do 3D until way later[1]. Also, the file formats involved were extremely proprietary; the only way to generate SWFs dynamically was to wait a year and pay exorbitant sums for Macromedia Flex.

Director and Shockwave were roughly the same thing as Flash but bigger. Notably it had support for native plugins for itself, called Xtras, which could run in the browser. So if you were really, really willing to get invested in the Macromedia ecosystem you could write your own 3D plugin for Shockwave[2].

Of course at that point if your CAD app had any amount of Windows support, then the easiest way to proceed would be to just ship an ActiveX control with your app in it. ActiveX was literally just COM/OLE, but on the web[3]. Assuming, again, that you already had a native app; this would have been the way to go in 2003. A lot of Windows developers handled web integration by just punting to ActiveX. It pissed people like me off, but it worked well enough.

[0] Related note: This is actually how Minecraft started.

[1] By the time it had both 3D and native code cross-compiling, Adobe had this hare-brained scheme to charge a revshare for using both features at once, which singlehandedly pissed off their remaining users and killed the platform faster than you can say "Thoughts on Flash".

[2] Fun fact: a company called The Groove Alliance actually did that; it was called 3D Groove SX. They then realized this was silly and wrote their own standalone ActiveX control called 3D Groove GX.

[3] ...without any sandboxing


I see someone else was watching LGR

For context, the video by Lazy Game Reviews today showed some early snippets of Windows 95 VRML left on an old VAIO laptop: https://youtu.be/wPVIubVtdRY?t=1693

This video was the first I’d heard of the format, so it doesn’t surprise me to see others giving it a search and further share here on HN.


The thing LGR found is called Sony SAPARi: https://en.wikipedia.org/wiki/SAPARi

It actually has an active fan community believe it or not.


I was so excited about VRML in 1996. I thought it was going to take over the world of molecular graphics -- all we would have to do is write molecule-to-VRML translators, rather than writing free-standing OpenGL applications. It's taken a surprisingly long time to get to the point I thought we would be 25 years ago. Three.js turns out to be what everyone was looking for back then.


Yeah. The hardware requirements were brutal though. I had access to SGIs back in college and they were choking on the 3D aspects for whatever reason.

I remember they had VRML-to-WWW hooks, and navigating those would take you from one VRML world to another VRML world on the web, kind of like a portal. It was a fascinating way to visualize the web (kind of like an open Metaverse) and I'd like to see a return to that, so that one company doesn't rule the Metaverse.


I did a project with VRML around 1999. It was an application for "real time" monitoring of emergency ships, helicopters and other vehicles when they were dispatched to oil platforms in distress within the Gulf of Mexico.

It was a nice toy but really useless at that time, and a resource hog for the computers of that era. Still, it was a fun project.


FTA, under "Criticism":

> Every time VRML practitioners approach the problem of how to represent space on the screen, they have no focused reason to make any particular trade-off of detail versus rendering speed, or making objects versus making spaces, because VRML isn't for anything except itself. Many times, having a particular, near-term need to solve brings a project's virtues into sharp focus, and gives it enough clarity to live on its own.

>

> Clay Shirky


In particular, he compared it to Quake:

>Quake does something well instead of many things poorly...The VRML community has failed to come up with anything this compelling -- not despite the community's best intentions, but because of them.

I can definitely see parallels with the situation today, where despite failed attempts to rebrand virtual worlds as the "metaverse", the closest thing to a success in establishing virtual worlds is kids playing in Minecraft and Fortnite.


Don't forget Roblox. I wonder if an open-source, web-based Roblox could take over the world.

I wonder whether, other than gaming and "hanging out", there would be any practical applications. Modeling, probably.


I once asked Mark Pesce: What about time? What about scripting? "We'll add those later." But you need both from the start. "VR" is not a 3D still-life painting.

One of the most common annoying question lots of people asked Mark at the time was "How does VRML relate to XML?" to which the annoying answer was "it doesn't".

All that relentless XML badgering from XML drones eventually drove Mark to the point of publishing a vitriolic anti-XML diatribe in high dudgeon, based on the premise that Microsoft was going in whole hog on XML, therefore XML was Evil, because Microsoft is Evil, so you shouldn't use XML, because Microsoft ruined XML, and you should use VRML, because VRML doesn't use XML, which is the great thing about VRML.

Might as well combine the anti-Microsoft fervor with the anti-XML backlash of the times to promote VRML.

XML certainly is evil, but not because of Microsoft supporting it!

This annoying misconception about VRML using XML may be why they eventually retronymed VRML from "Virtual Reality Markup Language" to "Virtual Reality Modeling Language" at one point. (YAML went through the same "oops" realization).

He also wrote another epic diatribe dramatically comparing himself and the VRML inner cabal to the true elder gods of old mythology, who after creating the universe and starting it going, were finally going to step aside and let the new kingdom of mankind and their lesser imaginary gods take over their own fate and bla bla bla...

I haven't been able to dig up any archives of whatever mailing list I read that stuff on, but if anybody has some of the old school vrml mailing list archives sitting around, I'd love to have a link or a copy -- it was terrific entertainment!


Ya, Pesce was insufferable. After berating a poor woman for "stealing" the generic title of his own unannounced, unpublished VRML book, I helpfully suggested alternate titles. From memory:

My Life as the VRML Poster Child

VRML - It's Neat!

The Utterly Exhaustive VRML Bible


Time and scripting were added pretty quickly — they're in VRML2/97 — but even static 3D models can be incredibly useful.


Replace "VRML" with everything being done on the web3 space.


In 1997 or so, the LEGO website had a Java applet that let you explore a VRML world for their UFO theme, featuring a NASA space center with a UFO in a hangar and a Saturn-style rocket on the launchpad.

As an adult I have never been able to find any information about this :(


I became known as the tech wiz of my school in 1998 (7th grade) when I made a little 3D forest scene using VRML for a class project instead of a diorama. It was fun, but even then I had to question what the heck this could possibly be used for.


I worked with VRML in '96. It was fun and interesting. Our company even had a couple reps from SGI demonstrate it on an SGI O2 machine. The two companies at the time who were the main proponents were SGI and Sony, though SGI was clearly the primary commercial force behind VRML. SGI even produced a 3D animated "cartoon" twice a week called "Floops". Then one day in 1997 SGI announced they were no longer supporting VRML, and it was over. I personally walked away from it right then.

Was VRML cool? Yes! Was it clunky? Yes. VRML had all the finesse of XML, but it worked. If you could think in 3D, you could create items fairly easily in VRML. On a Windows 95 machine (good heavens!) using Netscape with a plug-in, moving through a VRML world was not fast or smooth, but it was doable. I created a 3D version of my office to show my managers what was possible, and discussed how we might actually do some things with it.

Yes there are posts here indicating VRML is being used today. For all practical purposes however, it's dead and has been for a quarter century.


Frank Zappa had a VRML website of his recording studio called The Utility Muffin Research Kitchen. My computer in the mid-90s was barely capable of rendering it but I remember being in awe of how cool it was.


This gets an upvote merely for citing Frank Zappa in any way


I remember this being somewhat associated with the rise of Java and the adoption of XML as a serialization format for the Java platform. There was an accompanying trend in which XML became the hammer with which to strike all the illusory nails, VR being one of them. I was always impressed by the degree to which things in the XML and web standards space were over engineered and abstract. It makes me think of that quote in which a famous mathematician (can't remember who, Hilbert maybe?) said that the point of mathematics was to avoid saying anything about anything. So too for a certain kind of engineering except we end up avoiding doing anything.


VRML isn't based on XML - the syntax is all curly brackets and spaces. The name was definitely an attempt to ride the XML hype train though. In a way, VRML was to XML as JavaScript was to Java.
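For anyone who never saw it, a minimal VRML 2.0 file looks roughly like this (a from-memory sketch):

  #VRML V2.0 utf8
  Shape {
    appearance Appearance {
      material Material { diffuseColor 1 0 0 }
    }
    geometry Box { size 2 2 2 }
  }

Curly brackets and whitespace - the syntax came from Open Inventor's ASCII format rather than anything SGML- or XML-based.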


Interesting, I had no idea. I stand corrected on that detail. But yeah, my comment was about the sense of hype I remember about certain technologies as they rose to popularity. I remember there was almost this funny sense that XML would solve everything. And it was connected to this broader sense at the time (owing to the dot com bubble) that technology would solve everything. Really interesting to look back on all that through the lens of time.


I still have my autographed copy of Mark Pesce's 1995 book "VRML - Browsing & Building VRML", subtitled "The definitive resource for VRML technology." I am pretty sure I met Mark and got that autograph at a SIGGRAPH meeting in Boston, MA that year. There was so much excitement and energy at the time.

[edit] I came back thinking, "We have got to take advantage of this." For a little while, there was an effort. The bigger players got involved, 'standards' started shifting, and the realities of our business network sunk in. Sigh.


Pesce is one of my claims to a brush with fame (For a very nerdy definition of fame). We were both panelists at a conference about 12-15 years ago, I was a nobody with good marketing, he was Mark Pesce. We had a great discussion about the dangers of technology taking the humanity out of services. And every now and then we still run into each other and have a meal/drink and another great discussion.


I remember how I 'earned' my autographed copy... by asking 'so, how do you suppose we add interactivity to VRML? Binding with Java Applets? JavaScript?'

The world these days does look quite a bit different than it did in the 1990's... and yet some things aren't.


WebAssembly, WebGPU, and WebXR are the new VRML and will pave the way for an open, platform-agnostic metaverse where sites become worlds, hyperlinks become portals, profile pictures become avatars, and profiles become personal homes or spaces others can visit.

Interestingly, Meta's CTO just let slip yesterday about a web version of Horizon:

https://www.theverge.com/2022/4/14/23025899/meta-horizon-wor...


In the future, you won't just have to watch an overly long video or scroll three quarters of the way down a story to find a recipe, you'll have to travel through portals to different worlds!

There seems to be an assumption that everybody is aching to decorate new spaces and show off for its own sake. How often do you visit a space just to check out the space, versus to do something in it? And that's especially true when it's pushed by companies that think it's monetizable, where you've now lost the benefit of not being constrained by real estate and physical material prices. So you've got advantages over video calls for keeping up with people far away, but what else?


> In the future, you won't just have to watch an overly long video or scroll three quarters of the way down a story to find a recipe, you'll have to travel through portals to different worlds!

And then one day Google will create little info cards that are snippets of the recipe VR sites. You'll still have to swoop and fly your way there but it'll be a smaller space.


I honestly want to know what people are smoking. We've had these things for 30 years, with SL and Minecraft and VRChat etc. It's a small niche.


There was some pressure to get SecondLife open-sourced and federated, but IIRC it didn't really get anywhere.

On the other hand, projects like Croquet/Qwak/Cobalt have been open-source for over a decade; are federated/P2P; already have "hyperlink portals" like parent commented; support existing standards like XMPP, VNC, etc.; have been ported to Javascript; etc. And yet, they seem effectively dead :(


The SL client ("viewer") has been open source for a very long time, and there used to be a thriving ecosystem of third-party forks. Nowadays Firestorm [0] is the one everyone uses.

The server software remains closed source. OpenSim [1], a community-driven reimplementation with federation support, has been around for a while but as you say hasn't really gone anywhere since the original wave of metaverse hype died down.

[0] https://www.firestormviewer.org

[1] http://opensimulator.org


> to get SecondLife open-sourced and federated

That's opensimulator


I think Second Life would be much bigger now if they hadn't milked the long tail for so long. They charged based on costs from 15 years prior, and didn't add ways to easily customize things because it would upset the people making money selling 3D assets.

Both of which are really symptoms of the same overall problem: being unfriendly to new users.


It's quite hard to use, that's true, and outdated for today's crowd.

But even the "free version" of SL has minimal traffic: https://opensimworld.com/


This is the bigger list:

http://opensimulator.org/wiki/Grid_List

OsGrid alone has 4600 regions.

https://www.osgrid.org/

It's a true federated system; you can run your own server, you can have portals to other servers, and, for grids which sell things, there are competing payment rails.

Open Simulator has a lot of spaces, but not that many users. Like most federated systems, it's a niche. Works OK, hard to use, few are interested.

Second Life continues to plug along. Right now, 50,844 users are in world. Which is more than any non-game metaverse. (Not sure about Meta; they don't give out numbers, but 20,000 has been mentioned.) MMO games are far bigger; check Steamcharts. The top games are in the millions.

Don't believe any number about a virtual world you can't check from the outside. Concurrent users right now is usually the only honest number. There are systems which claim huge numbers of users, but their definition of "user" is "they, or some bot, put an email into the signup form."

The "metaverse" hype has produced a bit of growth, as people find out that most of the hyped systems are either nonexistent or very low-rez. Usage is very low. Decentraland is around 1,000-2,000 concurrent users. They got up to 2,600 once. Cryptovoxels is smaller. So is Sandbox. Somnium Space is in single digits.


That list is horribly outdated - many of the grids in the list don't exist anymore.

Most people these days run their own grid through the DreamGrid installer. That actually increases fragmentation.

And then the testing grid, OSgrid, is down for multiple days every few weeks, which alienates and frustrates new users. Overall activity is down, despite the pandemic. My experience is that it's very, very hard to keep users interested because it lacks a critical mass of people. It also suffers from very high levels of drama.

I find it very hard to believe that Decentraland will keep such high numbers for long; it has received enormous media coverage even though it's borderline a scam (imho). OpenSimulator has not received any of this.

I don't think metaverses should be compared with MMO games - they are different things and attract non-overlapping crowds.


Browser VR has been doable for years already. Three.js has been the de facto standard API. I have worked at a bunch of places that genuinely tried really hard, with some actual practical, commercial uses for interactive 3D experiences, and none of them stuck.
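For reference, that de facto standard boils down to something like this (a minimal sketch in TypeScript; assumes the 'three' package is installed, nothing specific to any of those commercial projects):

  import * as THREE from "three";

  // Renderer attached to the page, filling the same role a VRML plugin used to.
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(window.innerWidth, window.innerHeight);
  document.body.appendChild(renderer.domElement);

  const scene = new THREE.Scene();
  const camera = new THREE.PerspectiveCamera(60, window.innerWidth / window.innerHeight, 0.1, 100);
  camera.position.z = 3;

  // A spinning box: roughly the demo every VRML browser shipped with.
  const box = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial());
  scene.add(box);

  renderer.setAnimationLoop(() => {
    box.rotation.y += 0.01;
    renderer.render(scene, camera);
  });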


The Wikipedia example shows something kind of JSON-like but originally VRML was like HTML (SGML).

Something similar to that is A-Frame VR, which is built on WebVR.
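An entire A-Frame scene really is just HTML, something like this (a sketch; the script URL points at whichever A-Frame release you pick):

  <html>
    <head>
      <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    </head>
    <body>
      <a-scene>
        <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
        <a-sky color="#ECECEC"></a-sky>
      </a-scene>
    </body>
  </html>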

There used to be a way with at least one VR-enabled browser to navigate to another site without exiting VR (if the other site supported VR). Not sure any still support it. I think this may be because it goes against walled gardens' interests. https://github.com/immersive-web/navigation#api-proposal


https://en.wikipedia.org/wiki/X3D seems to be the new VRML.


These are the building blocks on which the metaverse will be based, not the new VRML itself.


I miss the tooling for VRML. I did a lot of modeling in VRML back then, but gradually all the tools have stopped working. I particularly miss the Cosmo editor and browser. It had a fantastic UI, but was lost in buyouts by incompetent Adobe wannabes. Also, blaxxun disappeared. I found it a very easy language to prototype 3D in.

Now when I look at A-Frame, I see them just reinventing a square wheel. "Oh, it is like VRML, just worse in every way?" Sigh. Nowadays I prototype in Roblox instead, but it's not the same.


An interesting entry that links to this:

https://en.wikipedia.org/wiki/Technopaganism


Yes, Technopaganism was Mark Pesce's and Owen Rowley's "thing". ;) See my previous comment about his talk about elder gods, ya ya ya, etc. Those were some extremely weird years indeed.

I never ran across this delightful Wired article about all that stuff until today:

https://www.wired.com/1995/07/technopagans/

>Wired Magazine: Jul 1, 1995 12:00 PM: Technopagans, by Erik Davis

>May the astral plane be reborn in cyberspace

>"Without the sacred there is no differentiation in space. If we are about to enter cyberspace, the first thing we have to do is plant the divine in it."

>-Mark Pesce

>Mark Pesce is in all ways Wired. Intensely animated and severely caffeinated, with a shaved scalp and thick black glasses, he looks every bit the hip Bay Area technonerd. Having worked in communications for more than a decade, Pesce read William Gibson's breathtaking description of cyberspace as a call to arms, and he's spent the last handful of years bringing Neuromancer's consensual hallucination to life - concocting network technologies, inventing virtual reality gadgets, tweaking the World Wide Web. Long driven to hypermedia environments, the MIT dropout has now designed a way to "perceptualize the Internet" by transforming the Web into a three-dimensional realm navigable by our budding virtual bodies.

>Pesce is also a technopagan, a participant in a small but vital subculture of digital savants who keep one foot in the emerging technosphere and one foot in the wild and woolly world of Paganism. Several decades old, Paganism is an anarchic, earthy, celebratory spiritual movement that attempts to reboot the magic, myths, and gods of Europe's pre-Christian people. Pagans come in many flavors - goddess-worshippers, ceremonial magicians, witches, Radical Fairies. Though hard figures are difficult to find, estimates generally peg their numbers in the US at 100,000 to 300,000. They are almost exclusively white folks drawn from bohemian and middle-class enclaves.

[...]


X3D seems to be the modern continuation of VRML

https://www.web3d.org/x3d/what-x3d


As far as I can see, X3D has almost lost out to glTF [0], which is closer to the hardware and thus provides more rendering options [1]. Though X3D is sometimes seen as a "higher-level abstraction" [2], it's questionable whether this makes it any easier to approach. It does allow for things like scripting, audio, video, and interactivity pretty natively, and can include glTF within it, but I'm not sure how many applications really go down this route.

A good example is the 3D Tiles specification [3][4], which is fast becoming a default for the web and is basically glTF wrapped in JSON.

[0] https://trends.google.com/trends/explore?date=all&q=x3d,gltf

[1] https://realism.com/blog/gltf-x3d-comparison

[2] https://www.web3d.org/blog-integrating-x3d-and-gltf

[3] https://docs.opengeospatial.org/cs/18-053r2/18-053r2.html

[4] https://github.com/CesiumGS/3d-tiles/tree/main/specification


Huh. Hadn't heard about glTF, thanks.

Start with VRML-97's syntax, add a bunch of unnecessary quotes and colons, remove useful stuff like DEF/USE, and voila! you have glTF.

At least it's not X3D.


I played around with VRML a bit around 1999/2000-ish. I wrote a tool which parsed the Gliese star catalogue, constructed a Delaunay triangulation, and then plotted the resulting star map in VRML. I also built castles with my then-girlfriend; she did the art direction, and I did the coding. I remember VRML 2's prototypes being very useful for that, because you could define a tower or a wall section once, then stamp out copies of it.
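For readers who never used VRML 2, the prototype mechanism being described looked roughly like this (a from-memory sketch, not the actual castle code):

  #VRML V2.0 utf8
  PROTO Tower [ field SFVec3f position 0 0 0 ] {
    Transform {
      translation IS position
      children [
        Shape {
          appearance Appearance { material Material { diffuseColor 0.7 0.7 0.7 } }
          geometry Cylinder { radius 1 height 6 }
        }
      ]
    }
  }
  # Stamp out copies of the prototype at different positions.
  Tower { position 0 0 0 }
  Tower { position 10 0 0 }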


In 2000 I did an internship at a Dutch media company and created a 3D VRML video site. The interesting thing for me at the time was that the server composed the final VRML by combining a template with database results, similar to PHP. I have no idea how common this was, but I was pretty proud of it at the time :) Unfortunately, VRML plug-ins were not that great - even worse when I needed them to play real video - so it was never deployed.


You could do the same thing today with A-Frame VR.


Oh yeah, I remember the VRML plugins with their horrendous navigation solutions. POTS Modems were also insufficient to reasonably stream scenes of any complexity. It was a technology about 10 years ahead of its time.

I always considered the successor to be SecondLife. I guess the Metaverse is trying to be the successor to SecondLife, but I think they're trying to solve the wrong problems with it and are not likely to be successful.


One thing I learned in that internship is that 2D is way more useful than 3D environments for navigating most information. Only when the information itself is 3D does it become more valuable, and even then a projection to 2D might give a better overview.

Games have done Metaverse-like 3D multiplayer environments for years now, but always combined with gameplay or construction elements. I don't really see the point of going to a 3D chat environment if there are no work or entertainment benefits that are clearly superior in 3D.


Funny to see this here. More than twenty years ago, I did my Master's thesis in Architecture using VRML to display a building I had designed and the surrounding district.

I especially loved the level-of-detail node, which lets you provide different geometry depending on the distance between the object and the viewpoint.

I still have the files somewhere, but last time I checked, I found no software to read them.


Back in 2014 I discovered Go can produce dot files of your dependency trees...

...and graphviz can produce VRML...

Lots of other garbage tweets in the mix, sorry: https://twitter.com/search?q=(from%3Aschmichael)%20until%3A2...


Also: https://en.wikipedia.org/wiki/CyberTown

20 years later we're finally somewhat there with VR Chat and Horizon.


Related - and to add to the comments naming the superseding standard - OpenXR:

https://wikipedia.org/wiki/OpenXR


Trying to find any live examples or sample code I ended up with this:

https://www.khronos.org/openxr/ => static const XrDebugUtilsMessageTypeFlagsEXT XR_DEBUG_UTILS_MESSAGE_TYPE_CONFORMANCE_BIT_EXT = 0x00000008;

This is conceptually at the other end of the spectrum from VRML, and it's not clear whether 3D rendering in the browser is even one of its goals.


No, actually it's WebGPU, WebAssembly, and WebXR.


VRML is one of the various standards I got into for a short time, as it was going to be the next big thing, only to watch it fade into obscurity. The list includes, but is not limited to, SMIL for multimedia, XSL transforms for documents, Semantic Web XML namespaces including Dublin Core, OPML/RSS/Atom for blogging, XMPP for chat, and SOAP for web services. I could go on.

All that knowledge is just sitting there waiting for the day when it's needed again. Any day now.


When I was young, one of the only "real" computer books my local public library had was on VRML. I still remember it quite fondly.


Flashpoint (the Flash archival project) has a library of VRML content.

https://bluemaxima.org/flashpoint/platforms/


I played around with VRML in college.

My biggest learning was how loading/rendering times impact overall user experience.

My second biggest learning was in how quickly a new tech can completely vanish and take all your work with it.


So much fun back in the day.


Even more fun: https://threejs.org/


I wrote my own 3D library in JS before that library came out.

I am so looking forward to doing projects with WebGL.


It wasn't fun, it was borderline a scam - the Web3 of its time.


The metaverse is what VRML tech was aspiring to back then, but standard dial-up internet, early web browsers, and PCs could not deliver immersive realtime 3D. It was obviously decades ahead of its time. I recall needing to install a web browser plugin (I tried a few); then the VRML scene file would take a while to download (dial-up) and then draw an ugly 3D scene in the browser. The mouse navigation was horrible. No collision detection, of course, so you could easily fly through the floor, end up upside down, walk through objects, etc. No sound. No chat. No multi-user. At least it had level-of-detail optimization so processing wasn't wasted on distant details. Then you'd click on a link in the scene and it would either load a different 3D scene or jump you back out to a plain old 2D web page in the browser. And of course you'd be looking at all this on a plain old 17-inch CRT screen (no VR headset). I see my Oculus collecting dust and I feel VR tech has improved a lot, but still, meh.


I used VRML for my thesis work where I was visualizing an energy surface in a 6 dimensional space and understanding the bifurcations where it would get holes, start to wrap around the space, break into separate pieces, etc.

I had all sorts of complaints around what VRML couldn’t do at that time, particularly how you’d inevitably get your back turned to what you were supposed to look at, have a hard time turning around, and the people developing the world don’t care.

Looking back with insight from ‘the metaverse’ I see the problem as a lack of storytelling or the facilities for storytelling.

The idea that you’re going to bum around in some virtual world and indulge your narcissism in parallel to how you indulge your narcissism in the real world still appeals to those who want to subject us to ‘experiences’ even if it doesn’t for end users.

I started studying theme park design a few months back because I was interested in seducing people (something you could succeed at 100% of the time if you had 100% control of the environment). The metaverse fad came up and it gave me a good mental model of what's missing in the metaverse… storytelling!

Many people in 2022 are inclined to accept that you can have fiction without storytelling because we're so used to fiction that is driven by characters and setting (say, Star Wars or My Little Pony: Friendship Is Magic). Somebody even pointed out to me that those Victorian novels I couldn't stand in high school were part of this trend.

At some point people will realize that 'the line is going up' (NFTs) isn't a good enough story, but until VR designers adopt storytelling ideas from theme parks, people will be snoozing.


"storytelling ideas from theme parks"

Videogames call this "environmental storytelling" and it's a really solid way of constructing a narrative in such spaces. The reason why the current or prior generations of "metaverse" technologies don't do this is because they're not trying to be a narrative work. They're trying to be "HTTP for 3D worlds" - in other words, hoping someone else will come in and build the narrative on top of their platform so they can charge the creators a fee to publish their own work.


They're trying to be "HTTP for 3D worlds", yet they're not really protocols - they're more like GeoCities or MySpace.

If I was going to use the metaverse, I'd want to have a button "New world" that would essentially create a whole new world similar to decentraland, where I could choose the size of this world.

None of the metaverse platforms allow you to create your own world, your own 3d website. Instead they try to cram you into their world, which is made to be pretty small on purpose to keep "land" prices high.


VRML never had facilities to define meaningful interactions comparable to what a real video game engine like Unreal Engine can do, and I think the current "metaverse" platforms also lack that.

A run-of-the-mill game like Sword Art Online: Fatal Bullet has meaningful interactions with the NPC characters in the game. Mario & Luigi: Dream Team has NPC crowds that shower you with admiration.

Capital One (Progressive Insurance, AT&T, ...) establishes an emotional connection with me through actors who play characters in television commercials, operate themed stores, etc.

They aren't going to move to the metaverse until they can give an experience at that level.

Single player video games succeed at this.

Fiction (Sword Art Online, Ready Player One, The Matrix, Disneyland) tells us clearly what the metaverse is: you share the space with players and NPCs.

The will has to be there but the technology isn't ready yet.


So... that leads into another conceptual objection to "metaverse".

NPC characters are almost always a handful of specific text lines, triggers, and flags cobbled together into something that's "interactable" only within the limited confines of that specific game. The NPC will not work outside of that specific game scenario, and often can be broken within that context.

Furthermore, a huge chunk of the effort to make a good single-player game is specifically the scenario and NPC design, which as mentioned above only works in the context of the other. None of this cost is being accounted for in any of the plans people have for "the metaverse". At best, there's handwaving about running a platform where players sell each other paid mods for an existing game. This is not an adequate system for funding the creation of worlds and items, IMO.

The people pushing for metaverse want something that isn't a videogame, but is wearing the skin of a videogame.


You should have a look at the emerging field of Immersive Analytics [1], which is re-imagining data visualization to make use of new 3D capabilities of hardware and software that did not exist back in the VRML days.

[1] https://www.monash.edu/it/hcc/ialab


This is good insight. Storytelling over function.


I wouldn't judge all VRML-enabled software the same, as they were not all terrible. For example, back in the mid-to-late 90s (and for several years in the 00's) there was a free (client) product called OnLive! Traveler (later DigitalSpace Traveler) that utilized VRML 1.0 with a few additional, custom VRML node types.

It was multi-user, it had a wide variety of customizable avatars that could express emotion and featured lip synching to the user's mic audio, it had 3D spatial user and environmental audio, and it all worked really well over dialup (even with 28.8kbps) even on first generation Pentium PCs. Movement was done with "FPS"-style keys (cursors, ALT for strafing, etc.). MTV would host regular live events in-world and there were a number of other large companies that were involved as well.


Did people use to lose all their savings on VRML?


I saw Jaron Lanier speak about VRML at my school in about 1997. It sounded so futuristic and so far out of reach for day-to-day. Then I forgot about it for a few decades and, well, here we are.


I remember seeing vrml sites in 96 or 97 and thinking I was so far behind because I couldn't figure out how to do it.


First saw this in the 90s on an SGI Indy.

Here's a fun blog piece about the launch of SGI's WebForce content creation suite, which included tools for VRML.

https://therealmccrea.com/tag/vrml/


Virtual Notre Dame fly through as a Windows executable


> It has been superseded by X3D.




