
> Also, the timing of their Nov. 13 announcement is pretty bad. There is already chatter that AI may be a bubble bigger than the dotcom bubble. For a company that doesn't have deep pockets, it would be prudent to take the back seat on this.

Unless Mozilla plans to spend millions on cloud GPUs to train their own models, there seems to be little danger of that. They're just building interfaces to existing weights somebody else developed. Their part of the work is just browser code and not in real danger from any AI bubble.


It could still be at risk as collateral damage. If the AI bubble pops, part of that would be actual costs being passed on to users, which could lead to dramatically lower usage, which could in turn make any AI integration irrelevant. (Though I'd imagine the financial shock to Mozilla would be much larger than just having some code and design made irrelevant, if Mozilla is getting more financially tied to the stock prices of AI-related companies?)

But yeah, Mozilla hasn't hinted at training up its own frontier model or anything ridiculous like that. I agree that it's downstream of that stuff.


If they just use third-party APIs/models and the AI bubble pops, the number of people using AI in FF will not change.

The upstream might earn less, and some upstreams might fail, but once they have the code, switching to a competitor or to a local model isn't a big deal.

That being said:

"This could've been a plugin" - actual AI vendors can absolutely just outcompete FF; nobody is going to switch to FF for slightly better AI integration - and if Google decides to do the same, they will eat Mozilla's lunch yet again.


The bubble, if any, is an investment bubble. If somebody likes using LLMs for summaries, or for generating pictures and such, that's not going anywhere. Stable Diffusion and Llama are sticking around regardless of any economic developments.

So if somebody finds Mozilla's embedded LLM summary functionality useful, they're not going to suddenly change their mind just because some stock crashed.

The main danger, I guess, would be long term: if things crash at the point where they're almost useful but not quite there, then Mozilla would be left with functionality that's not as good as it could be, and with little hope of improvement, because they build on others' work and don't make their own models.


What's with all this "compressed capacity"? Does that apply anywhere tape is used these days?

I mean, if you're backing up multiples of 36 TB of anything, I would guess that most of it is already compressed.


When you try to compress already highly compressed or random data, the size expands.

At least the LTO tape drives I have used will disable compression adaptively when the compressed size would be larger.

As tape read and write speeds depend on data size, it is still worth the effort to opportunistically compress data on the drive.

As this can usually be done without stopping or slowing the tape, there really isn’t much of a downside.

As for the compressed capacity, that is just 30+ years of marketing convention, which people just ignore, as it has always assumed your data is 2:1 compressible.
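
For the curious, the adaptive part is conceptually simple. A rough sketch in C, using zlib's DEFLATE as a stand-in for SLDC (the real thing runs per-codeword in the drive hardware; pack_record and its one-byte flag are made up purely for illustration):

    #include <string.h>
    #include <zlib.h>   /* link with -lz; stand-in for the drive's SLDC */

    /* Compress one record, but store it raw if compression would
       expand it. Caller must provide out_cap >= in_len + 1 so the
       raw fallback always fits. */
    size_t pack_record(const unsigned char *in, size_t in_len,
                       unsigned char *out, size_t out_cap)
    {
        uLongf clen = out_cap - 1;          /* reserve 1 byte for the flag */
        if (compress(out + 1, &clen, in, in_len) == Z_OK
                && (size_t)clen < in_len) {
            out[0] = 1;                     /* flag: compressed */
            return (size_t)clen + 1;
        }
        out[0] = 0;                         /* flag: stored raw */
        memcpy(out + 1, in, in_len);
        return in_len + 1;
    }

Already-compressed or random data fails the clen < in_len test and goes to tape untouched, which is why the drive never loses capacity to incompressible input.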


2.5:1 now, apparently. Showing my age - I had to go look, because the last time I had anything to do with LTO it was still 2:1. Guess they got Pied Piper to update the SLDC spec ;).


Native capacity is IIRC 30TB - they quote compressed capacity, but that's very much going to depend on what you are storing and how compressible it is.

And you'll have a rough idea of what you are going to be storing and how compressible it is if you're spending that kind of money.

It's marketing, and a little skeezy to quote, and I bet they have some justification for how they arrived at 2.5:1 compression.

EDIT: Yeah, it's 30TB native - it's been many years since I had anything to do with LTO, but they use a modified version of LZS called SLDC, so that's what they assume will get 2.5:1 on "random enterprise data that isn't already compressed". The 2.5 threw me as well, because that used to be 2:1, so either they improved SLDC or thought they could wing it - looks like the switch happened between LTO-5 and LTO-6.


File compression requires additional storage, memory, and processing power. Why bother if the tape appliance already handles it? Data is unusable in compressed format and is hard to deduplicate. Also, there is often already compression at the storage-array level, but the data is decompressed when read.


What data is unusable in compressed format?

Images, videos, music are compressed. RAWs from my camera are compressed too. Even log files tend to be compressed.

What else do people store that would amount to multiples of 30TB and not already have some form of compression?


Databases and their transaction logs, or operating system files, just to name a couple. Tape backups are not for home-labbers.


LTO is normally used with high-end backup software like Commvault that compresses and dedupes backups before writing them to tape.


Where I think TUIs had a niche GUIs don't quite reproduce is in the very particular way DOS TUIs processed input.

An old school DOS TUI reads keyboard input one character at a time from a buffer, doesn't clear the buffer in between screens, and is ideally laid out such that a good part of the input is guaranteed to be fixed for a given operation. They were also built without mouse usage.

So an operator can hammer out a sequence like "ArrowDown, ArrowDown, ENTER, Y, ENTER, John Smith, ENTER" and even if the system is too slow to keep up with the input, it still works perfectly.

Modern GUIs almost never make this work anywhere near as well. You need to reach for the mouse, input during delays gets lost, the UI may not be perfectly predictable, and sometimes the UI may even shift around while things are loading. Linux doesn't reproduce it either; I find the user experience on DOS was far better than with ncurses apps, which have all kinds of weirdness.
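
A minimal sketch of why the type-ahead works, assuming a DOS-era compiler with conio.h (the BIOS keeps filling its keyboard ring buffer while the program is busy, and getch() consumes it one keystroke at a time):

    #include <conio.h>   /* DOS-era compilers, e.g. Turbo C */

    int main(void)
    {
        for (;;) {
            int ch = getch();          /* blocks only if the buffer is empty */
            if (ch == 0 || ch == 0xE0) /* prefix byte: arrows, function keys */
                ch = 0x100 + getch();  /* second byte identifies the key */
            if (ch == 27)              /* ESC quits */
                break;
            /* dispatch ch to the current screen's handler here;
               keystrokes typed while the last screen was still
               drawing are waiting in the buffer, in order */
        }
        return 0;
    }

Because nothing ever clears the buffer between screens, the "ArrowDown, ArrowDown, ENTER, Y, ENTER" sequence is consumed exactly as typed, however slow the machine is.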


> I've been patiently waiting for their resurgence. Building embedded/mobile devices is their forte

Wouldn't most of those people have gone elsewhere by now? If you're a mobile device superstar, why would you stick around at Nokia once the mobile device part of it crashed and burned?

A company's just a legal structure; the people change over time. And that was more than 10 years ago - didn't they sell their mobile division to Microsoft?


It says it right there:

"Note: The ByteBool, WordBool, and LongBool types exist to provide compatibility with other languages and operating system libraries."


This sometimes also causes problems for the authorities themselves for a change.

I recall some TV program long ago mentioning the police had trouble with Russians because sometimes they think there's a whole gang and it's really just one guy whose name got corrupted in 5 different ways.

Depending on the Russian name and the local language, there can be many ways to screw things up. Elena might get written down as Helen in one place and Lena in another. And that's just for perfectly ordinary names.


It's not even necessarily corruption; we address each other by different names depending on context.

To acquaintances, I might be a Pavel; to close friends, I might be Pasha. To my mom, I'm Pavlik. In a business or other more formal setting, I would be Pavel Dmitrievich.

I think it's a common complaint when reading Russian novels: non-Russians get confused about who's who because of these types of shifts. And it totally makes sense; at least my various nicknames start with the same letter, but many Russian "short" names don't particularly resemble the full name. Who would expect Aleksandr to be Sasha, if you didn't grow up in the culture?


To be fair, that can be a problem with English too. The short form of Robert is Bob, the short form of Richard is Dick.


There was a hilarious one in Ireland, where we were desperately searching for a prolific Polish criminal named "Prawo Jazdy" - which means... driving license.

https://www.joe.ie/news/garda-spent-two-years-searching-for-...


Same in Germany; the "guy" was wanted for hundreds of offenses.


The other logo is a fox: https://knowyourmeme.com/memes/xenia-linuxfox

Kinda wish that one had won; foxes are cooler-looking and more marketable.


Foxes are also overused, and I consider myself fortunate not to have come across the Tux-the-penguin with a 'trans flag' - follow the link if you wonder what I mean - nor do I rue the absence of any furry-like characteristics in the toy penguin. This furry fox seems to be just that: a generic anime-like furry avatar, one of thousands, and as such not memorable.


> Why miss the comments? Is it a video sharing platform or social media?

Because Nebula has a lot of complex content. Things like history, science, making stuff.

And those things have a lot of room for things like the maker messing something up, or struggling with something, or not explaining something properly.

On Youtube, if somebody makes an obvious mistake, or is obviously incompetent to an expert's eye, somebody will point it out. If a hobbyist doesn't quite have the skills to do a thing, sometimes an expert will show up and help them. If an educational video doesn't include crucial details, somebody will ask.

Like look at say, Inheritance Machining or Alec Steele on Youtube, who take on challenging projects they struggle with and often get advice from expert viewers.

It's weird not to have this on Nebula. On one hand it seems to sell itself as "smart content", on the other hand it's a return to the old TV model of "shut up and consume".


I have to agree; it's a really weird choice not to have comments.


I get why; the comments section can be a terrible place on YouTube, so it makes an intuitive amount of sense. But surely the paywall keeps the low-quality comments out?


HDMI requires paying license fees. DP is an open standard.


As far as things I care about go, the HDMI Forum’s overt hostility[1] to open-source drivers is the important part, but it would indeed be interesting to know what Intel cared about there.

(Note that some self-described “open” standards are not royalty-free, only RAND-licensed by somebody’s definition of “R” and “ND”. And some don’t have their text available free of charge, either, let alone have a development process open to all comers. I believe the only thing the phrase “open standard” reliably implies at this point is that access to the text does not require signing an NDA.

DisplayPort in particular is royalty-free—although of course with patents you can never really know—while legal access to the text is gated[2] behind a VESA membership with dues based on company revenue—I can’t find the official formula, but Wikipedia claims $5k/yr minimum.)

[1] https://hackaday.com/2023/07/11/displayport-a-better-video-i...

[2] https://vesa.org/vesa-standards/


See, the openness is one reason I'd lean towards Intel ARC. They literally provide programming manuals for Alchemist, which you could use to implement your own card driver. Far more complete and less whack than dealing with AMD's AtomBIOS.

As someone who has toyed with OS development, including a working NVMe driver, that's not to be underestimated. I mean, it's an absurd idea - graphics is insanely complex. But documentation makes it theoretically possible... a simple framebuffer and 2D acceleration for each screen might be genuinely doable.

https://www.x.org/docs/intel/ACM/
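
For scale, the "simple framebuffer" part really is mostly arithmetic once the docs tell you where the linear framebuffer lives. A hedged sketch (put_pixel is hypothetical; the fb address and pitch would come out of the programming manual or firmware, assuming a 32bpp linear mode):

    #include <stdint.h>

    /* Plot one pixel into a linear 32bpp framebuffer. pitch is
       bytes per scanline, which may be wider than the visible
       width for alignment reasons. */
    static inline void put_pixel(volatile uint32_t *fb, uint32_t pitch,
                                 uint32_t x, uint32_t y, uint32_t argb)
    {
        fb[y * (pitch / 4) + x] = argb;
    }

The hard part is getting the card into that mode in the first place, which is exactly what manuals like these document.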


I'm not 100% sure, but the last time I looked it wasn't openly available anymore - it may still be royalty-free, but when I tried to download the specification, the site said you now had to be a VESA member to download the standard (it is still possible to find earlier versions openly).


What about the B60, with the 24GB VRAM?

Also, do these support SR-IOV, as in handing slices of the GPU to virtual machines?


SR-IOV is allegedly coming in the future (just like the B60).


It’s sort of out there but being scalped by AIBs.

