Yeah, we're still in the money-raising phase. This should get interesting when investors start looking for an exit. Maybe by then everyone will have forgotten how the SPAC craze turned out.
Of course there are tons of useless PR parasites. Shareholders will demand to see a "response" or a "strategy" or an "answer" or an "AI play". If all you make is toothbrushes, you just have to milk the users the old-fashioned ways: hardware purchase, subscription, and data sharing.
That said, I know of ANOTHER toothbrush company that has an "AI play"... :(
If in 10 years all we have are better chat bots and image generators, I’d say it was a bubble, and I don’t see anything that says that’s definitely not the path (though I’m not in the weeds of AI, so maybe it’s just not obvious, yet).
By 2025 the majority of applications will use AI in some way (mostly to allow for sloppy user input); in 5 years there will be no non-AI applications.
For example, in healthcare (because... day job), you will be interacting with an AI as the first step for your visits/appointments, AI will work with you to fill out your forms/history, your chart will be created by AI, your x-ray and lab results will be read by AI first, and your discharge instructions will be created on the fly with AI... etc. etc. etc. This tech is deploying today. Not in a year, today. The only thing that's holding it up is cost and staff training.
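To make "created on the fly with AI" concrete, here's a toy sketch of the kind of integration I mean (assuming the openai Python client; the model name, fields, and prompt are placeholders, not any vendor's actual product):

```python
# Toy sketch: drafting discharge instructions from structured visit data.
# Assumes the `openai` Python client and an OPENAI_API_KEY in the environment;
# model name, fields, and prompt are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

visit = {
    "diagnosis": "acute otitis media",
    "medications": "amoxicillin 500 mg three times daily for 7 days",
    "follow_up": "primary care in 10-14 days",
}

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": "Draft patient-friendly discharge instructions at a "
                       "6th-grade reading level. A clinician reviews and signs "
                       "off before anything reaches the patient.",
        },
        {"role": "user", "content": str(visit)},
    ],
)

print(response.choices[0].message.content)  # draft for the clinician to edit
```

The model call is the easy part; the review/sign-off workflow around it is exactly where the cost and staff training bottlenecks come in.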
You gave examples of how chat bots are going to be more widely used. Nothing more. So far I don’t see any examples that aren’t overpriced efforts at “shoehorning a chat bot” into something.
Like, why will a hospital pay for a bunch of chat bot integrations when it’s likely my ChatGPT phone app will be able to view the form via the camera and AirDrop or email it? Meaning, I still see no examples of why OpenAI isn’t the Bitcoin of the crypto bubble (one use case, with one winner).
You say the only things holding it up are:
- Cost
- Training
Which can be said of any business that’s ever existed. So why is AI different?
So instead of just pointing me to non-existent Quicklisp packages, I can have a bot read the junk in my patient chart and hallucinate answers to pressing health care questions? I can't tell if this is a proposal or a threat.
As with previous AI hype waves, real transformational change happening does not contradict there also being excessive hype and a bubble which eventually bursts.
OpenAI are kinda secretive about the inner workings of ChatGPT. It's accidental, not in the sense that anything bad happened, but in that they didn't do it on purpose.
Because it's very, very hard to ensure modern computers don't have storage integrated into some unexpected part. The on-board UEFI flash can hold a pretty significant amount of data. All kinds of other devices can have storage in them too. Rather than try to ensure everything has been erased, they grind up the whole thing.
So I don't believe the NSA are out to get me, if I did I'd want something a bit more thorough. My main "threat model" is someone picks up the drive from the trash and tries to read it.
However you're wrong about this. As we know from when capacitors and batteries leak, the acid or alkali in those can get inside chip packaging by being wicked through the leads, which will destroy them thoroughly from the inside as well.
If I found an SSD that someone incompetently attempted but failed to destroy, it would DEFINITELY make me much more curious about trying to recover data off of it.
The mild phosphoric acid in your cola beverage is nowhere near strong enough to cause corrosion inside of chip packaging, lol. I doubt that it is even enough to remove a significant quantity of the anodic plating on the chip leads to permit the base metal to be attacked.
I don't really get the military part tbh. When I went to the army back in my country, where it was only mandatory for men (but they always love preaching about equality), many of the men were such slobs that random women would have been much better suited for it.
So, in your words "equality" does not mean "everything is equal" and you mention "military" as an example. Did I get that right? You think that requiring half of the population - purely by gender - to perform forced labor for 12 months... is an example of "equality"? I would say it's a rather blatant example of inequality!
Google can afford to run this model (a bigger one, actually) in their search results when I don't need or want it. Why shouldn't they run this tiny demo backed by their least expensive model? The total cost of running the project for its entire lifespan is probably less than the value of two weeks of a single engineer's time.
It seems like Google, of all companies, can afford to let prospective developers at least try the thing they want you to pay for. I think they have the dev power to sufficiently rate limit as necessary.
Frankly, the way Google has been advertising their AI offerings shows they don't care about the consumer market. I hear great things about their LLMs, but for some reason I've never tried them, and most people haven't either.
This is a small group within Google doing experiments; they're probably trying a different approach after the MusicFX app experienced heavy usage and they had to limit users. There was no way to supply your own API key for that one.
You can easily try Gemini for free in a number of other places.
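And with your own API key it's only a few lines (a minimal sketch assuming the google-generativeai Python package; the model name here is just an example):

```python
# Minimal sketch of calling Gemini with your own API key.
# Assumes `pip install google-generativeai` and a key from Google AI Studio.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")
model = genai.GenerativeModel("gemini-1.5-flash")  # example model name

response = model.generate_content("Suggest a four-bar chord progression in C major.")
print(response.text)
```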
OK, for context:
- You had ST3 with a license, and maybe plugins.
- ST3 got regular updates to new minor versions.
- They pushed the ST4 update like the other minor version updates.
- It destroyed the whole ST IDE setup (plugins didn't work for ST4).
- And the ST3 license couldn't be used for ST4 (you only got an ST4 license if you'd bought ST3 within some months before).
If I used ST for private stuff that would be annoying, but as I use it for my work, that was money.
Sorry that happened to you. We've since improved our updater to avoid this happening again, but your loss of trust is understandable. Hopefully downgrading back to ST3 was not too big of a hassle.
ST4 has full backwards compatibility for plugins, keeping the same Python 3.3 that ST3 shipped with, so I'm surprised your setup broke. ST3 licenses were also fully transitioned to ST4, so any license less than 3 years old covered the initial ST4 release.
Thank you for making such good software and being decent people. Sublime Text is my go-to example of a small team making quality software that puts people first. I'm very grateful.
(I'm working from memory, so I may bungle details)
Speaking personally, I enjoyed Sublime Text, so I was happy to pay for a license back in the day ($70 if I recall?). When v2 came out, I paid to upgrade, and again with v3. When v4 was released, I paid yet again, but after a period of normal use (a year if I recall), I got a popup informing me I'd need to pay again to keep using ST4. I felt like I'd been hoodwinked--they had sneakily switched the license from perpetual to subscription.
I assume that the switch to subscription licensing was disclosed somewhere in the small print that nobody reads. I feel that the concealment was deliberate (I suspect they'll disagree, but you know the old chestnut about the relative volume of actions and words). At that point I'd been using v4 for a year, and rolling back to v3 with my previously perpetual license would be a big hassle, and obviously I'd lose functionality.
I would have happily kept paying for upgrades, but now I doubt I'll ever spend money with those folks again.
We only have subscriptions for businesses (there's a whole separate purchase process); regular licenses cover 3 years' worth of updates and are perpetual. That's about the same time between license purchases as before ST4. We announced this change at the top of our ST4 release post and describe it when you purchase a license.
If your regular license only covered a year of updates that's certainly a bug and I suggest contacting sales@sublimetext.com so we can sort that out.
It doesn't really explain anything besides talking about tokenization on random levels.
You need a certain amount of data to even understand that "once upon a time" might be a higher-level concept.
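To make that concrete, here's a small sketch using the tiktoken library (the encoding name is just one example of a BPE vocabulary); all the model ever sees for the phrase is a handful of integer IDs, and any "story opener" meaning has to be learned from lots of co-occurring text:

```python
# What a model actually "sees" for the phrase: a short list of subword token IDs.
# Nothing in the IDs themselves marks this as a story opener.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # example BPE encoding
tokens = enc.encode("Once upon a time")

print(tokens)                             # a few integer token IDs
print([enc.decode([t]) for t in tokens])  # the subword pieces they map back to
```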