It is open-source already (MIT)! I just need to make other languages more easily pluggable, and factor out the search engine so that it can be used on its own. :)
Cloudflare continues to add capabilities to their free tier to attract users. Meanwhile, Netlify just ejected all of the free plan users who work off GitHub.
It seems like Chrome is going to make a massive fortune from webmasters and organizations that rely on ads, since it will unblock the ads for them by blocking the ad-blocker extensions. Quite silly, given that Google used to pay Adblock Plus a good sum of money to whitelist its ads. Man, internet advertising has turned into a huge clusterfuck. Why? Disable your adblocker for 5 minutes and visit some websites, and you will instantly notice that ads on the internet have gotten way nastier than ever.
Strangely enough, Firefox has held its ground. It fell behind in marketing, which is why Chrome (and Safari/Edge) took over, but Firefox remains popular for being that unshakable browser. I went back to it recently and wondered why I hadn't been using it for the last 5 years or so.
I don't know if any marketing could have helped Firefox not lose to the shitty dark practices from the three other browsers you mentioned:
- Chrome: Spam ads for Chrome on all Google services (incl. YouTube and Google Search, which are probably the two most visited websites on this planet?)
- Safari: Make it the default and prevent users from using any other browser engine (on iOS every browser is Safari under the hood and reports as such in browser usage stats)
- Edge: Make it suck less and also make it really hard to remove it from your system or switch to a different browser
Defaults matter and, especially when they are this toxic, they are hard to beat :/
I honestly solve that issue by just not visiting those sites.
Sites with ads so obtrusive they interfere with my use of the site don't need my traffic. The web is a hugely redundant datastore and I can just find the info elsewhere.
How does this work? Do you have a fixed list of websites you visit and never stray from? Do you avoid clicking on links you don't recognize? How do you use HN, which by its very essence points you at new, unrecognized web sites every day?
I'm not the same person, but I have a similar philosophy. HN tends not to link to sites with really bad ads. If I find one that does, I just close the tab. I don't intentionally keep a whitelist of websites, but in practice maybe I do.
This is why I don’t use an automatic adblocker. I block ads by closing pages that have ads. (And if the information really is quality information, then I reward and support the website.)
> you will instantly notice that ads on the internet have gotten way nastier than ever
Anyone who remembers the advent of pop-ups and pop-unders, never-ending cascades of popped windows, and flashy, shaky, just plain terrible ads might beg to differ with you.
I remember those. The interesting thing about those is that browsers went out of their way to add features disabling those, only to then implement the infrastructure for them to be re-implemented in-page in ways that are harder to block.
Brave is fine enough I guess but something about their business model feels slightly skeevy. I'm also skeptical they can afford to keep Manifest v2 support going in their fork. I know nothing about the chromium code base but such a large divergence is going to have ongoing costs.
Not to shill for Firefox but I haven't had issues with it and don't plan on switching anytime soon. Plus no other browser has tree-style tabs.
Brave's "skeevy" business model at least is trying innovative ways to get revenue. Firefox on the other hand get's 90% of its funding from Google, which to me generates a clear conflict of interests.
My understanding is that when Firefox 57 was released (I believe that was the version that dropped the old XUL add-on model), they did do some work to help the add-on's developer port it to the new model.
Why exactly they still haven't implemented an easy way to hide the redundant top tab bar, I can't explain!
> I don't use it, but Edge has built in vertical tabs fwiw (I know, that's not tree-style).
I've tried Edge's vertical tabs and it's ok but it's a weak implementation of the idea. It only supports a flat list, and I think new tabs also open at the bottom, not even next to the current tab. TST is vastly superior.
Vertical tabs don't really have anything to do with tree-style tabs except a superficial similarity. The entire point is the automatic hierarchical organization.
I think tree-style tabs can be quite useful to those who care to utilize them, but I think the vertical layout is the much better selling point for most people, since its advantages are very easy to grok. I would like to see better integration of addons like it into Firefox so it doesn't feel so hacky, though. It would also be nice to be able to toggle between horizontal and vertical tabs like in Edge, rather than having to edit CSS files to get rid of the horizontal bar like you have to in Firefox.
I did use Brave exclusively for a while, until I realized that on mobile, Brave blatantly ignores my provided DNS (Pi-hole). And if there is something I really don't like, it's software/hardware that ignores network-wide settings (also looking at you, Chromecast music...).
So I switched to Firefox on mobile, and by now I've moved my desktop over as well, so I have all the same addons etc. on all devices.
I've not noticed this with Brave, but wasn't it FF, out of the mainstream browsers, that pioneered DoH (ignoring local DNS)? Pi-hole, I think, makes the requisite setting to tell FF not to use DoH, but users can override it IIRC (not the behaviour I want).
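(For anyone curious about the mechanism: Pi-hole can answer NXDOMAIN for Mozilla's "canary domain", use-application-dns.net, which tells Firefox to skip its default DoH rollout on that network; an explicit user opt-in to DoH still wins. A rough check against your resolver looks something like this, where the resolver IP is whatever your Pi-hole is; mine is made up:)

    # "status: NXDOMAIN" means the resolver is signalling Firefox
    # to stay on the network's DNS instead of defaulting to DoH
    dig use-application-dns.net @192.168.1.2 | grep -i status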
That also enables a hostile network to downgrade from DoH.
Anyway, we're investigating what might have gone wrong. So far we can't find a difference between Chrome and Brave regarding DNS and DoH configuration and configurability. More info welcome, thanks.
Not allowing the user to override it locally would be quite user-hostile! Of all the missteps Mozilla has made with Firefox (in my opinion), giving the user more control isn't one of them.
The network owner is free to allow or deny connections to IP addresses or ports and drop whatever packets they like by configuring their systems, the routers!
They do not get to determine the configuration of _my_ system.
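A sketch of what that looks like on a Linux-based router, assuming iptables and one known public DoH endpoint (the address is illustrative, and this is a blunt instrument, not a complete DoH blocklist):

    # Refuse forwarded HTTPS traffic to one known public DoH resolver
    iptables -A FORWARD -d 8.8.8.8 -p tcp --dport 443 -j REJECT
    # Redirect plain DNS from clients to the router's own resolver
    iptables -t nat -A PREROUTING -p udp --dport 53 -j REDIRECT --to-ports 53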
Brave is Chromium-based and will also suffer from the Manifest v3 change. They're just activating the extended enterprise support to delay the inevitable. So no, Brave will not keep those extensions alive.
With that said, Brave's built-in ad blocker should not be affected by the change.
I don't know if this really counters my point, mainly because the tweet this is in reply to was deleted, so there might be more useful context there.
I don't have the link handy since I'm on mobile, but they've stated in a GitHub issue that they're not going to maintain v2 support, just activate the extended enterprise support to give users more runway. This just seems to be a mock-up of a toggle for that extended enterprise support and links to the extensions since they won't be downloadable from the consumer Chrome extension store.
I will admit that they haven't been clear (perhaps intentionally), but I have yet to see an announcement that explicitly states that they will independently maintain v2 support.
I assume Brave will continue to rely on Google's app store for extensions. When Manifest v2 stops being supported, aren't v2-based extensions going to fade away?
It's more of a branch than an actual fork. It's using an automated build process where they add/tweak/enable/disable stuff on top.
The analogy I like to use is they (Chromium-based browsers) are in another lane on the same road whereas a complete fork diverges to their own road. The former is still beholden to whatever direction the Chrome road goes.
Yeah, at some point Google may just straight up remove the Manifest v2 code from Chromium rather than merely disable it. At that point it gets harder for Brave. Happened to them with e.g. mobile slideshow tabs: Google disabled the feature, Brave kept it alive until Google removed it from the codebase.
I'm not familiar with Brave development, but I assumed they are mostly using the off-the-shelf upstream engine and putting their own browser (chrome, UX, etc.) around it.
Until Google removes the Manifest v2 code from Chromium. Then we'll see how long Brave can hold out with such a large divergence of codebases. Will they stop updating their engine and stay stuck? Cherry-pick each and every patch?
Feels like it wouldn't be impossible to shoehorn some of the old functionality that extensions rely on into V3 without supporting V2 entirely, I imagine?
Then again I'm sure Google went out of their way to make it as hard to do as they possibly can.
Stumbling from one terrible solution to another. A few years later Brave will be the black sheep and shinynewvendor the benevolent knight...
There are other, much cleaner ways to ignore ads... that would satisfy both sides. Client doesn't have to see it, advertiser has the illusion that it was watched. Everyone profits, just like in the old days with TV ads. Yet people fight over this lame manifest fiasco :DDD.
I already switched back to Safari, which is an odd choice for a dev. Firefox seems to be in a malaise and is getting clunky. I simply do not like Chrome. And then there's Safari, okay-ish, but with growing/good integrations (passkey, private relay), and bookmarks synced with phone, and it's backed up by a super-profitable smartphone platform. Done.
I don't think Chrome will actually go through with it. It would drastically push away the nerds (who are the ones who set up stuff for the rest of the family). Nobody cares if the browser is technically better if it is infested with ads.
>Anyway, Brave seems like a great option to keep extensions alive that can actually block ads/trackers. Chrome is about to can that ability.
You are the one who is uninformed, because Chrome extensions still have that ability with Manifest v3. The move to Manifest v3 is about improving performance, privacy, and security, which means that extension authors need to migrate to more thought-out APIs. It's canning old APIs, not ad blocking / trackers.
The APIs usable for adblocking in v3 are extremely limited in scope relative to the equivalents in v2. You can make a v3 adblocker, but it can't have all of the features that e.g. uBlock Origin users have come to rely on such as "cosmetic filtering" (blocking page elements, not just URLs).
>uBlock Origin users have come to rely on such as "cosmetic filtering" (blocking page elements, not just URLs).
Manifest v3 doesn't prevent that. uBlock Origin Lite chose not to require broad permissions for every single site; instead it has the user grant it permission on a site-by-site basis.
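For concreteness, here's a minimal sketch of the declarativeNetRequest style of blocking that MV3 pushes extensions toward (the filter pattern is illustrative). It goes in a static rules file referenced from the extension's manifest, and the browser evaluates it natively, without the extension ever seeing the requests, which is where the performance and privacy argument comes from:

    [{
      "id": 1,
      "priority": 1,
      "action": { "type": "block" },
      "condition": {
        "urlFilter": "||ads.example.com^",
        "resourceTypes": ["script", "image", "xmlhttprequest"]
      }
    }]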
It's a stylistic choice; I do it to avoid putting myself as the focus of a sentence. In the OP comment they probably want to emphasise the switching to Brave over the fact that it is them personally doing the switching.
If you'd stated "Don't quite get ...", IMO it would have been no less understandable. Usage progresses.
Libel and defamation are forms of disinformation and have long been held to have civil repercussions. This case against Alex Jones is an extension of that IMHO.
I'm not a lawyer so I could be wrong, but aren't those cases only held up and prosecuted when the victim(s) suffered real financial damages? E.g. lost their job, company went under, etc.
I have yet to see what tangible damages Alex Jones caused the parents of the Sandy Hook incident. Emotional damages to be sure, but I don't believe fining someone because they hurt another's feelings is a good precedent to set.
Several of the parents had to move multiple times. Probably accepting less-favourable prices for their old home in the process, not to mention potential loss of career advancement opportunities when you're having to move house constantly.
Other parents were compelled to hire full time security[2], which can't be cheap.
Anyone who pays out of their own pocket to launch anything on a newly introduced Google product moving forward is an idiot.
Google is going to need to dig deep into their wallets to build an ecosystem in any new market in the future.
Then again, Stadia is too early and costing them too much money. More people need fiber internet before it’s practical. Good thing Google killed their fiber internet rollout years ago…
I bought the Founder's Edition Stadia hardware and I'm thrilled with what I got out of it, personally. I got a free game system for years since they are giving everyone refunds. Played several AAA games I had no access to otherwise. I'll still have the, now free, Chromecast Ultra 4K with its Ethernet-capable power adapter afterward too. It works fine even if I don't pair Stadia controllers with it.
This Ubisoft initiative to transfer licenses to PC is actually worthless to me since my PC doesn't have a GPU capable of playing games anyway and I have no intention to buy one.
Considering more people are reading these headlines than experiencing a round of Google freebies for their shutdown, this is only going to limit consumer buy-in on whatever Google decides it needs to do next.
Google is frantically trying to find another unicorn, as it knows its current cash cow, ads, holds almost all of its eggs and could be facing demise as online regulation only continues to strengthen.
Not to mention Google's perverse promotional incentives that ignore supporting products long term, focusing on launching new things instead.
Google, as it is right now, is a dying company. They need basically a complete overhaul, starting with firing the entire C-level, especially the true CEO, Ruth Porat.
As an aside about GPUs, things have changed a lot over the years. In the early 2000s games were being released that were impossible to run on high settings with current hardware, and your hardware was outdated after 3 months and obsolete after a year.
Today an RX 580 (released in 2017) can be had for $100-$200 and is enough to comfortably run any modern game at quite nice settings. It's been a while since I looked, as well; it's entirely possible Ethereum's swap to proof-of-stake is pushing card prices down even lower.
You only need the silly priced cards if you want to do something like play games on maxed out settings in 4k at a locked 120fps.
But consider the cost of power. For many people, especially Europeans, the power bill from running a gaming PC might be substantially higher than that of running a laptop + Stadia or GeForce Now monthly subscription.
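Back-of-the-envelope, with made-up but plausible numbers: if the gaming PC draws 300W more than the laptop and you game 14 hours a week, that's 0.3 kW × 14 h ≈ 4.2 kWh a week, or roughly 18 kWh a month; at 0.40 EUR/kWh that's about 7 EUR a month, the same ballpark as a cloud gaming subscription.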
Which indicates that something is wrong with the residential electricity markets. Google should not be paying substantially less for marginal electricity than Google’s customers.
(This is a major problem with California’s energy planning. On the one hand, CA (IMO fairly sensibly) wants users to switch from gas to electricity. On the other hand, CA’s electricity prices are so egregiously inflated that people have an economic incentive to switch from electricity to gas.)
Google has some datacenters with dedicated renewable power generation they set up themselves (e.g. in Belgium), so it makes sense that their electricity is sometimes cheaper.
I'm not sure about in Europe, but in the U.S. at least a substantial portion of power generation is already from renewables. That doesn't really make it free, or necessarily even cheaper depending on the circumstances.
> Which indicates that something is wrong with the residential electricity markets. Google should not be paying substantially less for marginal electricity than Google’s customers.
Not at all. We want companies to leverage economies of scale. Efficiency should bring competitive advantage. Google often invests in datacenters in places based around where it's cheaper to power and run them, often owning the energy production.
Stadia Pro was 10 EUR. So you're not wrong, though it's easily within the margin of error for this ballpark estimate (e.g. I wish I still had 14 hours a week for gaming; the power draw could be off by a factor of two in either direction, ...). And in winter, using power for computing is just a roundabout way of heating your living area, so in a way, it's free. I wonder if increased electricity costs were a factor in shutting down Stadia.
Your hope is validated! I'm one of the greenest, least energy-consuming people on Earth. I live next to a hydro plant with capacity to spare, don't eat meat, don't own a car, have no drivers license, never fly, and my last holiday I cycled 600km [1].
> I got a free game system for years since they are giving everyone refunds.
This will be a common attitude in the future as far as Google's consumer products go. People will buy them expecting them to shut down in a few years, but now they'll also expect a refund.
If you're a Googler who dreams of heading up a project to make something consumer-facing, it'll be a lot harder to get buy-in from the board now that shutting down anything that's not a massive success will cost so much more (in dollars if a refund is given, or in goodwill if it isn't).
Keeping services stable on life support is a muscle Google needs to build if they want to be relied on. Maybe that will happen when there’s an overt price tag on neglecting it.
I'll echo your sentiment. I got to game for free for nearly three years and I get to keep any hardware I purchased to do so. I personally grew tired of Stadia due to the lack of public support and the negative sentiment any time it was brought up, so I'm happy to be getting my money back (especially seeing as some games are providing second licenses on top of refunds), which can go towards a GPU, and to be given time to finish any games in the coming months.
Yeah, this. My previous company got paid a lot of money to bring our titles to Stadia, and it wasn't worth it, as Google kept changing APIs and interfaces. And then nobody played the game. We didn't do it again, and basically everyone in this industry has been using Stadia as a punchline for years.
No wifi damns the premise completely, wifi is so ubiquitous that it has become synonymous with internet access itself. Cafes and hotels advertise having "wifi", not "internet access". College libraries label ethernet cables as "wifi cables" so that young students understand their purpose. The average laptop on the market doesn't have an ethernet port, nor do any of the low power tablets/phones on which Stadia might otherwise make sense.
Virtually all devices that don't possess an Ethernet port merely need an adapter.
At some point we can't fix the entire universe just because people don't know what the fuck they are talking about.
Here's what people want: they want to plug their gig service into their 11-year-old cable modem that supports 250Mbps, plug in their 12-year-old "g" router, and have their "good" or "strong" service somehow make radio waves burst through walls like the Kool-Aid Man screaming "oh yeah", driving gigabit speed to all devices on multiple floors of the house simultaneously, defying both common sense and the laws of physics.
Meanwhile people will drop $3000 a year on cable TV but not the cost of installing decent hardware. It is perfectly normal for at least some devices, sometimes MANY devices, to end up with a connection in the neighborhood of 10Mbps with random interruptions.
Good wifi would mean a router per floor with a wired backhaul, plus additional wireless extenders as needed, if you expect to have actually fast internet away from the main router.
Wifi works fine for it, honestly. As noted elsewhere, the biggest problem is latency-based. Using WiFi will probably add ~30-50ms of latency to your setup, which is bad, but also consider that both the PS5 and Xbox Series X have over 100ms of input lag alone. Once you do the math, Stadia over Wifi isn't any less responsive than playing on console. As long as your Wifi connection is half-decent and you aren't playing some Chinese MMO, your experience should be fine.
> Using WiFi will probably add ~30-50ms of latency to your setup, which is bad, but also consider that both the PS5 and Xbox Series X have over 100ms of input lag alone.
If your WiFi connection adds 30-50ms of average latency you seriously need to stop living in a microwave oven or upgrade to something after 802.11b.
With an Intel AX201 WiFi chip connecting to a UniFi 6 Lite, my average latency to the router is 1ms.
Even a cheap and somewhat compromised WiFi chipset/antenna can do way better than 30-50ms latency. Here's an original Pi Zero W connecting through the same AP hitting the local router:
64 packets transmitted, 64 received, 0% packet loss, time 156ms
rtt min/avg/max/mdev = 2.293/6.699/63.116/7.364 ms
This is while I've got streaming music playing on my phone on WiFi connected to Bluetooth headphones, streaming video on the WiFi TV, and other wireless network traffic happening.
> 64 packets transmitted, 64 received, 0% packet loss, time 156ms
> rtt min/avg/max/mdev = 2.293/6.699/63.116/7.364 ms
A 63ms delay on some packets can totally ruin the user experience even if the average is relatively low... and this is the best a near-optimal (in the real world) setup can do.
If you left your test running longer than 64 packets, I'm sure you would see at least a 100ms max. GP wasn't far-off with their assertion that running on WiFi can easily add 30-50ms of latency. Their wording was a bit loose though.
That 63ms packet was one packet. It's way out of the standard deviation. You don't point to the max and state that's the typical experience of the network.
If we only looked at max values, then Amazon isn't 2-day shipping, it's more like 1-month shipping, because some delivery at some point took that long, even though something like 95% of deliveries arrive within two days (just an example, I don't know what their actual rates are).
And on top of that, that's with an extremely cheap chip with a compromised antenna, inside a network closet, with a TCXO literally right on top of the antenna, running on 2.4GHz. My 5GHz example with a nice AX chip and a decent laptop-lid edge antenna never experienced any pings higher than 1ms.
> You don't point to the max and state that's the typical experience of the network.
The test used 64 packets, which is a tiny sample size for streaming use. If we blindly scale that up, you would hit the max several dozen times per minute.
> If we only looked at max values, then Amazon isn't 2-day shipping, it's more like 1-month shipping
A better example: imagine your tongue just stops moving once every minute and then tries to catch up with what you were doing, and imagine this will happen for the next two months. It is just a minor annoyance, right?
An important aspect that I think is overlooked: even if the latency is too subtle to notice consciously, you may still have a feeling that something is off; your perception of the product will be worse, and you will perhaps become frustrated without even knowing why.
A person I know was diagnosed with tinnitus. And it (the diagnosis) made him happy. For over a year he was frustrated, nervous, but he didn't know why. Turns out he was hearing noise all that time, but just didn't realize it. Sounds (pun not intended) ridiculous, but that's how the mind works.
A relevant ping test should also be sending packets far more often than once per second (more like once per frame at a minimum), and should be sending larger packets than the default for ping.
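Something like this gets closer to game-like traffic than ping's defaults: one packet per frame at ~60fps with a near-MTU payload (the address is illustrative; on Linux, intervals under 0.2s typically require root):

    sudo ping -c 1000 -i 0.016 -s 1400 192.168.1.1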
The previous ping had the Zero W way overloaded; I realized that after the fact. Its single core was running with a load average of 2.8 and still managed to be decently interactive and produce the pings above. This ping happened with a 1-minute load average of 1.2, so only kind of overloaded ;)
This kind of latency is achievable on a $10 board with only 2.4GHz Wireless N. Suggesting WiFi adds 50ms latency on average is extreme hyperbole.
Again, a max ping of 43ms. The worst kind of latency is the unpredictable kind that can't be compensated for. The reaction speed of a casual gamer will be somewhere near 180-250ms. If we take the upper bound, occasionally the casual gamer's input will take 17.2% longer to register (43ms on top of 250ms).
For gaming, your experience is absolutely degraded by spikes in ping. A consistent 50ms latency is far better than a 5ms with the occasional >50ms latency.
So yes, one packet a day at 50ms would be considered unplayable to you, even if 99.9999999999999% of the packets were <5ms, because it is the "occasional" 50ms latency.
That's what you're arguing when looking at just the max value.
You're dancing around the issue at play, though. An application that hitches or glitches for a second every few minutes will quickly become irritating. It doesn't matter if it works 99% of the time; it has to work 100% of the time or it's strictly inferior to console or PC.
> 64 packets transmitted, 64 received, 0% packet loss, time 156ms
> rtt min/avg/max/mdev = 2.293/6.699/63.116/7.364 ms
So if a video frame consists of 64 packets, the whole frame will be delayed by 63.1ms.
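(For scale, assuming a 20Mbps stream at 60fps: 20,000,000 bits / 60 frames ≈ 333 kbit ≈ 42 KB per frame, which is roughly 30 near-MTU packets, so 64 packets for a big frame isn't far off.)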
Yes there are ways around this (FEC), but getting 1ms 99%ile latency over wifi IRL is nontrivial.
Btw, fat-client PCs have much smaller latency spikes, but even those spikes are noticeable and make games feel jittery. That's why we compare GPUs based on "1% lows" instead of average FPS. Also see frame pacing.
> Yes there are ways around this (FEC), but getting 1ms 99%ile latency over wifi IRL is nontrivial.
Streaming video is extremely nontrivial to start with. Use FEC when necessary.
Also I just threw a thousand 1400B packets over my wifi and 99% was something like 5ms, max 10ms. 5ms is close enough for me, and I could probably hit 2ms with light effort.
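If anyone wants to reproduce that, a quick-and-dirty percentile pull from ping output looks something like this (address illustrative):

    sudo ping -c 1000 -i 0.02 -s 1400 192.168.1.1 \
      | awk -F'time=' '/time=/ {print $2+0}' \
      | sort -n \
      | awk '{a[NR]=$1} END {print "p99:", a[int(NR*0.99)], "ms", "max:", a[NR], "ms"}'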
Naw, streaming video is trivial if you have a 1-second latency budget so you can use LL-DASH and CMAF. We hit that target with <100 hours of dev work on a co-watching/cloud-streaming project with only 2 PoPs in the US. WebRTC FEC is relatively modern and wasn't on the table in 2020.
Btw, you almost certainly tested on an unloaded connection, so you're not seeing any HOL blocking like a real video stream would. Try playing a 20Mbps YouTube video in the background while you ping. Oh, and turn on a microwave oven when you're 3 walls (a room and a hallway) away from the AP for extra hilarity on the order of 80ms.
A ping result from my WiFi 6 laptop while watching YouTube on the laptop playing through Bluetooth headphones, messing around in an RDP session, a few SSH consoles actively refreshing, chatting on Teams, music streaming over WiFi on the audio receiver, while cooking lunch in my microwave:
Ping statistics for router.home.lan:
Packets: Sent = 793, Received = 793, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 1ms, Maximum = 5ms, Average = 1ms
I meant game control video. You don't get a second of buffer for the player.
And you don't need to use webrtc, or you can add your own FEC on top for your client software.
And yes a 4k YouTube video makes it worse but "don't do that" is a valid response and realtime video won't be so bursty. I can get a very nice ping while more stable bandwidth hogs are running. Microwave is also "don't do that".
There's no 100ms. The current generation made a huge leap forward. The controller and the whole input stack add 2-8ms. TVs with 120Hz, VRR, and low-latency mode add 6-10ms in 4K HDR (no idea about Dolby Vision HDR).
People's wifi ISN'T half decent though. They use 10-year-old hardware stored in a closet to service their two-story house, then connect all the devices furthest from the router to the 5GHz network and get 10Mbps intermittent service, partying like it's 2002 instead of 2022.
By far the bigger issue with wifi is periodic bursts of noise and wifi scanning introducing stutter. Wifi latency is low enough to be a complete non-issue, the only place you'd care is with VR (and even then it's probably fine, but you're much much more sensitive to that stutter, so it's worth doing something special).
30-50ms is ISDN territory. You either have absolutely lousy Wi-Fi (like a soft AP running on ancient hardware) or you’re confusing latency with jitter driven by signal interference and retries (which is also lousy Wi-Fi).
I'm assuming the worst here. I've seen people rocking decade-old Netgear routers that struggle to work with their media center, much less an entire household of devices.
That's a pretty catastrophically high estimate though, you're right.
Hot damn son, your WiFi must be poorly configured. I've got sites with WiFi APs at about the 4th hop after a point-to-point WiFi link. That adds about 2ms of latency at the furthest point from the main router, about half a ms per hop.
Most people would have to buy the cable too, and either run it through their walls, under doors, or restrict their play to whatever room their router is in.
Normal users are not that helpless so as to give up when faced with this problem.
I maintain contact with several non-tech people. When they realized that e.g. Valorant will work better with a cable, they very quickly deciphered the USB-to-Ethernet adapter market and asked a father or a friend to run their cable properly. It took each of them 2-3 days, and they're much happier casual gamers now.
Where there is a need, a way is found. When the need arises, people learn. Seen it happen several times.
Exactly. Once you decide to route cables in your walls, that can become quite an investment. Not necessarily exclusive, but why not also just invest in a PC then, rather than stop at a half-measure...
In the walls? I’ve seen plenty of places with just lots of wires all about, usually tactfully hidden against walls. Rich people wire things in the wall, everyone else just has cables strung about the place.
Wifi is fine too. I played a good bit of Destiny on it, and the latency was barely noticeable most of the time. I'd only notice it for maybe 2-3 seconds out of a 2 hour play session.
I've experienced that at times, but it was because my router was defective on certain settings. You're not getting blown out by a microwave on 5GHz so there's no reason to have a problem like that except things being broken.
You don't get a big library of games to stream for free, and you don't get to use existing licenses for games you already own; you have to double-dip and pay full price for a game again to play it on Stadia.
It made sense for me -- I own no gaming devices, and my only computers are macbook pros with shoddy graphic cards. There are occasionally games I'd love to play, but I don't want to pay $500+ (ps-latest, xbox X) or $2.5k (computer with $1k graphics card).
Unfortunately, this outcome was easily predictable to anyone who follows Google.
Fortunately the companies that understand gaming have similar-but-better solutions that are still perfectly viable. Assassins Creed games on xcloud have been a delight.
The difference is that you aren’t paying a monthly fee, have much greater selection, can download the file to play later (without the Netflix problem of content disappearing due to whatever contract terms are secretly in place), and using Movies Anywhere your purchases are synced to other stores. There’s still a lot of room for improvement there but it seems like you need to pick one model or another, not both.
This is not a very good fit for games, however: everyone sells and watches the same movies, and almost nobody watches one for anywhere near as long as gamers play a single game.
Yes, iTunes introduced buying films and TV back in the mid-2000s, when the state of the art was an iPod. Very few people still do it, and Apple is all in on streaming, with many affordances to seamlessly support first- and third-party streaming services through various APIs: supporting TV Anywhere globally, supporting third-party apps in the TV app, etc.
Stadia is not too early? Cloud gaming is getting mature now.
I've heard people say there is lag, but apparently the platforms have been heavily optimized in recent years, and I've personally never experienced any problems after hundreds of hours on both Xbox Cloud and Paperspace, with maxed-out graphics.
A 100-250Mbps connection is probably more than enough, and those connections are becoming the standard in the Western world.
One of the things hiding behind the gigabit recommendation is that there are lots of terrible ISP decisions that might have been made upstream of you, but that symmetric gigabit providers haven’t made by and large. It’s not actually that you need gigabit, but rather that you need an ISP good enough to offer it.
Back in the day, FiOS offering gigabit at your address was a handy shorthand for being on a more modern ONT, not using DSL to deliver it to your apartment over crappy copper, etc.
I played OnLive like what, 10+ years ago on a trash AT&T DSL connection and it was a great experience. Stadia was not too early. Google is just stupid.
Yup, I played the entire "The Last of Us" campaign on Sony's PS Now service in 2015, on cheap Comcast internet, on my 2014 MacBook Pro. The streaming quality was great! I don't remember if it was 4K; very, very unlikely. Perhaps it was 1080p or 720p. Anyway, a great experience!
Never tried Stadia; _even though_ I got their controller & Chromecast 4K for free in some promo offer! It was just too much work to end up paying full price for games I already owned. Such a bad, bad product. The greatest of technology/engineering, and yet such a waste!
Google really has fallen in my eyes. As a software engineer, I would be so scared to go there to work on something new.
Yup, I did use it like that. You found a flaw: I shouldn't have said "never". I actually did try it for maybe 5 minutes and felt meh about it.
It's a nice enough controller, tbh. It's just so inconvenient compared to my wireless DualSense controller. Or any other controller: Xbox, DualShock 4, Steam, a Backbone controller connected to an iPhone, etc.
The Stadia controller is really nice in the sense that it has Google Assistant built in, yadda yadda. Imagine if they made that configurable to control Cortana on Windows, or some form of voice control on Windows. It would make using my living room PC that much easier.
I played Stadia over 20Mbps WiFi. It was flawless. I finished Tomb Raider 1 & 2 and Metro Exodus. Completely forgot I was streaming a game to my laptop after the first 5 minutes.
Google killed fiber rollout because they were forced out by other ISPs...
The issue with Stadia was Linux. Linux choked the entire platform: you had to get developers to port their games. If they had used Windows, they would have had a solid platform with thousands of games. Instead, they were at the mercy of "bribing" devs to port their games to Stadia.
Proton is the explanation for the Steam Deck library. Stadia came out in 2019 and had probably been in development for a few years prior; Proton is a relatively recent thing.
The issue is that in a low-latency environment, translating DirectX to Vulkan may add overhead.
Proton is just pre-built Wine with Valve patches. Wine existed for years, and DXVK wasn't Valve's own project; they just sponsored some open-source projects and driver improvements and integrated them into Steam.
I seriously doubt that. Even if you count every single developer Valve hired to work on it, as well as the sponsored third-party developers, it's likely 25-30 people or fewer.
I'm pretty sure more people worked on Stadia within Google than work at Valve across all of their projects.
Isn't this what ultimately took Toys R' Us down? The company actually had huge revenues, but after paying all its creditors it could no longer actually make money.
Renovate is indeed AGPL, but if you're just running it as a CLI, do you think there's anything to "watch out for"? It does not make any project you run it against AGPL, that's for sure.
Also you should be aware that dependabot-core, which dependabot-gitlab wraps, is not technically Open Source at all: https://github.com/dependabot/dependabot-core/blob/main/LICE...
Wrapping a non-open source project in another project which claims to be MIT licensed does not change the underlying license. I'm not a lawyer but question the validity of them doing this without larger disclaimers.
However, I think that it's likely not something to "watch out for" either. Likely both licensing approaches were intended as a way to forbid or discourage competing services and each project welcomes people self-hosting.
In short I don't think that the license of Renovate or Dependabot is likely material for anyone planning to run it for themselves.
Thanks for weighing in, and for drawing attention to the wrapped nature of dependabot-gitlab; I didn't drill down into their implementation.
As for the "watch out," I apologize if that came across as scolding or whatever, but in my company, and likely quite a few others, AGPL software is forbidden. Maybe I should have said "be aware" instead of "watch out," so I'll try to choose more neutral advisory language next time.
Your "but it's just a CLI" is the nuance of the AGPL that I don't want to pay lawyers to disambiguate since this very thread was about running a GitLab bot, over the network, or in CI which is hosted on runners that connect over the network
Maybe I just need to stay out of these threads and let people do their own license homework, but I certainly do get value when someone else makes me aware so I can dismiss the tooling. No good deed goes unpunished, I guess
I scanned the last couple months of their blog posts and could not find anything to support that claim. The docs or pricing page don’t seem to indicate anything like that either.
The only related thing I found was that free accounts are limited to one user that can trigger deploys in private repos, while anyone can trigger deploys in public repos (assuming I understood the announcement correctly).
Most people got confused with the wording in their recent policy change for the free plan.
They recently changed the policy for Organization-owned repos. The continuous deployment for organization-owned private repos is no longer allowed in the free plan.
But personal accounts' public and private repos are still part of the free plan.
Government procurement is a broken process today. The railroad companies were reimbursed once the track was set down. We need that for modern projects. Then competent people will get the job done and get paid.
NASA, SLS aside, is doing arguably non-broken procurement. SLS of course costs more than everything else combined.
Numerous cash-strapped municipal transit projects manage to avoid wasting money. It is well understood how not to waste money when that is actually the intent. That is why we can be sure that waste is the whole point on such high-cost projects, which are better equipped than others to engage competent management. Ensuring the "waste" goes only where intended, and not to others, must be hard.
I'd be very careful about using the word 'waste', as many large programs, the SLS among them, would never have passed Congress in the first place without the 'waste' guaranteeing critical votes. The alternative to the 'waste' would likely be no program at all rather than some idealized 'less wasteful' program.