Android's new Bluetooth stack rewrite (Gabeldorsh) is written with Rust (googlesource.com)
635 points by nicoburns on March 31, 2021 | 377 comments



I know the guy that heads up the team that did this work -- he and I spent 2+ years fighting Broadcom's old, god-awful bluetooth code. Our whole team used to play what-if games about replacing the thing while massive code dumps came in from vendors, making the task ever larger.

Zach, if you're reading this, HUGE kudos for holding the line in replacing that, and double kudos for doing it in a verifiable, sane language!


> fighting Broadcom's old, god-awful bluetooth code

Correction: god-awful host side bluetooth code.

There is still the bluetooth firmware residing on the BCMxxx chip (or Qualcomm chip) - >1MB of god-awfulerer closed-source code, half of it in ROM (with a limited number of available patch slots), full of bugs. You can see it crash from time to time in the kernel debug logs (and auto-restart itself).


On Glass, we actually went to Broadcom and had them on the ropes for fixing parts of their firmware. Sadly, we couldn't bring those fixes out of the closed source world of hardware, so it's still up to the system integrator to fight those battles...


Serious question: why doesn't Google build its own bluetooth & BLE chips? Please put some competition on Broadcom and the like and either push them entirely out of the market (good riddance), or force them to step up their game.


I can't speak to this directly for numerous reasons, chief among them being that I don't get to make those decisions. Standard disclaimer follows: I rejoined in the last two years, what follows are my opinions, these opinions are my own, blah blah.

Android has never been about driving the hardware narrative -- it's always been about building a phone with mostly open contributions and driving the start of a wedge to open up the phone industry a bit. It's always been a software answer to a hardware problem, even today. Prior to Android, all we had were closed source low powered feature phones and Blackberries.

That being said, building silicon is non trivial work, and building a BLE stack and controller is even more so. Will a solid BLE stack sell phones? Hard to say how it could drive that narrative, realistically, and even harder to say if such a controller could be made cost effectively. Given Android's archetype (software solution to closed hardware), this puts such a project into a much more difficult position politically and financially.

I can't see this kind of thing having much in the way of legs in a large corp. That being said, I do think if a startup could challenge this landscape, it is a HUGE opportunity.


    Prior to Android, all we had were closed source low 
    powered feature phones and Blackberries.
I...what? Even if we want to ignore the iPhone for whatever reason, the Palm Treo, Nokia N900, and Windows Mobile were firmly established by the time Android started getting demoed--and that was the variant that was very much reminiscent of Windows Mobile, with a strong emphasis on the cursor keys over a touchscreen.

The rest of your comment makes sense, but the cognitive dissonance of that sentence was so extreme I had to respond.


I missed a comma in that: closed source phones, low power feature phones, and Blackberries. The N900 is a different, rare breed that did not see widespread adoption.

Fun fact: the first Android phone was the Sooner, and it looked and behaved much like a BlackBerry. Still have mine, in fact.


I loved my Symbian E62


Yeah from the Android platform side it would be weird to build chips. For products like the Pixel phone though, that would be a great place to innovate. And realistically, Google needs to get into the custom chip game sooner rather than later... GCE needs to start competing with Amazon's Graviton ARM processors. The sooner you get the expertise and talent to churn out chips (and maybe even a fab or two?), the better. The global shortage for fab usage and chips could kind of force your hand soon.


Google doesn't have the volume on its first-party hardware to drive something like this yet & they would never start with BLE chips. Additionally, there are some really good vendors out of China already challenging BCOM & QCOM FWIW, which further complicates the "build it yourself" narrative (look at the Pixel Buds, which have an AirPods-like experience running a Chinese BLE chip for the buds, which other Western vendors weren't able to match).

The Android org generally isn't the right org to build hardware, let alone do chip design. Maybe the camera team got closest when I worked there?

Source: worked on Pixel Buds @ Google & was one of several engineers responsible for selecting the chip vendor. We got source access to the entire stack/OS except for the microcode & some parts of the stack they hid. I found BES a way better partner to work with than the BCOM/QCOM mess.


Are you talking about the 1st or 2nd Gen Buds? Because the 2nd gens seem to have severe connection issues:

a) they frequently flip-flop between at least two BT profiles, leading to a lackluster listening experience

b) they lose connection to each other (leading to intermittent profile switches during reconnects, possibly due to lack of available bandwidth)

c) they have severe signal quality issues, leading to limited range and audio interruptions

All these issues only happen with the 2nd Gen Pixel Buds, but not with a random sample of various other true wireless earbuds (I tested Sony WF-1000XM3, 1More True Wireless ANC, AirPods Pro), leading me to believe that this must be a hardware issue on Google's side, especially because the number of people with the same issues is pretty high.

No other pair of true wireless earbuds had any issues with music playback while I'm on a road bike with my phone on my back; only the Pixel Buds do - and I'm on my 2nd replacement already.


Aren’t they already with TPUs?


Huge difference between designing for their own data centers vs consumer chips.


Yes. And several generations in.


> Prior to Android, all we had were closed source low powered feature phones and Blackberries.

Symbian was open source, ran on millions of smartphones - most of which had app stores and web browsers. Some of which had touchscreens, GPS, augmented reality features etc.

Don't get me wrong - Android has been brilliant. But let's not completely rewrite history, eh?


Right. I didn't think I was trying to rewrite history -- I'm simply pointing out a gap in the market that Android was attempting to fill. Asking me to enumerate everything out there at the time is silly -- especially given the haze of memory.

Symbian was far more popular in Europe than it ever was Stateside, so bear that in mind. I had to import my Nokia E70 gullwing phone before I received my sooner, and what functionality it had was okay, but the browser was hardly more than a WAP browser in a feature phone. The app store was barely there as well, including only a handful of very simple apps at the time.


The browser in S60 phones was actually WebKit. In fact it was Nokia that started the work of fitting WebKit into memory-constrained devices.


Oh, neat! I had no idea it was WebKit on there. Still, what I had wasn't exactly what you'd be comfortable using for more than a few moments. I bought one of the original Nokia Internet Tablets and put together a full wearable system back then to make things a bit better for myself, but I never used the browser in S60 for anything serious because it was so cut down.


Google throws money at problems which don't generate revenue all the time. I feel like all it takes is someone inside Google with enough leverage to push it through without the thing having to make business sense


That could work, but the other side of that coin is that if it stops making financial sense, Google will do the responsible thing and end it. "Your BT hardware's software is no longer supported" would only make the problem they're trying to solve here worse.


I’ve already had Android phones that didn’t get more than a single major update where the BT/WiFi blobs were locked to some ancient kernel.


tbh, Android consistently having a good bluetooth experience would probably be good for the platform.


> Android has never been about driving the hardware narrative

Apple has been building its own hardware from the beginning, but still also uses Broadcom chips.


They have been integrating hardware from the beginning, not making it all themselves. What they do, though, is demand the ability to vet and fix vendor firmware.


>Apple has been building its own hardware from the beginning

do you mean designing? I'm not familiar with any point in time where Apple was building phones, but maybe i'm mistaken.

A quick google search indicates that even the first generation phones were built by Hon Hai.


They acquired Intel's modem division, so that may change in the future as well.


[flagged]


> What a hopelessly naive perspective.

Maybe, but I was there, helping it develop in the early days, which is the time span we're talking about. I can tell you the core of the Android team was fighting to keep it open -- so much so that about three years in one of the guys who dedicated his job to open sourcing drivers and kernel patches burnt out because of it.

What Android is about is decidedly not what Play Services and GCM core are about.

> Try seeing how willing they are to add support for chipsets on phones that don't include Google Services

It's always been the system integrator's job to work with the OEM vendors to integrate drivers and functionality into Android -- not the Android team's. Even on Glass we had to do this as though we were outside system integrators.


too bad more and more of android was moved into play services over time


It wasn't. Play services is just add-on features that proxy app permissions and centralize push notifications (e.g. auto-filling SMS OTP codes) -- no actual Android features moved into Play services that I know of. It feels like Android is being sucked up that way, but that's because there are a lot of really nice features in there, like push notifications, geofencing, etc.

You can still run Android without GMS core and Play services -- I do that myself, on a Pixel 3a running Graphene. The trick is that you lose some nice functionality (which Android actually makes up for in some cases, e.g. SMS OTP copy buttons), and the mapping experience is god-awful (mostly because the OSS mapping scene is hopelessly stuck in the 1990s GPS model).


One example I can think of without having to look things up is that the music player used to be part of AOSP and then got replaced with a Google Play variant

e: here's an article from 2018 with some more examples:

https://arstechnica.com/gadgets/2018/07/googles-iron-grip-on...


So the old music player just isn't included by default anymore. It should still work just fine on modern Android if you install it, and there are plenty of FOSS media players on F-Droid (i.e. not Google-dependent), most of them thin wrappers around the Android media APIs with a few extras like playlists. Kodi and VLC should work fine without Google services, and LineageOS Eleven doesn't require Google, though I'm not sure if it still works on modern Android.

Lack of media playback options shouldn't be able to stop you from using AOSP.


What would get open-source mapping out of this "1990s GPS model"?


Stop geocoding by separating out addresses into singular fields, for one. Stop showing mostly irrelevant contours on maps, for another. OsmAnd~ is unfortunately the only game in town, and the UX is freaking terrible.


How do I get SMS OTP copy buttons on my Google-free Android device?


> Serious question, why doesn't Google build its own bluetooth & BLE chips?

There is a lot of overlap between WiFi and Bluetooth & BLE, such that you just have a wireless chip that does both. In fact I think with Bluetooth 4 the file transfer profile just establishes an ad-hoc WiFi network. You don't have separate Bluetooth and WiFi chips anymore.

Furthering that, in the mobile space the Wireless capabilities are usually integrated into the mobile SoC. So you don't even have separate chips for CPU and Wireless.

About the only time you see a separate Wireless chip is when a new technology is emerging like 5G and it's usually only an external chip for a generation or two until it can be integrated into the SoC.

So, if you were to design your own Bluetooth chip today you would also be designing a WiFi chip and then you'd probably just roll all that into a SoC with a CPU. No small feat.


> In fact I think with Bluetooth 4 the file transfer profile just establishes an adhoc WiFi network.

This was a feature of Bluetooth 3.0, but almost nothing ever used it. I was once at a big BT testing company and asked about it, and they had like one device that could do it (a crazy feature-packed HTC WinMo device I think).

And then Bluetooth 4.0 added BLE, and it seems like there hasn't been much development of classic BT since then.


Building radio ASICs is a world rife with patents. Pretty much nobody new can enter the market.


Aren't they considered "essential patents" that must be made available at a reasonable price?


Generally yes. But you would need to spend 3 years in court to get that to work.


Follow-on question for the sake of debate: what if you got a bunch of smaller wannabe startups and interests backing a group effort to do exactly this for a pool of things? Would the legal fees scale linearly? :/


Care to expand? Is that why Software Defined Radio is still so niche and expensive?


Nearly all modern radio chipsets are mostly software defined. That includes WiFi, LTE and GPS.

The radio frontend is typically a downmixer and then straight into digital.

Some of the typically "software" bits like FFTs, various encodings, checksums, clock recovery, etc. are frequently done in digital hardware acceleration blocks for performance and to save power. If you were writing the firmware of the device, you needn't use them though.

With enough human years of effort, you could take almost any radio hardware for sale today and repurpose it to speak nearly any other radio protocol in similar frequency bands. Performance will probably be terrible though!

It's rare people do this though - the chips don't have their firmware documented (again, mostly to avoid publishing documentation that proves they are violating someone else's patents), and many have various cryptographic elements that make reverse engineering hard.

The one exception to this is the Broadcom WiFi chip used in the Nexus 5, which has had a reasonable amount of reverse engineering because Broadcom accidentally published the source code: the firmware code was in part shared with the published Linux kernel driver source code.


> the chips don't have their firmware documented (again, mostly to avoid publishing documentation that proves they are violating someone else's patents)

:O

Enlightenment moment :(


There are a gazillion patents on radio related stuff. Nearly all standards have associated patents.

Companies already in the industry typically have cross-licensing agreements - i.e. I can use your patents if you can use mine. Either that or they just violate each other's patents, knowing that a patent war would be mutual destruction and in neither company's interest.

But a newcomer has nothing to offer - the minute they release any product, every incumbent company will go through their patent portfolio and sue them out of the water.


Yikes, that sounds pretty harsh. So how would a new player be able to enter?

I thought only the hardware could be patented, not the software, and so SDR would level the playing field, but that's perhaps too naive?


Google's involvement in any kind of hardware is already a distraction from their core product line. Making their own chips is a distraction on a distraction.

Distraction might not be the right word but I can't conjure up the right one.

Hardware is one of the end goals for Apple, for example. For Google, Android hardware is not. It's just there to serve their goal of selling ads.


IMHO those dynamics are changing. Custom chips are becoming table stakes for new products. Look at Apple M1, W1, or Amazon's Graviton2 chips. All of those are core parts of products and services which would not be possible without the custom silicon. The pool of talent and resources to build these things is extremely small, and there are geopolitical issues putting ever more pressure and scarcity on them (i.e. China pivoting to reduce dependency on Western-designed chips and hoovering up as much chip design talent as possible). The TL;DR is that amazing new hardware and services need custom silicon, but custom silicon is only getting more difficult and more challenging to build in the near future.


Getting good yield/performance on the radio is non trivial. Aside from having domain experts you have a pretty significant investment in equipment to do the testing. Broadcom and other semis split that cost over many customers.


Tangential - in BLE land the nRF52832 and friends are pretty neat, and there is ongoing work on a Rust stack: https://github.com/jonas-schievink/rubble

I tried it and it can do a few basic things, though it’s in the early stages, apparently.


nRF52s are great BLE chips; however, the full Bluetooth spec (Classic + BLE) is orders of magnitude more complex...


Even if some courageous developer there fixes the bugs and updates the firmware, how many end-users would actually receive and apply the update? That's the problem Linux's LVFS[1] solves, but it's unfortunate that not all manufacturers support it.

I got an update for my half-a-decade-old Logitech 2.4 GHz receiver (nRF24L) for a wireless keyboard as soon as I plugged it in on Linux. I've used the same keyboard on a Mac and the official Logitech software doesn't even detect the device properly, let alone update the receiver's firmware (no issues using the device, though).

[1] https://fwupd.org/


Do you have any more info about ROM patch slots? I have never heard of this before. I assume this is a small amount of r/w memory that is somehow overlaid over specific locations in the ROM?


Correct. It's a small table of:

  address1, 4 bytes overlay data
  address2, 4 bytes overlay data
  etc
The data is overlaid over the specified addresses at runtime. On some chips it's 8 bytes instead of 4. On a typical Broadcom/Cypress chip you have 128 or 256 entries.

By the time the chip is 2-3 years in the market and still getting firmware updates, ~98% of them are used by existing firmware, so there are only 5-10 free entries by the time the chip is considered "obsolete".
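
To make that concrete, here's a rough model of what such a patch table amounts to; the names and the 4-byte overlay width are just illustrative, mirroring the description above, not Broadcom's actual layout:

  // Hypothetical model of a ROM patch table: each entry overlays a few bytes
  // of patch RAM over a fixed ROM address at runtime.
  const PATCH_SLOTS: usize = 128; // 128 or 256 entries on typical parts

  struct PatchEntry {
      rom_addr: u32,    // ROM address this entry shadows
      overlay: [u8; 4], // bytes returned instead of the ROM contents (8 on some chips)
      valid: bool,
  }

  struct PatchTable {
      entries: [PatchEntry; PATCH_SLOTS],
  }

  impl PatchTable {
      // Fails once every slot is consumed -- the situation described above for
      // chips a few years into their support life.
      fn add_patch(&mut self, rom_addr: u32, overlay: [u8; 4]) -> Result<usize, ()> {
          for (i, entry) in self.entries.iter_mut().enumerate() {
              if !entry.valid {
                  *entry = PatchEntry { rom_addr, overlay, valid: true };
                  return Ok(i);
              }
          }
          Err(()) // out of patch slots: the next ROM bug simply cannot be fixed
      }
  }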

Case in point: the Broadcom/Cypress BCM43455 chip on the raspberry pi is almost out of patch entries. Broadcom have switched to their usual tactic of stalling for years on known, reproducible bug reports.


> Case in point: the Broadcom/Cypress BCM43455 chip on the raspberry pi is almost out of patch entries. Broadcom have switched to their usual tactic of stalling for years on known, reproducible bug reports.

And it's still really buggy. I had to write a service on the RPi, and the only way to reliably connect was to restart Bluetooth before every attempt.

That kind of fix makes a person feel dirty.
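
For the curious, the hack looks roughly like this, assuming a Raspberry Pi OS style setup where BlueZ runs as the bluetooth systemd unit (the service name, sleep duration, and bluetoothctl call are all illustrative):

  use std::process::Command;
  use std::{thread, time::Duration};

  // The "restart the stack before every connection attempt" workaround.
  fn connect_with_restart(device_addr: &str) -> bool {
      // Bounce the host-side stack so the controller gets reinitialized.
      let _ = Command::new("systemctl")
          .args(["restart", "bluetooth"])
          .status();
      thread::sleep(Duration::from_secs(2)); // give the adapter time to come back up

      // Then make the actual connection attempt.
      Command::new("bluetoothctl")
          .args(["connect", device_addr])
          .status()
          .map(|s| s.success())
          .unwrap_or(false)
  }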


Sadly that's common in the hardware world.

Step 1. Have a reliable hardware watchdog that restarts every time there's a software problem.

Step 2. There is no step 2.


Such is the sad world of Bluetooth. The dirty secret of this industry is that this, while seeming hacky, is the bare minimum de-facto standard in most cases.


So given these data points, isn't it reasonable for Apple to refuse to play along with this broken tune and just roll out their own dialect of a wireless protocol? Why, if not in the name of scarcely affirmed "standards", drag suppliers through an endless contractual game, when you can direct your own capacity toward the quality standards that fit you?


I don’t follow the leap. The grandparent’s point was about the quality and terrible lack of long term support of Broadcom chips. How does that translate to issues of the standard itself?

Nobody would complain about Apple creating their own radio chips (which they seem to plan for 5G/6G). Apple creating their own standard protocols is an issue though.


Well, if the implementations of the standard are such a garbage fire, what's the point of chasing them? Just to check the "standards compliant" box while likely providing an abysmal UX and poor interoperability?

I fixated on Apple because they're often picked on for taking the highway, but on the other hand what's the point of doing otherwise? What's a common ground if it's just a pipe dream?


Just because a chip is shitty doesn't mean it's worthless. In practice, Bluetooth is quite interoperable, and reliable enough for many use cases (especially the common, better-tested ones).

Breaking compatibility with that ecosystem out of spite is not conducive to getting adoption for a better product.


Well, the original posts report some frankly tragic scenarios - so bad that they “reboot to initialize” just to keep sane - in what are some pretty ubiquitous devices. Or not?


"Reboot to initialize" is ugly as hell and very brittle, but it's good enough for most I/O devices like keyboards or headphones. If the kernel is able to properly reinitialize the chip with all of its old association information, it might even be indistinguishable from a few hundred ms of interference. (Rebooting on errors is in fact quite common for all kinds of hardware in high-radiation environments, and is a pretty standard kernel technique for working around buggy hardware.)

Now, of course, multiple nines of uptime would be very nice to have (and open up new use-cases), but 2-3 nines is still a lot better than 0.


Fine, but you're rationalizing living with poor execution. I'm rationalizing with deep (perhaps goldplating) correctness.


I'm not "rationalizing", and neither are you.

You're arguing that it's not enough to make a correct implementation, but that it's also important to break compatibility with incorrect implementations.

I'm arguing that a better implementation that is compatible with current protocols is strictly better than a better implementation that is not Bluetooth-compatible.

If you make a piece of hardware that is good (e.g. doesn't randomly crash and need to be rebooted by the kernel), why is it a bad thing for it to try to connect to some flaky BT headset?


The real brokenness here seems to be that the chips are not engineered with say ten times the patch capacity.

And the root of the brokenness is that there isn't the end-to-end awareness and acceptance that ten times the capacity is obviously needed.

Owww.


Easy to say, and I can't know for sure exactly what factors impacted Broadcom's decisions here, but I can tell you that chip manufacturers are under extreme pressure to keep costs down, which means that they may under-spec systems at times. Also, with the long design cycles involved in chip design, the patch capabilities may have been decided years in advance, before realizing how much would be needed.

In general I agree with your comment, though it’s a lot easier to say this in hindsight.


For more illustrations of bugs/vulns in the firmware: https://www.youtube.com/watch?v=7tIQjPjjJQc&t=1986s


Broadcom's firmware seems to be just absolutely terrible across the stack, NIC included. They seem to have solid product design / engineering chops, but firmware just defies them.


My question would be: what were the motivations to move to a stack written in Rust, if most of the bugs are in the closed-source FW running on the peripheral?

What would happen if consistency is lost outside the Rust domain but still in the BT stack?


I really hope it will fare better than NewBlue and BlueDroid... https://www.androidpolice.com/2020/09/14/the-rise-and-fall-o...


NewBlue died because I quit Google. I started that project and was its main motive force.


> NewBlue died because I quit Google. I started that project and was its main motive force.

Unclear why you're being downvoted here -- I seem to remember you, and I definitely remember hearing about NewBlue while we were working on Bluedroid. At the time it wasn't clear what happened. When did you leave?


I am also not sure about the downvotes, but c'est la vie. I was dmitrygr@; I left Apr 2019.


Bluedroid is the old stack. Zach and co. started the Fluoride project to clean it up, but it seems GD is their attempt to totally rewrite it.


Any insight into why Apple seems to have a much better bluetooth stack? I do a lot of BLE development that has to work cross platform, and we see constant GATT connection issues on android compared to almost none on ios.


Apple has a unique relationship to its hardware, and the money to bend vendors' firmware to their will. Half of the problem with Bluetooth is the host stack. The other half is the firmware running on the controller on the other side of HCI. If the controller ever gets screwed up, the host stack can only disconnect from HCI and totally restart the controller.
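
To make "totally restart the controller" concrete: the blunt instrument is an HCI_Reset followed by redoing initialization. A minimal sketch of just building that command for a UART (H4) transport -- a real stack goes through the kernel's HCI interface rather than hand-rolling bytes like this:

  // Build an HCI_Reset command packet for a UART (H4) transport.
  // HCI_Reset is OGF 0x03 (Controller & Baseband), OCF 0x0003, so the
  // 16-bit opcode is 0x0C03.
  fn hci_reset_packet() -> [u8; 4] {
      let opcode: u16 = (0x03 << 10) | 0x0003;
      [
          0x01,                  // H4 packet type: HCI command
          (opcode & 0xff) as u8, // opcode, little-endian
          (opcode >> 8) as u8,
          0x00,                  // parameter total length: Reset takes none
      ]
  }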


The third half is the bluetooth firmware on the other device. The fourth half is the other firmwares on the other device. The fifth half is the specification(s). The sixth half is the RF environment.


Haha -- yes, indeed. Bluetooth the spec and implementation are an absolute dumpster fire.

"Bluetooth is a layer cake of sadness" is the turn of phrase we used for a while on the Glass connectivity team. One of our project founders, Thad Starner actually apologized to me for the mess it became; apparently it was supposed to be simpler, but when Nokia took ownership back in the 90s, it started to go downhill.

Our lead on the connectivity team at the time had a crazy idea to rewrite the spec using only ACL and L2CAP, but never really went anywhere with it because of the Glass org implosion.


But those are the same for iOS, yet Bluetooth on iOS is far better than on Android.


A huge factor is that iPhone has a large, long-standing market share, with relatively few different OS and hardware versions. This means that everyone else has been able to test their Bluetooth implementations for interoperability with iPhones.


A lot of people using Apple Bluetooth hosts use them with Apple Bluetooth devices (other hosts, mice, keyboards, headphones, etc.).


Anecdotal, but my new 16 inch MBP drops BT connections all the time. I had to give up the keyboard and mouse I've used for years across at least 5 computers, mostly Macs, because they disconnected so much. Even the Apple keyboard and mouse I switched to drop occasionally.


FWIW, I also have a 16" MBP and I've also had a ton of problems with Bluetooth on this thing.

I've noticed that Bluetooth connectivity is significantly worse when the laptop is closed. You might see if keeping it open helps.

Resetting the Bluetooth module also helped resolve some persistent connectivity problems I was having (shift+option+click on the Bluetooth menubar item; choose "reset" from the menu).

Eventually this thing started having kernel panics every time I plugged something into a USB-C port and I had to send it back for replacement. Not a great experience.


Wow, never knew about that combination of keys and the fact that it brings extended options in the drop down menu. Thank you!


The 16 inch MBP is full of bugs, so it's probably a problem with the device (typing with one and still crying because of https://forums.macrumors.com/threads/16-is-hot-noisy-with-an...)


This HN thread from yesterday may help!

https://news.ycombinator.com/item?id=26625356

tl;dr Using USB3 ports can cause Bluetooth dropouts on Macs (and lots of other machines).


I think there might be some issue on that hardware where 2.4 GHz WiFi and Bluetooth share an antenna and can interfere with each other? If you're using 2.4 GHz and can use 5 GHz, give that a try.


Honestly sounds like faulty hardware or interference.


There's something odd about the 2019 MBP Bluetooth hardware. The most interesting part is that just plugging in a Cambridge Silicon Radio-based USB dongle can kill it permanently: https://discussions.apple.com/thread/250944058


Doesn't kill it permanently; there's a fix involving connecting a CSR 2.0 dongle. There's also a workaround that sets an nvram var. Overall it's an absolute clusterfuck, though, because Apple still haven't fixed the problem.


The workaround with nvram doesn't work if you already made the mistake of plugging in the CSR dongle. I don't know about the fix with CSR 2.0 dongle, it seems that nobody's selling them any more. Could work, could be an urban legend like all the reported fixes with nvram and pram resets and restoring various files.


I can confirm the 2.0 BT dongle fix works because I had that exact issue and that's how I fixed it. Of course, I still made Apple replace the laptop (was less than 2 weeks old), but doesn't really seem like they've noticed seeing how it's still not patched almost a year later.


My guess would be that they can apply much more pressure to the Bluetooth chip vendor. The buying power that Apple has for designing in a particular chip is much bigger than any individual Android manufacturer (even Samsung). That gets them the leverage to get the chip vendor to do what they want.


> The buying power that Apple has for designing in a particular chip is much bigger than any individual Android manufacturer (even Samsung)

Why does Samsung have smaller buying power than Apple? Doesn't Samsung sell more phones than them? Or is it because while Samsung sells more phones in total, Apple still has the most successful single models?


A typical manufacturer approach is to beat the hell out of your suppliers on price to make your margin, which is likely most of what Samsung does with its buying power. Apple does that but is also willing to throw money at issues and go as far as financing facilities for vendors. Where most manufacturers don't care about the device they sold the second it's out of warranty[1], Apple takes a longer-term view of the relationship. So Apple is far more likely to say 'we need this fixed and are willing to provide X million dollars and a team of our own engineers to solve the problem' while Samsung probably goes more like 'make it cheaper and make it better... and make it cheaper!'

So the reason Samsung typically has less influence is that when all you do is crush your suppliers margins to make your own, said suppliers don't tend to make much of an investment in making things better since they are incentivized to just make them cheaper.

[1] in fairness, they can't afford to: while Apple has an ongoing revenue stream from its devices, most other manufacturers don't. It's Google/Facebook/etc who monetize the devices post-sale while for the original device manufacturer it's merely a liability at that point. This is a factor in why Android has a rather dismal track record re: updates on older devices.


It's because Samsung can get away with selling total garbage. I work on an audio-related app, and a huge majority of the hacks in the app to work around bugs in phones are for different Samsung models. Still more than half of our users keep buying them.


This, but slightly revised: it's because Samsung is in the market of selling cheap hardware. Apple sells luxury products and part of their value-add is leverage with their vendors.

Apple can come to a vendor and say "these are our constraints on what we are willing to buy. Here's testing benchmarks. You are required to meet them. Failure to do so voids the purchase contract."

The vendor will then, of course, say "We'll have to charge you more if we're spinning up a test infrastructure we don't have."

And then Apple will negotiate a price point and pass the anti-savings onto the consumer.

They do this at multiple levels at multiple points in their hardware story. I met someone once who worked on the USB integration specs back in the first-couple-generation iBooks. Apple built a rig for physically testing the connectors (including multiple cycles of intentional mis-plug, i.e. flipping a USB-A connector over wrong-side-up and pushing hard against the socket). They told the vendors selling them USB sockets that the sockets had to survive the rig, or the sale was void. Vendors priced accordingly. But the resulting product is more robust than others on the market.


My social network orbits around several people who experience Apple's testing rigor -- and the stories I've heard align with the parent comment.

Apple sounds like it really is that rigorous with the quality of hardware and drivers/firmware they use.

Some of the stories of nitpicking that I’ve heard are truly awe inspiring. On the other hand, I’d hate to be the engineer on the vendor side trying to please apple. (Mostly because some crap PM or sales person made promises without consulting engineering on what’s possible with the available time and resources)


> Samsung is in the market of selling cheap hardware

I'm not sure about this. There are people who pay >1000€ for their flagship phones and believe them to be premium products, but they are just as buggy as the cheap ones. Huge amount of CPU power and impressive camera specs, though.


I've never had one, can you expound more on this topic?

The S21 Ultra seems like a very good purchase from reviews. A QHD screen with dynamically adjusted 120Hz really seems like the best spec.


IIRC that model doesn't give apps audio route change notifications when Bluetooth is disconnected, which screws up audio timing. Or something similar, each model has different bugs. But yeah, it doesn't spontaneously catch fire, so on Samsung scale it's good. Reviewers generally don't know about this stuff, because app developers have already worked around it.


Are Samsung devices buggier than other android brands, or just so much more popular within your userbase, that more bugs surface?


My impression as an Android developer (not backed by any systemic research, just personal experience) is that Samsung feels less bound by the platform standards because - due to their large market share - they can simply get away with it.


It's something like 50% of user base and >75% of device-specific bugs.


Bruv. I have a samsung. I'm touched.

We've been together 4 years, its USB port is a little loose now. Too much charging everyday, but the screen is still pristine. The camera is fine so we can go out and take photos at the beach. I'm happy man. It does what it said it would do and did it. And still doing it.


You don't get to see the hacks in the apps you're using, starting with comments like "On some Samsung phones X happens even though API documentation says Y. We work around it by Z" or don't directly pay for the work that went into discovering the workaround.


> Apple still has the most successful single models?

Not just that, every Apple device has wireless access, and Apple has thrice the operating income and more than twice the net income of Samsung Electronics.

But yes the "value" of individual devices is also part of the equation, in the sense that Samsung has a lot of cheap-ish devices with fairly short lifetimes, they're not going to fight for device quality. Apple has a very limited number of devices they support for a long time. And they're probably bringing in a lot of baggage from having been screwed over by the plans and fuckups of their suppliers in the past.

And even then, the more time passes the more they just go "fuck'em all" and move chip design in-house, not just the "main" chips (AX, MX, SX) but the ancillary as well: the WX and HX series integrate bluetooth on the SoC. There's no doubt they'll eventually go their own way on larger devices as well, the U1 is probably the first steps towards that.


I didn't realise that Samsung actually sold more phones than Apple (I've just looked at the numbers and you're right). Apple's smaller set of devices does help them a little but there isn't a lot to choose there.

I think my overall point is still valid because I have had Samsung phones for a while and have found their Bluetooth to be pretty good. This is not surprising as Samsung actually bought one of the biggest Bluetooth chip vendors (CSR) at one point, so they do have control over the full stack.


Samsung bought what was advertised as the mobile handset business after CSR failed to spot that combo BT/WiFi chips were the future and threw away their lead. Qualcomm got the rest a few years later.

("what was advertised as" because there wasn't really a differentiation in the R&D bits, so there was a somewhat arbitrary split and hasty redacting of repos given to Samsung to avoid names of other customers in comments).

Samsung's phone division and electronic parts division aren't the same thing, so there was no guarantee that the phones would buy the Bluetooth/Wifi from their new acquisition, although I hear they did eventually.


I'd imagine lots of reasons. Apple has the hardware guys to design whatever chip they need. They also have a mountain of reserve capital (Something like $1 trillion I believe?). Further, Apple's closed ecosystem means that if you want them to sell them your hardware you have to conform to their standards.

With Samsung and Android, it's a different story. There are many Android vendors, and the people producing the Bluetooth chips (Broadcom) are selling to many of them. Samsung has the ability to make their own chips, but to get everything working flawlessly they not only have to make their chips awesome but also improve the Android driver stack to work with their new awesome chips. That stack also has to be compatible with the other Bluetooth manufacturers on the market, making it a harder change to make.

In other words, with Apple and a vendor, there are pretty much just the 2 parties involved, which control everything. With Samsung and a vendor it's not just them but also the likes of Google and other Bluetooth vendors that can get in the way of really fixing things.


I get what you're saying, but Samsung is also huge and Bluetooth is used in lots of electronics that Samsung sells -- including laptops, tablets, smart watches, TVs... or conceivably could sell, like future IoT products.

If someone senior at Samsung said to their vendor "good Bluetooth or you lose the Samsung account", that would provoke some, um, intense conversations at the vendor between sales and engineering.

Incidentally Apple sometimes has more than one vendor too, so it's not just two parties. I know cases where they've had two suppliers. Displays and modems come to mind, although I've not Googled to verify.


> If someone senior at Samsung said to their vendor "good Bluetooth or you lose the Samsung account", that would provoke some, um, intense conversations at the vendor between sales and engineering.

The problem: There aren't that many suppliers left in the field, and Broadcom knows that their customers are pretty much locked in. The notable exception is once again Apple, they have proven that they can and will go and implement the technology on their own if their suppliers fail to meet their expectations.


> go and implement the technology on their own

Samsung could obviously do this too, but as you say, they lack the will - and motivation.


Besides that they also lack the amount of control over the software side. Google is the entity that controls the stack, not Samsung - their responsibility ends at the kernel / HAL interface.

So why should Samsung invest more than the bare minimum when they can't get anything measurable in return?


This is, of course, totally untrue. Samsung changes a ton of things about their flavor of Android, at every level of the stack.

Edit: Samsung has apparently already ripped out the Bluetooth stack and replaced it with their own - https://news.ycombinator.com/item?id=26650447


I get what you are saying but they are simply in different positions. Broadcom can say "Hey, there's a bunch of work here to make this not suck" and Samsung, even if they wanted to do their own thing, realizes they too would have to go through that effort to make everything work. It wouldn't be a simple matter of just making non-sucky bluetooth, they'd have to also work with the android OS to improve the bluetooth stack and no guarantee that those changes can be merged.

Apple controls their whole stack. They've already written the Bluetooth stack for their OS. They only have to service their devices.

> Incidentally Apple sometimes has more than one vendor too, so it's not just two parties.

Not the point I was making. It's not an issue with multiple vendors, it's an issue of who controls what. Those other vendors also have to conform to Apple's standards if they want to sell Apple their products. What I'm talking about is the fact that a Bluetooth device manufacturer has to conform to Google's Android standards if they want to sell their chips to Android manufacturers, not to Samsung's standards. That's where the leverage goes away.

If Samsung ever pulls the trigger and uses Tizen everywhere, then they'll be more in Apple's position. Until that happens, they need to work with Google to get stuff done.


Except that's not how it works? Chip manufacturers just care about selling chips, not about being in the iPhone or being in an Android phone. It doesn't matter that some chip works in OnePlus phones; if it doesn't work in Samsung phones, Samsung's not going to buy it.

If Broadcom says it's going to be expensive to fix, either you pay or you don't. But it has nothing whatsoever to do with "controlling the whole stack".

As an aside, and it's totally irrelevant to the above, but there's nothing other than the amount of work involved preventing Samsung from writing their own Bluetooth stack. They write plenty of custom drivers and they created a folding device prior to OS support. If they wanted to they could; they just apparently don't think it's worth the cost.

Edit: according to a comment down thread they have done exactly that.


> If someone senior at Samsung said to their vendor "good Bluetooth or you lose the Samsung account", that would provoke some, um, intense conversations at the vendor between sales and engineering.

The same thing could be said for Qualcomm's failure to support its SoCs for more than a couple of years. Device makers could force their hand.


> support its SoCs for more than a couple of years

That works, even works well, as long as device makers feel that supporting SoCs for a limited period helps them sell more phones.

Apple has changed the rules of the game by supporting its devices for longer -- and as phone upgrades become more infrequent due to plateauing technology, I think device makers will realize this.

Samsung has already committed to supporting recent Galaxy devices for at least 4 years -- at least with security updates[1]. I suspect this was also because they found that their extremely short-sighted prior policy re security updates encouraged corporate phone procurers to ditch Samsung and go with Apple.

[1] https://www.theverge.com/2021/2/22/22295639/samsung-galaxy-d...


Any movement on the Android side is helpful and hopefully Samsung can shame Google into following suit.

However, if we're going to count years where you only got a security update, the iPhone 5s is currently in its 8th supported year.


I wouldn't be surprised if Apple write their own firmware for things like bluetooth chips.


I would be surprised if they didn't. They advertise their own Bluetooth chips H1 and W1 in AirPods and some Beats products. These chips definitely have their own firmware and it would be rather ridiculous not to also have their own firmware on the host side, maybe even re-using code.


Most of them run RTKit, I believe.


There's quite a bit of evidence that they do. Apple's strength has always been at the point at which hardware is built and interfaces to the software, including firmware.


I still find it buggy on my iPhone 12 Pro, as a user at least. It often says 'connected' to my headphones (Sony WH-1000XM3s) when it's not, and connecting to Alexa devices to stream audio often fails and requires a reboot to connect properly.

I can't believe Bluetooth is still such a pain in the bum in 2021.


Buggy Bluetooth on the other end... Alexa is running Linux, and the Sony is probably running some custom Bluetooth stack.

Once you move to all Apple bluetooth, things really smooth out. It seems that Apple does way more testing/validating of their Bluetooth stack.


I had the idea in my head (so take with a grain of salt) that parts of the BT stack are underspecified such that different implementations tend to have slightly different interpretations of the standard, so the problem isn't even which vendor you use so much as going all-in on any single vendor so the devices all agree on which reading to use.

Although of course Apple might well be better anyways; one would hope that billions of dollars in R&D plus caring about quality makes a difference.


I've actually worked in this area. You are basically correct.

Sometimes you have to make a choice on which brands/chipsets you support. Devices on different ends of the compatibility spectrum can basically be mutually exclusive. IIRC if you advertise A2DP some devices supporting only HSP won't work, so you can make some hacky workaround but then your nicer A2DP equipment is harder to use. If you only need to guarantee support for X subset of devices you control, it's easy to tweak the settings so they work well together.


Actually the spec is overspecified and repeats itself multiple times in different layers. L2CAP specifies a TTL, so does RFCOMM, so does SOC, so does...


I buy this. All-Apple works great, but an Apple headset with a Windows PC doesn't seem better than anything else with a Windows PC.


They don't smooth out, for me at least. AirPods Max, for example, can just decide to transmit nothing but deafening static to the other side in the conversation.


Eh, I’m still having weird connection issues between my iPhone and OG AirPods. Less than with my old headphones, but more than I’d like


As a long time Apple user, I really don't know where that's coming from. Endless issues with bluetooth over the years, including full system crashes. Curiously iOS bluetooth stack seems to be more stable than on macOS. Or maybe it's just hardware differences.

Apple's stack does work great with Apple's hardware, though.


How many hardware devices are you talking about? That's very different than what I've seen personally or peripheral to enterprise support.


Well, at least Magic Trackpad, Keyboard etc. work really well with a Mac. Never had any issues.

Bose bluetooth headphones, third party "high end" bluetooth devices... not so great. Lately it's been better, though.


They have less hardware variability, and also their Bluetooth stack is still quite buggy but everyone works around their bugs on the device side due to popularity.


Whilst they also use a lot of Broadcom silicon, the stack they use is written 100% in house.


Could it be because they only have a handful of hardware variants to support, vs hundreds on Android? Presumably, the stack itself is somewhat sane on both sides, but the hardware's bugs and poorly specified or undefined behavior are constantly throwing wrenches in the machinery - Apple managed to fix most of them for the hardware they support, but Android has no chance as they have to support hundreds of different chips.


This is not specific to bluetooth or drivers, but I don't fully buy the thesis/deflection that Apple just has a much smaller set of hardware to support. I run a hackintosh system myself and it's more stable than Windows. I'm starting to think Apple just has stronger general QC. Windows is just as glitchy and crashy on Surface devices as anywhere else.

The counter point does apply to drivers, but it's really not just that.


Possibly, but half the guys on the dev/qa team have google pixel devices and even they throw connection errors fairly regularly.


It's having to support every phone that introduces the instability, even on the reference devices.


> Any insight into why Apple seems to have a much better bluetooth stack?

This is a joke right? My M1 BT goes out to lunch several times an hour. It is literally unusable.


A wild guess: less hardware variability.


Can you shed some light on what we’re actually looking at here? I see some Rust but most of the actual BT stack looks to be C++ code. Is the HN headline accurate?


Did you miss this on the main page?

> We are missing Rust support in our GN toolchain so we currently build the Rust libraries as a staticlib and link in C++.
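
For anyone unfamiliar with that setup, the shape is roughly: the Rust side exposes C-ABI symbols and gets built as a staticlib, and the C++ side declares those symbols extern "C" and links the resulting .a. A generic sketch with a made-up function name, not the actual Gabeldorsche interface:

  // lib.rs, built with crate-type = ["staticlib"] (or the GN equivalent).
  // The C++ side declares something like
  //     extern "C" bool rust_hci_send(const uint8_t* data, size_t len);
  // and links against the produced .a archive.

  #[no_mangle]
  pub extern "C" fn rust_hci_send(data: *const u8, len: usize) -> bool {
      if data.is_null() {
          return false;
      }
      // SAFETY: the caller guarantees `data` points to `len` readable bytes.
      let packet = unsafe { std::slice::from_raw_parts(data, len) };
      // ... hand the packet off to the Rust side of the stack here ...
      !packet.is_empty()
  }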


Right, but the headline suggests the Bluetooth stack is being rewritten in Rust while the code seems to suggest that the stack is still in C++. There isn't much Rust code in the directory relative to all of the C++.

4K lines of Rust is not a Bluetooth stack.


It's almost impossible to be worse than the existing Bluetooth stack. It's the single biggest problem I have with Google phones and it's shameful it took this long to fix.


I wouldn’t assume it’s fixed yet. Even if they’ve managed to write bug free code, you’re still at the mercy of driver supplied by the chip manufacturer.


Broadcom seems to pump out a lot of garbage in general. The SoCs on the raspberry pis are particularly terrible.


Fuchsia's network stack is also being rewritten from Go to Rust [1], and it follows a functional core/imperative shell pattern, something very unusual in network stacks [2]:

[1] https://fuchsia.dev/fuchsia-src/contribute/contributing_to_n...

[2] https://cs.opensource.google/fuchsia/fuchsia/+/master:src/co...
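
For anyone who hasn't run into the term: "functional core, imperative shell" means the protocol logic is written as pure state-transition functions, with a thin outer layer doing the actual I/O. A toy sketch of that shape (made-up types, not Fuchsia's actual code):

  // Functional core: a pure transition from (state, event) to (new state, actions).
  enum Event { PacketArrived(Vec<u8>), Timeout }
  enum Action { SendAck(u32), DropConnection }

  struct ConnState { next_seq: u32 }

  fn step(state: ConnState, event: Event) -> (ConnState, Vec<Action>) {
      match event {
          Event::PacketArrived(_payload) => {
              let seq = state.next_seq;
              (ConnState { next_seq: seq + 1 }, vec![Action::SendAck(seq)])
          }
          Event::Timeout => (state, vec![Action::DropConnection]),
      }
  }

  // Imperative shell: the only place that touches sockets, timers, etc.
  fn run_shell(mut state: ConnState) {
      loop {
          let event = wait_for_event();             // blocking I/O lives out here
          let (next, actions) = step(state, event); // pure, easy to unit-test
          for action in &actions {
              perform(action);                      // side effects also live out here
          }
          state = next;
      }
  }

  // Assumed stand-ins for the real I/O layer, so the sketch is self-contained.
  fn wait_for_event() -> Event { Event::Timeout }
  fn perform(_action: &Action) {}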


Interesting; makes you wonder about the potential for code sharing between that and Android, or even Linux as well. The Linux kernel developers are open to allowing some Rust usage for things like drivers at this point. So the time seems right for this kind of thing. I guess there is also some decent code from e.g. Redox OS that might be adapted. Licensing (MIT) should not be an obstacle for this as far as I understand it. Likewise Fuchsia also uses GPL2-compatible licenses (MIT, Apache2, BSD). So, there's some potential to avoid reinventing a few wheels across these platforms.

At face value, protocol stacks such as TCP/IP or bluetooth, are great use cases for a language like rust in a resource constrained environment (battery, CPU) where you are looking to get high throughput, low latency, etc combined with decent zero cost abstractions so you can make some strong guarantees about e.g. correctness. A good high level implementation, might be usable across different OS kernels. Of course these kernels have very different designs so it might make less sense at the code level than I imagine.

I do wonder about where Google is headed with Fuchsia. Do they have a plan at this point for getting OEMs to want to switch to that? I imagine e.g. Samsung might have some hesitations about being even more dependent on Google as a supplier.


> I imagine e.g. Samsung might have some hesitations about being even more dependent on Google as a supplier.

Does it make them more dependent? Google is effectively the only upstream for Android, and Fuchsia is open source, so it seems like it should be the same?


One of them is based on mobile Linux, which Samsung also uses for things like Bada, and which has a lot of hardware vendors supporting it with drivers, including Samsung itself of course. Bada actually originated out of Nokia's MeeGo and Maemo mobile OS. That predates Android, and early Android versions ran pretty much the same Linux kernels. The first devices running Android were actually Nokia N800s, before they did the first Nexus.

Fuchsia is open source but closed-source friendly (because of the license). I suspect that's actually the main non-technical argument for Google to be doing this: Android is too open and they've been trying to fix that for years. Apple has a similar benefit with the BSD internals of iOS and macOS. Still OSS in part but mostly not, and Apple has not bothered with supporting separate Darwin releases for a long time.

So, like with Android, I'd expect a fair amount of closed source secret sauce is going to be needed to run Fuchsia. More rather than less. I doubt Google free versions of Fuchsia are going to be a thing like it is a thing with Android. Google is doing this to take more control of the end user experience. Just like Apple does with IOS. Letting Samsung (or anyone) bastardize that is not really what they want here.

I'm guessing, Samsung actually wants less of that Google secret sauce at this point rather than more. They are trying to differentiate with exclusive features, their own UX, their own apps, and services, etc. I'm expecting a lot of OEMs are going to have a similar mindset. Especially the Chinese ones currently shipping flavors of Android without a Google license for the play services on the wrong side of the current trade wars (like Huawei). Google has got their work cut out there trying to get OEMs like that to swallow Fuchsia. I think, Google is going to end up supporting Android for a long time because of this even if they eventually launch Fuchsia (which in my opinion is not a certainty yet). The simple reason for this is that the alternative would be walking away from a chunk of the mobile market that they currently exploit via their play store. I don't see them surrendering that and I don't think they would want third parties to continue releasing Android without Google either. So, the only way to prevent that would be a long term Android development strategy regardless of whether they actually release Fuchsia or not.

So, reusing code across Android and Fuchsia makes a lot of sense.


Bada doesn't have anything to do with Linux, at all. It was an OS using a stack similar to Symbian.

You mean Tizen.

Which, just like Android, shares only the Linux kernel with Linux; it now has its own C++ stack and .NET Core/Xamarin, and there are still some Enlightenment leftovers.


> Fuchsia network stack is also being rewritten from Go to Rust

This Android Bluetooth stack is mostly in C++. Only part of it (about 4K lines) is written with Rust.

The headline is misleading.


This is great news. Hopefully it fixes a lot of the jank that I've always experienced with Android Bluetooth. I don't think I've ever had a smooth experience - even with a supposedly flagship device (Pixel 4a!) I've encountered all of the following problems:

* Devices getting stuck at max volume (thankfully not headphones)

* Devices getting stuck at really low volumes

* Devices randomly switching between absolute volume and relative volume (not really sure how to describe this, but sometimes changing the volume on the phone changes the volume on the receiver, and sometimes it changes the mix volume on the phone only (like an aux output would behave) and keeps the volume on the receiver)

* Needing to enable Developer Settings to change the Bluetooth protocol version and other wacky stuff that I just shouldn't have to do [0]

* Headphones cutting in and out when exercising, like the phone can't figure out it needs to stay in the higher of two radio power profiles that it's switching between, as the receiving antenna on my workout band moves 2-3 inches away from the phone and back again

[0]: https://www.reddit.com/r/GooglePixel/comments/8hbcuu/the_100...

Bluetooth has been awful on Android for a long time. I've never not had to futz with it to get it to work. I hope this is a move toward making it as seamless as it should be. I couldn't imagine trying to figure all this out as a non-technical user.


That's bizarre. I have a Samsung phone with Samsung Bluetooth earbuds and I've only ever experienced a single connection issue in the whole year I've owned them and they are a daily driver for me. I wonder what the difference is.


Samsung phones actually use their own Bluetooth stack, not the one from AOSP Android. This is why the Bluetooth feature set sometimes doesn't match with the others.


Well that solves that. It seems much more reliable.


Generally, Samsung's bluetooth implementation has worked better than any other Android I've used. On top of that, Bluetooth seems to work better if both devices are from the same vendor


Samsung bluetooth is different from Android, and it has been much better for many years now. In fact I use Galaxy Buds with my S10 and it is an amazing seamless experience. No lag, no disconnections, great audio quality, and I don't even need to have bluetooth enabled for them to connect, just need to open the buds' case. I don't know how that last part works.


I have not had any issues with Bluetooth on standard Android or Samsung devices.

What I _do_ have problems with is the stupid accompanying apps on Samsung phones (Wear and Buds) that are reinstalled automatically every time I reconnect.

Dear Samsung, how many times do I have to refuse the ToS and delete the app before you get my point?

(I don't use the apps because I don't want any of the "smart" functionality and I don't like Samsung sharing my data with unspecified third parties)


I like the Gear app, it has many options to customize the buds' sound and gestures. How else do you propose they deliver those features and software updates to users? Keep in mind that Samsung needs to support other Android and iOS phones too.


I believe the user should not have to choose between getting firmware updates and not signing away their personal information.

Let me explain: on the first run the app demands access to lots of personal data, including contacts and location data, while at the same time stating they may share this data with partners and third parties. If you don't agree to _all_ of these, the app will not start (but it will continue to nag you).

If Samsung wants to make the app a requirement, they should remove the analytics and also let the user choose whether to give the app all these permissions.


Same, S10 + Galaxy Buds+ (also Sony's XM4), and I never had a single issue.


I get these same issues on a Pixel 3a! The most common ones (~daily) are:

- volume stuck low or high (on headphones :/) - "fixed" with a quick bluetooth off/on cycle

- volume okay but not responsive to changes in system volume

I've become used to these issues but I'd love a new driver that made them go away!


All my Bluetooth woes from iPhone finally went away when I got a Pixel last year. The iPhone is still here and being used, but not for exercising due to the connectivity issues.

I think it’s just Bluetooth itself being cursed.


What’s interesting is the iPhone Bluetooth experience with Apple products is excellent. I remember my partner used to have an Android wear watch for her Android phone and it was constantly disconnecting and having to be disconnected/reconnected to fix issues. When I got my Apple Watch, I couldn’t believe how smooth the experience was. To the point where if you didn’t know it ran off Bluetooth you would think it is some new protocol. Similar experience with AirPods as well.


I’ve had some issues with AirPods and AirPods Pro, but they are much rarer than other bluetooth devices I’ve used. I use AirPods Pro several times a day and for hours a day so even with a very low failure rate I’d expect things to break occasionally.

re:Watch, it is likely using WiFi at least some of the time. My Watch shows up on my WiFi and conveniently connects using the same network as my phone, so it apparently knows the WiFi credentials from the phone, at least on a personal network. I never explicitly set that up, and getting Watch to stay off WiFi isn’t something I persisted at - mainly because it does help create a more perfect connection experience than is possible with Bluetooth.

I discovered this initially when I was charging my Watch, while well outside of bluetooth range but just barely in range of WiFi.


> I think it’s just Bluetooth itself being cursed.

Hence my desire for a headphone jack.


Hah, I've experienced the absolute/relative volume bug. Sony headphones sometimes end up connecting with independent (relative?) volume controls and then switching the volume control from independent to absolute when you use a track skip gesture. The volume goes to the max instantly as a result. One of the reasons I'm staying away from wireless Sony headphones.


Yep that happens to me every now and then..


Hopefully they fix the Bluetooth issue with YouTube and Chromecast. It is such an odd issue for me. Chromecast will pause the video because YouTube demands to have the Bluetooth audio routed back to the app instead of the TV (I have a specialized Bluetooth audio transceiver hooked up to my TV because of my disability). YouTube keeps forcing the Bluetooth audio back to my phone instead of the TV a few seconds after casting to my TV. This causes the cast to pause the video because my phone detects that the audio switched to a different source (YouTube). The only way to stop that from happening is to swipe away the YouTube casting notification after casting the video from my phone; then YouTube stops doing it.

Honestly, I think the issue is Bluetooth itself; it's an amazing but extremely fragile technology. My Win 10 laptop craps the bed with its internal Bluetooth, and it craps the bed with a USB Bluetooth transceiver as well. Microsoft has been having issues with Bluetooth. The same goes for OS X; Apple has their issues with it. They recently released an update to fix the Bluetooth issue in Big Sur on M1, and that didn't fix the issue.


Wow, that's awful. I've had a large number of Pixel phones and bluetooth has been basically flawless (other than an old Pixel 1 where I think the hardware actually failed). I guess I have been lucky with the devices I used with the phones.


Why would changing code from C++ to Rust fix those issues? It looks more like bugs in logic rather than the language.


Of course I can only speculate, but I see two primary reasons to suspect improvement:

* It's a purposeful re-write: not just because it sounded like a fun idea, but because there are reasons to redo the stack. Memory safety is a main one, given the choice of language, but presumably cleaning up the logic and making it better overall is another.

* Broken-windows theory of software development: Writing in a sloppy, old, foot-gun language encourages bad code that just barely works for the happy path. Writing in a language that is far stricter and requires intention and design makes one think a little more critically about the logic.


Shit like that is why I always wondered where all the claims of Android being superior to iOS came from.


One isn't better than the other. This thread itself is full of people saying the same shit about both android and iOS.


That would be Google's second (at least!) bluetooth stack in Rust, the first one being in Fuchsia.

https://fuchsia.googlesource.com/fuchsia/+/refs/heads/master...


The headline is misleading. Most of this code is C++. They included some Rust, but the core isn’t Rust.


Most of the code is in Rust: more than 75% of the entire stack. One component in the stack, the bt-host driver, is currently still in C++ as the oldest component. That part of the code only deals with low-level connection management and speaking HCI with the hardware, as well as hosting the GATT implementation, but everything is handed off to other components that are all written in Rust. Parts of that remaining component have already been migrated to Rust over time as well, including GAP. All of the profiles and upper layers are entirely in Rust as well.


The Android IPC driver (binder) is also being rewritten in Rust. It takes advantage of the upcoming Rust driver support in the Linux kernel. It is an obvious choice for a memory-safe rewrite since all Android processes (including sandboxed ones) have access to binder.


Hello,

I'm actually one of the main binder userspace maintainers in my day job there (opinions are my own), and I haven't heard about this. Do you have a reference? What has happened is that there is a userspace shim over libbinder called libbinder_rs which provides Rust support for binder, but AFAIK the kernel driver and main userspace lib are remaining in C++. Still, would be cool.

Your Friend, Steven Moreland


Hi! Not a Google employee, but I'm volunteering on the Rust-for-Linux project which is bringing Rust bindings to the upstream linux kernel. A Google engineer is working on porting binder to Rust and is contributing as he implements it. See his comments here[0] and here[1].

[0] https://github.com/Rust-for-Linux/linux/pull/145 [1] https://github.com/Rust-for-Linux/linux/pull/130


Ouch! Thanks for the refs.


Thanks! I searched for this rewritten version and couldn't find anything.


Now we need safer NDK APIs, even C++ bindings would be better than nothing.

Additionally, exposing some NDK-only APIs to ART would also be welcome from a security point of view.

And since we are at it, support Rust on the NDK LLVM toolchain.


This is a big f'n deal for Android, IMHO


In the "big deal" sense, i'm always curious in what way. Eg is it a source of constant problems? Where not only a rewrite, but specifically a rewrite in Rust, would prevent a lot of issues?

Or is it more of a "What if" thing? Ie there's not many problems currently, but the liability is a huge deal?

to be clear i work in Rust, use it for all my projects, etc - i'm a fanboy, but i also recognize there's a lot of hype. I'm always keeping an eye out for the Rewrite It In Rust (RIIR?) meme vs actual needs.

Which isn't to say that i think people _need_ to have a reason to use Rust, i use it for everything because i (and my team) prefer it - but i think the meme is destructive.. so i'm always looking for it heh.


Binder has been the source of a number of Android security vulnerabilities.


Fun fact: Google's from-scratch Bluetooth stack for Fuchsia has been written primarily in Rust since its inception a few years ago.

https://fuchsia.googlesource.com/fuchsia/+/master/src/connec...


It seems like Rust is really catching on.

To be honest, I didn't pay much attention to it for a while -- it felt like it might have simply been that day's "flavor of the day", destined to sink once the next flavor became popular.

Now, there's a real problem to be solved. But I thought a simpler approach would be needed (e.g., Zig or something like it). I guess that may still happen, but seems more and more like Rust is here to stay.


I suspect that Zig will still end up eclipsing Rust - not because it's a "more powerful" language, but because it's much easier to learn. Lower barriers of entry seem to matter more than mere "power" - see Common Lisp's multi-decade feature lead being ignored, and Python leapfrogging other more capable languages (Common Lisp) or equally capable ones that came first (Ruby) due to ease-of-use.


Zig is too uninteresting to eclipse anything. It's the same problem as D's -betterC option. It's not better-enough to motivate switching away from C in places where C is still used, and it's far too stripped down & basic to attract the majority that's gone to C++, Rust, Go, ObjC, or Swift (or even "full" D). It doesn't offer much to justify switching costs. If C++11 had never happened then maybe Zig would attract the C++ crowd, but today?

Zig also seems far too opinionated & even self-contradictory. Like no hidden control flow means that you can't have operator overloading because apparently the + operator is invisible to people or something. And if it could be overridden it could call a function (gasp!) or throw an exception (double gasp!), followed immediately by a big section about how the + operator internally is hidden control flow and can throw an exception (overflow checking).

It then also constantly claims itself as being small & simple, but it has compile-time reflection & evaluation for templated functions - which is widely considered the main complexity of C++. I think Zig is better overall having this feature, I love constexpr & friends in C++, but compile-time code generation & evaluation over generic types is also not "simple" nor "small".


> Zig is too uninteresting to eclipse anything.

As someone who knows C and not Zig, Zig is very interesting. It has incremental compilation, in-place binary patching, the ability to use code before declaration in a file, compile-time code execution, and extremely low compile times. Rust itself doesn't have most of those.

Also, as Python illustrated, a language doesn't have to be interesting to be popular.

As Python also illustrated, a language can be opinionated and popular. I'm growing more and more convinced that useful languages have to be opinionated - opinions have to be somewhere, and if they're not in the language (where the entire community is forced to use them), then they'll have to be in individual code-bases, where you'll get a bunch of fragmentation (which I think was one of the things that killed Lisp).

Now, Zig is very imperfect - no hidden control flow/operator overloading, no REPL, and a weak type system, most notably - but it's better than C in almost every way, easier to learn than Rust, and way more conceptually scalable than FORTH (which has a more "beautiful" design).


> Also, as Python illustrated, a language doesn't have to be interesting to be popular.

Python was very interesting, or rather, Python is the thing in its interesting group that survived. Highly flexible scripting language with a "batteries included" library set is a very compelling sales pitch, even today. It was Perl but readable, and in this case simply being "readable" was interesting enough to cause it to win out (and also Perl's internal drama)

> As Python also illustrated, a language can be opinionated and popular.

Python's "style guide" is opinionated, but Python the language itself isn't that opinionated. Missing ++ & -- are about the only contentious "opinionated" aspects to it. You can fiddle with basically everything else however you want, though, and the language made adjustments to make that even more possible (eg, replacing the print keyword with the overridable print() function).

Critically Python's standard library didn't really get any special treatment from the language. When a language lets the standard library or builtins do things that you can't, that's when it gets really questionable.


"Perl but readable" and "batteries included" are true but leave out some equally important things: The explicit reference / dereference business in Perl makes handling nested data structures pretty annoying and strongly guides programmers to avoid those, which has a lot of compounding effects to structure of prgorams. Also Python embraced multiplatform early on, and had eg excellent Windows support, whereas Perl was always Unix first. Also the culture is important. Python was always beginner-friendly whereas Perl people were less likely to suffer fools gladly.


> It has incremental compilation, in-place binary patching, the ability to use code before declaration in a file, compile-time code execution, and extremely low compile times. Rust itself doesn't have most of those.

Rust has all of those except in place binary patching and fast compile times.
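
For illustration, here's a minimal sketch of Rust's compile-time execution via `const fn` (the `fib` function is just a toy example):

    // Runs at compile time when used in a const context.
    const fn fib(n: u64) -> u64 {
        let mut a: u64 = 0;
        let mut b: u64 = 1;
        let mut i: u64 = 0;
        while i < n {
            let next = a + b;
            a = b;
            b = next;
            i += 1;
        }
        a
    }

    // Evaluated by the compiler; the value is baked into the binary.
    const FIB_10: u64 = fib(10);

    fn main() {
        println!("{}", FIB_10); // prints 55
    }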


Rust's "compile-time code execution" is a horrendously complex joke, and it's compile times are so atrociously long that the lack of those two things you mentioned is even worse than in a language like C (with somewhat-sane compilation speed).


> no REPL

Personally I don't find it necessary, but the proposal for REPL has been accepted: https://github.com/ziglang/zig/issues/596


A good example of what happens when the language is not sufficiently opinionated is the modern-day JavaScript/Node.js ecosystem.


Incremental compilation, edit-and-continue are available in Visual C and C++.

REPLs do exist for C and C++.

Zig's security story is hardly much better than using something like Free Pascal, with even fewer libraries.


> Incremental compilation, edit-and-continue are available in Visual C and C++.

In a particular toolchain not available cross-platform - not comparable to Zig having it available in the reference implementation, which is open-source and cross-platform.

> REPLs do exist for C and C++.

Hacky, nonstandard ones with limitations and altered semantics that aren't included in any of the major IDEs. Not remotely comparable to what's provided with SLIME.


> It's the same problem as D's -betterC option. It's not better-enough to motivate switching away from C in places where C is still used, and it's far too stripped down & basic to attract the majority that's gone to C++, Rust, Go, ObjC, or Swift (or even "full" D).

I've been interested in D since the early days (back when it had two competing standard libraries) - I think you're kind of misrepresenting why D never caught on. It wasn't that people weren't interested in a better C++; it's that D was unsuitable for a lot of scenarios where C++ was used, because they decided to include GC in the language and it needed a runtime that supports it. This put D more in the C# alternative camp than the C++ alternative camp, and it was harder to port (I don't know if D has working iOS or Android ports now; it didn't have them a few years ago when I last checked). And as a C# alternative it's straight-up worse across the board (C# has quite strong native interop so D doesn't really win much, and the tooling, ecosystem, and support are levels above).

If someone came up with something ala D (strong C/C++ interop, fast compile times, good metaprogramming, modules, package manager) without the GC and LLVM based (so it's super portable out of the box) I'm sure it would gain traction. The problem is that's a huge undertaking and I don't see who would be motivated to fund such a thing.

Rust exists because it solves a real problem and places like this BT stack seem like perfect use case due to security aspects - but the security aspect also adds a lot of complexity and there are domains that really don't care about it - they just want sane modules, modern package management and fast compile times.

Go has its own niche in application/network programming.

Seems like modern system languages get purpose built and general purpose ones are higher up the stack.


> If someone came up with something ala D (strong C/C++ interop, fast compile times, good metaprogramming, modules, package manager) without the GC and LLVM based (so it's super portable out of the box) I'm sure it would gain traction.

D themselves did that, that's what I was referring to: https://dlang.org/spec/betterc.html


This happened recently (in D timeframes) and is still a hack that basically creates a different language that's a subset of D - which is the dirtiest way to fix an initial design mistake. If D had come out as betterC from the get-go, I guarantee you the adoption would have been much better; this way you're stuck with the legacy decisions of a language that was never really that popular to begin with (unlike the C to C++ story).

Also, back when it launched, LLVM wasn't really a thing (GCC was probably the closest thing to a portable backend, but it wasn't nearly as clear-cut, especially with the licensing and all that anti-extensibility attitude), and D having its own custom backend was also an issue.

I applaud the effort, but at this point I think it will never get mainstream popularity like Rust. I'm sure it's useful to people, but it had its time in the spotlight, and I don't see where the momentum shift would come from.


> It's not better-enough to motivate switching away from C in places where C is still used

I suspect zig will eventually fill this niche. Proper arrays and strings and better compile time execution support while still being a small, simple, explicit language are quite significant improvements on C.


But why wouldn't I use Rust in those scenarios, which gives me even more safety & compile-time bug checking? Zig isn't actually a small or simple language, after all, that's just marketing bullshit. Manual, mostly unassisted memory management isn't simple, and compile time evaluation & reflection isn't small. It keeps making claims about being simpler because it doesn't have macros or metaprogramming, but that's incredibly deceptive at best since it does have compile-time code generation (aka, metaprogramming). It's how Zig's printf function works! https://ziglang.org/documentation/0.7.1/#Case-Study-printf-i...

And as a bonus Rust includes a package manager today, instead of a coming eventually promise. So I'm not seeing why I'd ever go with Zig over Rust if I was migrating off of C today?


> Zig isn't actually a small or simple language, after all, that's just marketing bullshit. Manual, mostly unassisted memory management isn't simple, and compile time evaluation & reflection isn't small. It keeps making claims about being simpler because it doesn't have macros or metaprogramming, but that's incredibly deceptive at best since it does have compile-time code generation (aka, metaprogramming).

As VP of marketing bullshit I recommend you double-check your source of information, as we never claimed that Zig has no metaprogramming; in fact comptime is mentioned on the front page of the official Zig website, and the code sample on the right of it shows comptime metaprogramming too.

That said, if you need something stable today, then Rust is undoubtedly the better choice.


I think Zig makes quite a bit of sense for microcontrollers, where you need really precise control of things, you avoid lots of bugs simply by having all memory statically allocated, and you need to do a lot of unsafe bit twiddling anyway to interact with hardware.


Zig won't disallow any correct programs, Zig would compile faster, and Zig does appear to be simpler, even though in principle you could come across specific Zig programs that aren't simple due to crazy metaprogramming.

If I imagine a world where Zig is 1.0, and has the same tooling/ecosystem as Rust, and I want to make a single player game from scratch, I would probably pick Zig over Rust, and Zig over C or C++.


I think Zig is really interesting and has a lot of great ideas, some of which I hope Rust takes inspiration from. They've also done some difficult engineering work to make parts of the development experience feel amazing, like bundling clang and libc implementations for a variety of platforms so that cross-compiling works out-of-the-box. The Zig cross-compiling story is the best I've ever seen of any language, bar none.

That being said, I do think Rust's memory safety story is a game-changer for systems programming. We seem to be in a programming language boom, with lots of new and interesting languages being developed. I hope some of them iterate on the concepts that Rust has developed so we can do even better in the future! I don't think anyone involved in Rust would claim that it's the best we can do, or it solves every problem perfectly.


Zig's lack of operator overloading means math code will be illegible gibberish, which kills any possible interest I had in it.


Why would you need operator overloading in a low-level language? It's not like you are gonna write Jupyter notebooks with it. Zig is not meant to replace MATLAB, Python or Julia. I would use it instead to build the backbone of such a data science platform.


> Why would you need operator overloading in a low-level language?

Have you ever heard of matrices and vectors? You need them in a lot of DSP (digital signal processing) applications, like audio and video filters, or in 3D graphics. Being able to write

    x = m * y
instead of

    x = m.vector_mult(y)
makes life so much more pleasant.
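
For comparison, here's roughly what that overload looks like in Rust -- hypothetical Mat2/Vec2 types, just a sketch:

    use std::ops::Mul;

    // Hypothetical 2x2 matrix and vector types, only for illustration.
    #[derive(Clone, Copy)]
    struct Mat2 { a: f32, b: f32, c: f32, d: f32 }

    #[derive(Clone, Copy, Debug)]
    struct Vec2 { x: f32, y: f32 }

    impl Mul<Vec2> for Mat2 {
        type Output = Vec2;
        fn mul(self, v: Vec2) -> Vec2 {
            Vec2 { x: self.a * v.x + self.b * v.y,
                   y: self.c * v.x + self.d * v.y }
        }
    }

    fn main() {
        let m = Mat2 { a: 0.0, b: -1.0, c: 1.0, d: 0.0 }; // 90 degree rotation
        let y = Vec2 { x: 1.0, y: 0.0 };
        let x = m * y; // reads like the math
        println!("{:?}", x);
    }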


In Zig you write libraries that should be explicit and maintainable.


    x = m * y
is even more explicit than

    x = m.vector_mult(y)
because the * operator is side-effect free by convention, while a method like vector_mult() might or might not be mutating (i.e., it could work like the *= assignment operator).


I use operator overloading in C++ all the time, you seem to be looking at it from a data science perspective, but other fields use math also, such as gamedev.


I would have imagined that in game dev you prototype first with a high level language and then you would write the math code. How much of game dev is actually math code, aside from critical components like rendering, physics, etc.?


No that is not how it is done, the math is written directly in C++.

If you have fast iterative build times, lots of assertions, and good debugger there is no reason to waste time writing it in another language first.

Almost every system uses basic vectors and matrices/quats, most gameplay code also uses them etc.


Consider the fact that QuakeC had a native vector datatype back in the day, despite the fact that all rendering and physics was done in native C code.


3D rotation is maths. How many 3D game objects don't rotate anything in gameplay?


How is adding `Duration` to `DateTime` related to Jupyter notebooks? I use that much more often than element-wise matrix multiplication.
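
For what it's worth, Rust's standard library already leans on exactly that kind of overload (SystemTime rather than a DateTime type, but the idea is the same):

    use std::time::{Duration, SystemTime};

    fn main() {
        // SystemTime implements Add<Duration>, so "time + duration" reads naturally.
        let deadline = SystemTime::now() + Duration::from_secs(30 * 60);
        println!("{:?}", deadline);
    }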


I share your view partially. As you said, code generation is difficult and it can get messy pretty fast, but this is the beauty of Zig, you have these features tightly packed, there is a capped number of ways for doing things. In Zig you learn how to use the wand properly, instead of being distracted by trying different sizes and colors of wands. While many languages focus on the breadth of ways of doing things, Zig is all about the depth of a small set.


Sounds a lot like Go. For me, that was a huge problem with Go.

I can only speak for Go, not Zig at all, but not giving me the "wand colors" meant that when i still needed colors to solve my problems, i had to invent them myself. .. okay this analogy breaks down there, but yea. The need for basic things like iterators, enums, and sometimes even generics didn't go away in Go. They didn't stop being extremely useful patterns or abstractions. They're just missing.

So what do you do with something that is still useful, possibly needed, but missing? You reinvent it. Very basic behavior like Iterators, Maps, etc become separated by piles of functions spread out all over the page. Yea, it's all simple - no complex features, but also no way to express that logic tightly, quick to reason about. Go wears down your scroll wheel in my experience (~5 years).

Would i have the same complaints about Zig? Your comment leaves me feeling like i would.


> So what do you do with something that is still useful, possibly needed, but missing? You reinvent it.

Yup, see for example java.lang.Comparable which is basically just the standard library going "yeah the language screwed up, here's your operator overloading"


I've been using Python since it was version 1.4.

Back then the main draw of Python was supposed to be the "one way of doing things" - Python originally started as a teaching language.

And look at Python today - there's what, five different "standard" ways of packaging libraries? (And why is "packaging libraries" even a thing?) Instead of "batteries included" we get at least four different ways of doing every common task: the stdlib Python 2 way, the stdlib Python 3 way, the "standard" community third-party synchronous library and the "standard" community async one.

This is just how it always is. Every language starts with the goal of being small, easy to understand and beautifully composeable.

The cruft builds over time because people eventually want it and because none of it ever really goes away due to backwards compatibility.

I think it's best to make peace with this fact, learn to live with the cruft and accept existing languages rather than switching your entire stack every three years trying to chase an unobtainable dragon.


I think Zig could end up being much more popular for domains where Rust's safety and correctness features are less important. Something like a Bluetooth stack is exactly where Rust is ideal though imho. Same for crypto libs.


I believe the number of domains which benefit from Rust’s safety features combined with its runtime performance are vast. IoT, hardware drivers, autonomous systems, embedded systems, (cars, drones) infrastructure, etc. I would choose Rust in these scenarios if I have choice.

I understand that the language has a steeper learning curve but it’s an upfront cost compared to C (or Zig?) where you have to put even more effort later on chasing the same bugs which Rust could’ve protected you from.

I don’t know Zig well enough so I’m not arguing against it. It’s just what I think about being safe vs being easier to learn.

But I see your point. At the end of the day, the growth of the language happens almost organically and might not follow the logic I put forward.


I think of Zig as a quarter step between C and Rust. You don't get memory safety, but you DO get better handling of undefined behavior, and option types, and better protection from array bounds overruns.

So like a single player video game, it might be an easier overall choice, in a hypothetical world where ecosystems are similarly fleshed out.


Common Lisp had itself and its community to blame. Their dependency managers were decades behind other scripting languages. Poor defaults for build tooling. Even now building a statically-linked binary is an exercise in frustration.

Despite it being an ANSI-specified language, the most popular libraries focus only on SBCL. CL is like Lua, where most interpreters never achieve 100% compatibility with each other. The lack of Emacs/Vim alternatives forces beginners to adhere to their dogma. Adhering to their cargo cult might be reasonable if they were the dominant language and culture. But they are not. Software engineering classes in university teach how to make Java AbstractSingletonProxyFactoryBeans first and caml/lisp much later. Common Lisp had it coming when it had its lunch eaten by Clojure. They were an old irrelevant relic that sat on their ivory tower and refused to improve or confront the status quo beyond empty words.

And it is not like the Common Lisp community lacks resources. They claim their language is used at Google via ITA Software and in GOFAI through Grammarly, and even at the bleeding edge of computing through Rigetti. Then where are all the maintained, up-to-date tooling and libraries? Do a quick search and almost everything is unmaintained or half dead, with the usual generic excuse being "we are ANSI specified, libraries from twenty years ago will work perfectly fine".

HN is an echo chamber for the greatness of Lisp where every commenter would worship at its church before going back to coding JS/Python/Java on Monday.


"We were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp."

- Guy Steele, Java spec co-author


You forgot Scheme.


Rust isn't that hard to learn. I teach it to college sophomores who only know Java, and within 2-3 months I have them writing parsers and interpreters in Rust. In fact these students are requesting our courses to be taught in Rust, and have never heard of Zig.

I think that while the language is a little complicated, this is tempered by how nice the tooling is. I consider the borrow checker to be my TA, as it actually helps the student write code that is structured better. When they go on to write C and C++ in later courses, their code is actually more memory safe due to their habits having been shaped by Rust.


> this is tempered by how nice the tooling is.

It's fairly mixed. Compiler warnings and errors are great. IDE integration is improving. CPU profiling with perf/hotspot is fine, albeit memory-hungry. Debugging and memory profiling is still bad compared to java.


Heh, figures that the only person with actual experience systematically teaching the language, is downvoted here.


I'd imagine given another language (e.g. Python), they'd learn a lot faster and be a lot more productive throughout the course.


It's a systems course, so we use a language targeted at writing systems. It used to be C/C++, now it's Rust.


That's nice. At UPB in Bucharest we use C for the OS course and Go/Java for the distributed systems one, though the DS course involves heavy theory/math and less coding, and implementations in Go/Java are a big bonus. I don't know where you teach, but our students would probably be confused by Rust and their time would be spent elsewhere instead of being focused on the actual course content.


Possibly but Python is a completely different level of programming too.

I do wonder if Rust is easier or harder than other comparable languages like C / C++ when the person has no prior knowledge of programming.

I would say just the ease of having a hello world and the ease of the Rust book would make it easier to get to grips with. No dealing with complex build systems and compiler flags at the start.


> I do wonder if Rust is easier or harder than other comparable languages like C / C++ when the person has no prior knowledge of programming.

I can answer this because I previously taught the course in C and C++. The students supposedly have some knowledge of Java but they seem to always have forgotten all of it when they reach me.

Students learning C use all of the footguns you can imagine. The biggest problem for them is writing code that segfaults, and their code always segfaults. It's not so much a problem that it does, but that there is 0 useful feedback as to why the segfault occurred and where they could potentially look to fix the problem. The bigger issue for them is memory management. They frequently walk into the use after free, double free, and memory leak walls. They also have significant trouble with N-dimensional pointer arrays. They have trouble allocating the appropriate memory for these arrays and can't seem to visualize the layout of memory.

C++ is a little better because they have experience with Java, but they are frequently mystified by constructors/destructors and how to manage memory manually (they only have experience with a GC), and template programming is always an issue. But they still run into the same issue with segfaults.

Rust makes all of this go away. They don't have to manage their memory manually, and they don't encounter a single segfault. Ever. We are able to just sweep all of those problems under the rug, and move on to actual content. With C/C++ we focused a lot on the language with the hope we could use it as a tool eventually, with Rust we use the language as a tool and focus on the wider world of applications for which that tool is useful.
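
To make that concrete: here is the kind of off-by-one that segfaults mysteriously (or silently corrupts memory) in C, but that produces an immediate, readable diagnostic in Rust (and the use-after-free / double-free class of bugs doesn't even get past the compiler). Just a toy sketch:

    fn main() {
        let grades = vec![90, 85, 78];
        // An out-of-bounds index panics with "index out of bounds: the len is 3
        // but the index is 3" instead of scribbling over memory.
        let g = grades[3];
        println!("{}", g);
    }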


NOT TRUE!

You are doing it wrong if you are manually managing memory in Modern C++.

C++ has had RAII built in since before Rust came on the scene.

I've never had a situation where I had to manage memory manually during my professional career in C++

If your students face seg faults, please show them Valgrind. Valgrind is better than GDB when it comes to seg faults. It can show you where exactly the error occurred.

Edit: Oh I got downvoted for saying truth. I now know what they are really up to.


You still need to teach what the delete keyword is and how memory management works in C++. There's tons of code out there not written with smart pointers.

And yes we use valgrind, but it's important not to overload students who are new to programming. Students tend to reach for withdrawal forms when you tell them "I see you are overwhelmed by this new tool you're learning. To solve this problem here's a new tool to learn". The near-universal sentiment I've gotten from students going from C/C++ to Rust is "Wow, this is so much nicer". YMMV.


"new" and "delete" considered as bad practices.

You claimed that you don't have to teach manual memory management in Rust, so why not do the same for C++?

"Rust makes all of this go away. They don't have to manage their memory manually"

Both C++ and Rust have manual memory management, but it's rarely needed. Both support RAII, where everything is managed automatically.


> "new" and "delete" considered as bad practices.

Depending on what kind of work you do, you may not have a choice as to whether you get to use smart pointers or not. Our students go on to work with decades-old codebases littered with new and delete. They need to know what those do.

> You claimed that you don't have to teach Manual Memory management in Rust so why not do the same for C++?

Because there are no old rust codebases where manual memory management is an issue.


I haven't seen a 'new' or 'delete' in C++ in 15 years. (And I have worked on teams writing custom malloc implementations and lots of other low-level things.)

I guess there are lots of demented and broken codebases out there, but starting students with the broken dysfunctional case as the default seems wrong. They can learn bad habits on their own later.


I have worked with decades-old codebases. I use an up-to-date compiler where you get to use smart pointers and all the Modern C++ features. It's your fault if you don't teach them how to upgrade the compiler. There is zero risk when compiling C++98 with C++11.

It's also the student's fault if they select a company with a C++98 codebase. The student shall do due diligence.

My advice for you is to teach Modern C++ before diving into legacy C++, sort of like how you teach history: you start with what's current, then dig deeper.

It builds intuition.


Ok. I'll take your advice and give it the consideration which it's due. Thanks for taking the time out of your day to create a throwaway account just to tell me how to do my job. Cheers.


Unfortunately, the C++ Committee knows that universities do a terrible job teaching C++, except the one Bjarne teaches. If you ask Bjarne, he will tell you the same things I said.

A lot of universities are stuck in C++98. It will take them 1000 years to adopt C++20 though.

One of mistakes many universities do is teaching either C or Java before C++.

I find C++ works as a minimal OOP language to get a head start.

C++ -> Java -> C

But if the student is a complete noob to programming, I'd suggest learning JavaScript before going to C++.

Again, this is to keep the intuition; otherwise they will forget what they learnt.

I wish more universities listened to Kate Gregory's talk.

When you teach Java before C++,

Students try to write C++ in Java style (e.g. with new and delete). I answer plenty of questions from students on Reddit's cpp sub who are trying to write C++ in Java style. I might give a talk on this at CppCon.

When you teach C before C++,

Students try to write C with Classes.

But, as Steve Jobs once said,

"I used to think that technology could help education. I’ve probably spearheaded giving away more computer equipment to schools than anybody else on the planet. But I’ve had to come to the inevitable conclusion that the problem is not one that technology can hope to solve. What’s wrong with education cannot be fixed with technology. No amount of technology will make a dent."

You can laugh at me, that's okay. But truth is truth!

By the way, This is my 50th throwaway account. I use them because I'm getting down voted to death when speaking in defense of C++.


Depends on the university.

I was blessed that my own university, during the early 90's, already taught C++ after Pascal to first-year students. C was never taught explicitly, and naturally Java was like 6 years away from being a reality.

The same university does teach modern C++ nowadays.


Have you coded on larger code bases not written by you? Usually most of the time you read code instead of writing it. Sure you can add new code that adds smart pointers, but you still have to understand the extant code, at least to some degree.


I've done plenty of contract work on larger code bases that have C++98 to some extent. I'll tell you what: you will never truly understand the whole code base.

Sourcetrail does an amazing job of explaining larger code bases. Check it out.


My experience with C++ is that every time I look at it every few years, C++ developers tell me that I'm stupid for wanting to do memory management the way it was done a few years ago and nobody has ever done that.


>It's also student fault if he selects a company with C++98 codebase. The student shall do due diligence.

So you are saying they should (as totally newbies) ask to go through a company's code before accepting a job?


No. There are a lot of job posts out there that highlight the language version. For example, I saw a lot of job posts for C++11, C++17 and so on. Heck, some jobs even directly mention Modern C++.

https://jobs.smartrecruiters.com/FireEyeInc1/743999738511776...

They can also look at the creation date of the company and make an educated guess.


> I've never had a situation where I had to manage memory manually during my professional career in C++

I don't know what kind of career you had but it's nothing like any C++ career I've ever heard of. Do you never allocate integers or other basic types on the heap? Do you never deal with C-strings? Do you never interact with C libraries? Do you never call posix functions?

A C++ career is full of memory management. You got downvoted because it appears you're speaking from a lack of experience. Even if what you're saying is the truth, I would say you fell into a lucky niche that is unusual compared to the average C++ programmer.


I believe that statement from the person you're replying to was in comparison to the issues with teaching C, not C++.


No. See what he said,

"C++ is a little better because they have experience with Java, but they are frequently mystified by constructors/destructors and how to manage memory manually"


The Rust book as currently written assumes that the user has some programming experience. That's mainly b/c there hasn't been much of any experimentation in teaching Rust as a first time programming language. Although obviously it's been done for C++ and even C, so it ought to be quite doable, if perhaps with a bit of a "learn programming the hard way" style.


That's true. It doesn't introduce base concepts for example.


Python is way too ad-hoc to be a good teaching language in a professional context. It's designed to teach kids and first-time coders, as an alternative to BASIC.


I think that wholly depends on the context. Python is a great teaching language IMHO if the goal is to teach how to get things done. It's not a good language if you're teaching how things do work under the hood.

To me it's the difference between programming and computer science.


I don't think so, too many heavy hitters are adopting rust, and rust has the advantage in adoption across the board. To beat rust you have to be at least 2-10X as desirable as rust.


Only if they fix the language to cover use after free errors.


I don't really consider the popularity of a thing on HN as a good measure of its real-world popularity.

Take a look at the GitHub Octoverse https://octoverse.github.com/


Dlang has better metaprogramming capabilities and compiles extremely fast. https://forum.dlang.org


Zig isn’t quite production ready.


And it doesn't have memory safety. Zig is really fun and it is excellent for small wasm modules, but until it gets memory safety it will never be a Rust alternative.


More on the missing memory safety of Zig: https://scattered-thoughts.net/writing/how-safe-is-zig/

Discussion from 9 days ago: https://news.ycombinator.com/item?id=26537693


Thanks for the links. I donate a pittance to Zig, I really want to see where CTFE goes. If nothing else, Zig pushes the state of the art and every time a new system does, it sets the lower bound (hopefully) for the future.


Safety is a matter of degree, not an absolute. Unsafe Rust isn't safe. Not to mention, safe Zig isn't unsafe. And, if you want a lot of memory safety -- more than Rust in some ways -- you can use something like Javascript.


C++ doesn't have "memory safety" and it is a Rust alternative. Stop being pretentious, not every language has to have Rust's borrow checker or whatever its called.


C backwards compatibility doesn't help, but it definitely does have the opt-in tooling to make it way safer than using C. Although it isn't Ada-level safe, IDE improvements for live static analysis do help.


But C++ does have opt-in memory safety via smart pointers and RAII. It's not preventing all the bug types that Rust will, but it's something. There is also tons of code written without it, but you get the idea.

Zig misses a lot of features that modern C++ has without offering anything in return. Yeah, it's easy to learn compared to C++ and Rust, but what's the point of learning it if it doesn't offer anything new?


I agree, but nothing short of borrow checker is considered memory safety to Rust advocates and they probably think that we segfault at least 10 times an hour with use-after-free's, double-free's and buffer overflows.

I don't think Zig is meant to replace C++. It's a cool language on its own.

By the way, you've been shadowbanned if you haven't noticed it yet.


I'm always curious why bluetooth is such a terrible piece of technology. Did anyone write blogposts analyzing what went wrong?


Enormous spec crafted by committee. It's anything and everything to everyone, which means implementing the entire thing is a _serious_ undertaking. Add to that it's the kind of feature that's a cost center--people expect wireless stuff like headphones, etc. to just work and be there, it's not a glitzy feature that sells products. So there's zero incentive to innovate or spend time and money making it better. Hardware makers want cheaper chips, skimping on the implementation and software side helps make that happen.


From what I know from having worked at Nokia, it's simply the result of design by committee where the committee was made up of mutually very hostile hardware (mostly) companies not particularly good at software (that's how Nokia failed in a nutshell), telcos, chipset manufacturers, hardware manufacturers, etc. And I mean hostile as in looking to squeeze each other for patent licensing, competing for the same customers, and suppliers. All this happened in an ecosystem that also produced such monstrosities as gprs, 3G, 4G, etc. Ericsson and Nokia were on top of the market when they created bluetooth and had a relatively big vote in these committees.

Each of those standards were burdened with lots of little features to cater for the needs (perceived or real) of each of the committee members. It's a very similar dynamic to what happened to bluetooth. A lot of 3G stuff never really got any traction. Especially once Apple and Google decided that IP connectivity was the only thing they needed from 3G/4G modems and unceremoniously declined to even bother to support such things as videocalls over 3G. Apple did Facetime instead and in the process also strangled SMS and cut out the middlemen (operators) from the revenue. Google was a bit slower but on Android a lot of 3G features were never really implemented as they were mostly redundant if you had a working internet connection, fully featured SDKs, and a modern web browser.

It's the same for a lot of early bluetooth features. Lots of stuff you can do with it; lots of vendors with half broken implementations with lots of bugs; decades of workarounds for all sorts of widely used buggy implementations; etc. It kind of works but not great and making it work great in the presence of all those not so great products is kind of hard.

Just a long way of saying that bluetooth is so convoluted and complicated is because the people that built it needed it to be that way more badly than they needed for it to be easy to implement (including by others). At this point it's so entrenched that nothing else seems to be able to displace it. I'm not even sure of people actively putting time and resources in even trying to do that. I guess you could but your product just wouldn't work with any phone or laptop in the market. Which if you make e.g. headphones is kind of a non-starter. It's Bluetooth or nothing. I wouldn't be surprised if Apple considered this at some point and then ended up not doing it. They have a history of taking good ideas and then creating proprietary but functional implementations of those ideas.


> Apple did Facetime instead and in the process also strangled SMS and cut out the middlemen

I'm personally really glad for this decision. iMessage is many times better than SMS. SMS security is a nightmare by design.

I just wish there was something better between Google and Apple, like a universal iMessage.


I really wish Apple would release iMessage for Android. I know they never will, and I know I can just use SMS / MMS / RCS or WhatsApp or Signal or whatever. I'm just really tired of the default app for iOS not being able to interop with Android. Every time someone from my wife's family sends her a video (over iMessage) and it's low quality on her phone (Pixel), and she asks them to email it and they say "What is wrong with your phone", a little piece of me dies inside.


I think this is a pretty interesting summary: https://www.reddit.com/r/NoStupidQuestions/comments/mc13t4/c...


Some of it is the constraints with the main use cases: low power, (relatively) high bandwidth, the need to pair devices without screens. Comparing this use case to something like wifi is unfair.

A lot of it is due to backwards compatibility. Bluetooth isn't simply bluetooth. There are different versions, different profiles, different codecs, and even different optional features.

Have a look at the matrix: https://www.rtings.com/headphones/learn/bluetooth-versions-c...

The two devices being paired have to figure out what version/profile/codec to use to talk to each other, and gracefully fall back to the lowest mutually supported featureset. This is a really hard problem, and the devices don't always handle it well.
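
A toy sketch of that negotiation idea (not the actual Bluetooth procedure, just the shape of it): each side advertises what it supports and the pair settles on the best codec they happen to share, with SBC as the mandatory A2DP baseline.

    #[derive(Clone, Copy, PartialEq, Debug)]
    enum Codec { Ldac, AptX, Aac, Sbc } // listed roughly best-first

    fn negotiate(ours: &[Codec], theirs: &[Codec]) -> Codec {
        ours.iter()
            .copied()
            .find(|c| theirs.contains(c))
            .unwrap_or(Codec::Sbc) // SBC is the mandatory fallback
    }

    fn main() {
        let phone = [Codec::Ldac, Codec::Aac, Codec::Sbc];
        let headphones = [Codec::AptX, Codec::Aac, Codec::Sbc];
        println!("{:?}", negotiate(&phone, &headphones)); // Aac
    }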


It's pretty simple really. It tries to do a lot of complicated things, and it's underspecified so it's possible to have compliant devices that behave wildly differently between vendors. It also rarely breaks compatibility so there's a lot of cruft for ancient devices.


This is the exact reason it is challenging. There is a lot of variance in how each manufacturer implements it. You might compare it to the browser wars of old - you've got many different players adding various extensions in different ways etc. The actual radio technology and low power consumption are phenomenal, the problems arise mostly around protocol implementation oddities.


Any analysis would be less than insightful unless it comes from someone who sat in the meetings and helped write the standard.

FWIW Bluetooth isn't "terrible." It's pretty remarkable we can get all sorts of devices to communicate with each other wirelessly, at low power, with pretty decent bandwidth. And now you can buy a Bluetooth stack on a chip from a variety of vendors.

The bigger issue with Bluetooth is that failure conditions are mostly an afterthought for device manufacturers, and Bluetooth is becoming a sought-after feature in environments that are far less tolerant of failure, like automotive and medical devices.


If you go into the parent directory, what appears to be the main Gabeldorsh directory, most of the implementation appears to be written in C++. Is the project being slowly rewritten in Rust? Or are only parts of it written in Rust?


I’m not finding much Rust BlueTooth Stack code in the link. There are only about 4K lines of Rust in here.

Am I missing something? Or is the headline exaggerated?


The headline is technically correct. "rewrite with Rust" means there is Rust inside, but it doesn't mean that it's all (or mostly) Rust. But given the way one commonly understands the headline, it's definitely misleading.


The parent directory is the Android supporting stuff like the Bluetooth HAL and the JNI interfaces. The rest is the old Bluedroid/Fluoride stuff.


Gah. Just two days ago I finished what must have been our fourth rewrite of Kotlin code to interface with Android BLE. It's a complete trial-by-fire experience. Various Medium and other blogs on the net are a morass of "find the true information amongst the disinformation". If I had a nickel for every "just put a delay here" so-called solution.

We have two apps, one that communicates over many variously configured characteristics, and another that uses fewer characteristics but pushes/receives just as fast as I can get it to go in big bursts.

The edge cases around connect/disconnect events are the most frustrating and most difficult to reliably debug and gain confidence your implementation is robust. Oh, and don't forget that just because it works on your Samsung, doesn't mean it works on your Moto.

Assuming this new implementation is indeed much better (and not just swapping one pile of surprises for a new and shiny, but different, pile of surprises) my hat is off to the folks behind this. You get a big fat atta-whatever for making the world a better place, even if I wish it had happened 4 years ago.


> Oh, and don't forget that just because it works on your Samsung, doesn't mean it works on your Moto.

And just because it works on a new Samsung doesn't mean it works on a 2-generations-old one. I had to do two projects recently developing cross-platform mobile apps; one had to interface with the Wi-Fi stack - holy shit, the deprecated APIs that only work on Android 10, the legacy ones that don't work but are the only way to do it on Android <10, cross-device inconsistency, incorrect documentation (one thing in the docs, another in the source code), etc. etc.

To be fair, iOS doesn't expose a lot of that functionality to user space apps (without special certs) but I prefer that to Android where it's technically possible but practically impossible because of the insane support matrix - it just wastes time.

I'm not doing any mobile development from now on - the entire process is just riddled with busywork and working around crap APIs, people used to complain about having to support IE, mobile fragmentation is probably 10x worse.


I went through the same thing recently. The best reference I've ever seen on Android Bluetooth is this series of posts:

https://medium.com/@martijn.van.welie/making-android-ble-wor...


Wait, so can Rust generally replace C++ code in most projects?

Has anyone here had success with a partial migration to Rust?


>Wait, so can Rust generally replace C++ code in most projects?

In principle Rust could replace every line of C++ code in the world. The question of how often it would be a good idea, or practical, to do so is harder to answer. It is promising that this Bluetooth stack only needed 4 lines of unsafe though!

Since the interop is zero-overhead, doing piecewise migrations is certainly possible, as has been going on with Firefox and curl, and there are discussions of doing it in Linux as well. You do complicate your build system, and there is a non-trivial amount of work to stitch the two languages together.
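
As a sketch of what the seam between the two languages can look like (hypothetical `checksum` function; the crate would be built as a staticlib/cdylib and linked into the existing C++ build):

    // lib.rs: expose a Rust function over the C ABI so existing C/C++ code
    // can call into the piece that was rewritten in Rust.
    #[no_mangle]
    pub extern "C" fn checksum(data: *const u8, len: usize) -> u32 {
        // SAFETY: the caller promises `data` points to `len` valid bytes.
        let bytes = unsafe { std::slice::from_raw_parts(data, len) };
        bytes.iter().fold(0u32, |acc, &b| acc.wrapping_add(u32::from(b)))
    }

    // On the C++ side it's declared like any C function:
    //   extern "C" uint32_t checksum(const uint8_t* data, size_t len);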


I thought curl was expressly not migrating any code to Rust?



Yes, but more recently there has been a change of heart. I don't think it implies for now that there is any aim to replace all of it with Rust though.


I think the answer is yes. Most C++ projects would be better served by Rust. However there are many caveats:

- I think that C++ devs are still more numerous than Rust devs.

- There are many excellent C++ libraries that don't yet have great Rust bindings. Furthermore it is unlikely that template-heavy libraries will ever be easy to use from Rust.

- C++ is supported on more platforms.

- C++ is more powerful. (Particularly templates). You rarely need more power than what is available in Rust but if for whatever reason your project would really benefit from heavy meta-programming C++ will be better. (I think this case is rare). Rust is also catching up, but the language development, especially around generics is fairly slow (which is probably a good thing)


Meta-programming needs in Rust are addressed by macros. There's not really anything missing there compared to what C++ does via templates.
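
A trivial illustration of the declarative flavor (proc macros handle the heavier cases):

    // A tiny declarative macro, roughly the kind of thing one might reach
    // for where C++ would use a template or constexpr helper.
    macro_rules! max_of {
        ($x:expr) => { $x };
        ($x:expr, $($rest:expr),+) => {
            { let a = $x; let b = max_of!($($rest),+); if a > b { a } else { b } }
        };
    }

    fn main() {
        println!("{}", max_of!(3, 9, 4, 7)); // prints 9
    }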


IMHO, you're a bit too far in the other direction; there are areas where we've been working on actively improving, and there are still some things that C++ folks really miss in Rust. Doesn't mean we'll add all of them, of course, but there is desire for more, for good reasons.


There are generics and const generics. But these are still gaining features and not nearly as powerful as templates and SFINAE (but way easier to understand).

Macros are an option but don't have access to the same type information so often they solve different problems.


> Has anyone here had success with a partial to Rust migration.

Firefox have.


It is very hard to answer such a general question with a definitive answer, but Rust does want to be viable for the same sorts of things in which C++ is viable. As always, your mileage may vary.


So to be clear, I can't just plop a Rust class into a C++ project?


You can't. However there are options.

- Rust has strong support for C ABI much like C++. So you can communicate between Rust and C++ via a C ABI.

- There are projects like https://cxx.rs/ to provide higher-level bindings between the two languages.

However I suspect that template-heavy/generic-heavy code will never be well supported. This is usually not an issue for the types of things that we are trying to bind.
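
Roughly what a cxx bridge module looks like (the function names and the C++ header here are hypothetical, and the build.rs/C++ glue is omitted), just to give a flavor of it:

    #[cxx::bridge]
    mod ffi {
        // Rust functions callable from C++.
        extern "Rust" {
            fn parse_packet(data: &[u8]) -> u32;
        }
        // C++ functions callable from Rust.
        unsafe extern "C++" {
            include!("legacy/stack.h");
            fn legacy_send(channel: u16, payload: &[u8]);
        }
    }

    fn parse_packet(data: &[u8]) -> u32 {
        data.len() as u32
    }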


Rust does not have classes, strictly speaking, though you can define methods on structs.

https://crates.io/crates/cxx is the simplest way to do an integration. It is slightly more work than "just plop in" but it's not incredibly difficult. It's harder than mixing C and C++ together, but then again, almost no pairings of languages are that easy.
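
For anyone curious, a cxx bridge looks roughly like the sketch below; the struct, functions, and header path are invented for illustration, so treat it as a shape rather than a recipe:

    // A cxx bridge sketch; everything inside #[cxx::bridge] is checked by
    // cxx, which generates matching C++ declarations at build time.
    #[cxx::bridge]
    mod ffi {
        // Shared struct: the same layout is visible on both sides.
        struct BondedDevice {
            address: String,
            rssi: i32,
        }

        extern "Rust" {
            // Callable from C++ as rust::Vec<BondedDevice> bonded_devices().
            fn bonded_devices() -> Vec<BondedDevice>;
        }

        unsafe extern "C++" {
            include!("example/include/adapter.h"); // hypothetical header
            // Callable from Rust; cxx checks the signature against the header.
            fn adapter_enabled() -> bool;
        }
    }

    fn bonded_devices() -> Vec<ffi::BondedDevice> {
        vec![ffi::BondedDevice {
            address: "AA:BB:CC:DD:EE:FF".to_string(),
            rssi: -60,
        }]
    }

The build still needs the cxx crate plus a small build step to compile the generated C++ glue, so it's more setup than "just plop in", but far less than hand-writing a C shim.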


Then C++ doesn't have classes either; you can put methods on structs, though.

Rust people keep saying there are no classes, but all a class needs is the ability to put methods on structs. Private access to some of the internals is often useful, but it doesn't need to be enforced by the compiler.


The issue is not "structs" v. "classes" per se; it's things like inheritance, vtables, and RTTI (also other C++ features like templates and exceptions) that need special ABI support in C++, for which there is no Rust-side equivalent. (Meanwhile, Rust traits are quite different from anything in C++, although they're used similarly.)
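
As a rough sketch of how Rust gets vtable-style dynamic dispatch without inheritance (the Transport trait and types here are made up):

    // Dynamic dispatch via a trait object rather than class inheritance;
    // the vtable travels with the Box/&dyn pointer instead of living
    // inside the struct.
    trait Transport {
        fn send(&self, payload: &[u8]) -> usize;
    }

    struct Uart;
    struct Usb;

    impl Transport for Uart {
        fn send(&self, payload: &[u8]) -> usize {
            println!("uart: {} bytes", payload.len());
            payload.len()
        }
    }

    impl Transport for Usb {
        fn send(&self, payload: &[u8]) -> usize {
            println!("usb: {} bytes", payload.len());
            payload.len()
        }
    }

    fn main() {
        // Any type implementing the trait can sit behind `dyn Transport`,
        // no common base class required.
        let links: Vec<Box<dyn Transport>> = vec![Box::new(Uart), Box::new(Usb)];
        for link in &links {
            link.send(b"hello");
        }
    }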


None of those things are required for a class. I'll admit they are all useful at times, but all are abused.


It largely depends on the definition of "class" you're using. You'll raise some eyebrows calling structs that can't support implementation inheritance "classes".

You can also implement different associated functions based on properties of generic arguments, which is quite different in design from just attaching methods to a struct.


I mean, C++ has the "class" and "struct" keywords for a reason. (They are very similar; Rust structs are closer to "struct" than "class", though.) There is a lot more going on with C++ classes than syntax sugar for functions that have access to "this."

Also, while not in C++, in many languages, classes imply heap allocation, where structs do not.


What's the difference between struct and class in C++? IIRC the standard says they're the same except that members are public by default on struct and private by default on class. You could as well argue that C++ doesn't have structs.

The different allocation between structs and class objects in C# is a total head scratcher. Didn't it ever occur to the language designers that someone might want to choose how to allocate memory?


So, I dug into this a bit more, and you're right!

https://www.fluentcpp.com/2017/06/13/the-real-difference-bet...

Maybe it was the era that I learned C++ (which was in the 98 days), but I was taught something closer to this convention, and didn't realize until just now there was such little difference by the book. Just bundling some data together? Use a struct. Doing more? Use a class.

I still think this distinction is useful overall, because there often is more of one in many languages, but I will be more precise when speaking about C++ specifically in the future. I would also still argue the same thing overall, that is, Rust does not have what C++ calls classes. Our structs are apparently even simpler than C++ structs.



It certainly makes sense to use struct and class to indicate what the type's role is. It's just good to keep that in mind, to avoid confusion when you find a struct that has only pure virtual functions as members.


C++ has to have structs because it wants to be (mostly) backwards compatible with C. I think that's the main reason they exist in C++.


That is correct.


I mean, Mozilla migrated parts of Firefox to Rust; that's the big one. I hear it might start being included in the Linux kernel soon.


It's not entirely true, though; most of the code base is still C++, and there is no plan to migrate the rest.


Isn't that exactly what makes it successful partial migration?


You need to define "partial", because 90% of the codebase is still C++.


Some thoughts:

1. Firefox is a huge codebase. 10% of that is still quite a bit.

2. Some highly complex core parts of Firefox, such as the rendering engine, are at least partly written in Rust.

3. The bits written in Rust are not all isolated from the bits written in C++. In places they intertwine at a function level of granularity.


Rust originated from Mozilla...


It's wise to use Rust in security-critical projects. See Google Fuchsia: they follow a hybrid approach where the critical stuff is written in Rust and the rest in C++.


Since Android is Linux, will this become available to all Linux users?


There's a bit of confusion here. Android is based on the Linux kernel, and changes to the kernel are reflected for both Linux and Android users, but it doesn't go in the other direction. We will see if this code ends up in other uses as well.


The kernel is a forked Linux, compilable with Clang; it has SELinux and seccomp enabled by default.

Everything else, including drivers post Treble, doesn't have anything to do with Linux.


I don't know much about Bluetooth in Linux, but I had the same thought. It looks like the license on this is Apache, which I would guess could be a problem.


Very unlikely, the Android driver infrastructure is quite different from Linux.


Wow, this is great! Not just the memory safety aspect of it, but also that it's written with testability in mind.


I have no opinion on this, but a data point:

I had been sticking with an old USB wireless headset for a long time because it Just Worked (tm), but even with a new battery the life wasn't really up to modern work-from-home use. So I chanced it with a Bluetooth noise-cancelling headset.

My first experience using Bluetooth under Linux in probably a decade, and it has been super reliable. I used the CLI tool "bluetoothctl", since I didn't have a button to click in my i3 setup.

Not saying this rewrite isn't needed, to be clear. But I've been surprised how reliable it's been.


I've always been annoyed at the cross-platform story for Bluetooth. GATT is one of my favorite protocols because it is so simple, but writing simple code against this simple protocol is _not_ portable:

iOS and macOS have CoreBluetooth, Linux has BlueZ, Windows has Windows.Devices.Bluetooth and Android has android.bluetooth.

I've seen a few projects trying to fix this, like https://github.com/deviceplug/btleplug, and I hope one of them becomes production ready.
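
For example, a cross-platform scan with btleplug looks roughly like the following; the async API has shifted between releases, so take this as an approximation rather than gospel:

    // Approximate btleplug usage (async API, circa 0.10/0.11); exact
    // signatures differ between versions, so treat this as a sketch.
    use btleplug::api::{Central, Manager as _, Peripheral as _, ScanFilter};
    use btleplug::platform::Manager;
    use std::time::Duration;

    #[tokio::main]
    async fn main() -> Result<(), Box<dyn std::error::Error>> {
        let manager = Manager::new().await?;
        let central = manager
            .adapters()
            .await?
            .into_iter()
            .next()
            .ok_or("no Bluetooth adapter found")?;

        // Same code path on Linux (BlueZ), macOS (CoreBluetooth), and Windows.
        central.start_scan(ScanFilter::default()).await?;
        tokio::time::sleep(Duration::from_secs(5)).await;

        for peripheral in central.peripherals().await? {
            let name = peripheral
                .properties()
                .await?
                .and_then(|p| p.local_name);
            println!("{:?}", name);
        }
        Ok(())
    }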


In my experience, Android Bluetooth is much better than Linux Bluetooth. Even my ancient Samsung S3 works better than a recently bought Bluetooth dongle for my desktop, and I only bought the new dongle because the experience with the onboard Bluetooth was so horrible.

Sometimes I wonder how a technology that is so common on modern devices is still so unreliable.


Will this allow higher bitrates for audio without the arbitrary quality restrictions of the current stack? A developer has even implemented a patch for the old one, but never got a response from the dev team: https://itnan.ru/post.php?c=1&p=456476


Why not BlueZ, which everyone else uses? Android feels like the land of Not Invented Here sometimes, to me. There might be good reasons, but often it feels like it's just Google trying to ensure Android is as incompatible / different as possible from everything else that runs Linux.

It doesn't help that almost none of the ChromeOS / Android subsystems or tools have made it to any mainstream / regular Linux. They remain Google-only products.

I wish this company doing so much Linux work would be part of some broader community. There's a reciprocal question of how hard it would be, and why other people haven't gone in and picked out, say, some of the ChromeOS containerization tools: how much effort has the world made to use the offerings in these mono-repos? Community takes two. But it still feels incredibly weird, so against-the-grain, to see such an active but non-participatory Linux user in the world.

Background chit-chat aside though, technically what (if anything) makes BlueZ unsuitable for Android? Why is Google on their fourth Bluetooth stack (NewBlue, BlueDroid, the Fuchsia one, now this)?


GPL is the problem. Hardware vendors won't touch it with a 10k foot pole because of the requirement to redistribute patches.

There's a history of wanting BSD licenses at Android's inception. If the BSD distributions hadn't run into problems relating to legal battles at the time, Android would be built on Mach with a BSD userland rather than Linux. Additionally, there was more vendor support and there were more drivers for the Linux kernel than for Mach. Sadly, for the fledgling enterprise that was Android, it was better to start from Linux, use Apache/BSD-style licensing, and write their own userland.


> If the BSD distributions hadn't run into problems relating to legal battles at the time, Android would be built on Mach with a BSD userland rather than Linux.

What legal battles were there? Wikipedia puts Android as being started in 2003 [0], and the only legal battle I recall with BSD was settled in 1994 [1].

[0] https://en.wikipedia.org/wiki/Android_version_history

[1] https://en.wikipedia.org/wiki/UNIX_System_Laboratories,_Inc.....


Android's development started long before Android became a company officially, and there was more fallout from those BSD lawsuits than what is officially reported in Wikipedia -- especially since the settlement agreement included that those that agreed must keep silent.


Also, many hardware vendors use their stack to differentiate themselves from competitors, since in the end they all tend to 'do' the same things. That stack keeps at bay people who would just copy their design wholesale, undercut their margins, and use their drivers. Then, even if they are willing, they have often included some library they bought from someone else. They may even have the full code and have changed it as needed, but that third party is usually some consulting group, and guess what one of the very few things they sell is. It is a huge mess.


> Sadly, for the fledgeling enterprise that was Android, it was better to start from Linux, and use Apache/BSD style licensing and write their own userland.

Curious about your take on this. Why do you think it would be better if Android were mach kernel + BSD userland akin to Mac/IOS instead of Linux?


Android used BlueZ in the earlier versions, but they changed to BlueDroid for reasons I don't know. I believe it was (in part?) developed by Broadcom and thus was probably better supported (licensing probably played a huge part, too, since BlueDroid uses a more permissive license).


Part of the reason was that Broadcom could directly support the connectivity guys. We just didn't realize how awful it was, but given that the Android team didn't have that big of a connectivity team at the time, it seemed like a good idea.

The Glass connectivity team (of which Zach and I were a part) actually had a few more engineers than the main Android team did, and given that connectivity was absolutely critical for our device, we had the strength to stand up to this mess and plumb its depths, and Zach was a key driver in most of the rework. Lots of our changes made it into mainline, and when Glass ended, I left Google and Zach kept up the fight, moving closer to Android proper.

BlueZ, btw, has its own problems all throughout the stack, and unfortunately suffers from political issues w.r.t. the hardware vendors.


I don't have the foggiest about the political issues. I was shocked to look up the BlueZ repo copyright in the README[1]: from the beginning in 2000 to 2001, then a Qualcomm email address for one Max Krasnyansky until 2003. Practically ancient history, but still a very interesting detail to me that I would never have guessed.

I really really really appreciate you writing in. It feels like there is so little to go on, so little available to understand the weird twists and turns of how the world, the software world especially, developed. A little bit of background and insight is so refreshing to hear. Thanks again!

[1] https://git.kernel.org/pub/scm/bluetooth/bluez.git/tree/READ...


Does this work relate in any way to ChromeOS's recent, abortive, new Bluetooth stack?


I get what you're saying, but a project as big as Chrome OS isn't really suited for a lot of outside cooperation and communication. More often than not, you want to ship a feature as quickly as possible: ship it to canary, then dev, beta, and stable at some point. Community-run projects require a lot more back and forth, and you give up some control over your schedule if you involve them and want your stuff to get merged.

Firecracker, however, is based on Chrome OS's crosvm and lives on as an OSS project run by AWS.


BlueZ these days is pretty tightly coupled to modern desktop Linux. IIRC the only official way to talk to BlueZ is through D-Bus (there are still a couple of legacy ways through shared libraries, though). I don't think Android seriously uses D-Bus, so that would be a pretty big issue.

And as a person running the latest mainline kernel on their daily driver laptop -- I would not want BlueZ running the wireless peripherals on my phone. I can barely keep a wireless keyboard attached and working on this thing... in 2021.


Is this because of memory bugs/vulnerabilities that target the stack, or just because it was too old and junky?


Also, a strong point of Rust is just the way the types and the defaults are. Most people think the borrow checker is the only major thing, but Rust has these features:

* Plain Structs

* Rich enums (aka: Algebraic types) that leads to

* Replacements for nulls across the board (Option, Result, the Default trait, the empty-value idiom)

* Immutability and functional style as preferred when sensible

* Consistency in APIs by way of traits (all conversions go through the Into/From traits, all iterables can .collect into all containers, etc.)

and many things like this that make it very productive to build good APIs once you get the hang of it (quick sketch below).
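
A quick, contrived sketch of a few of those idioms together (all names invented):

    // All names are made up; this just exercises the idioms above.
    #[derive(Debug, Default)]
    struct Device {
        name: String,
        rssi: i32,
    }

    // A rich enum (algebraic type) instead of error codes or nulls.
    #[derive(Debug)]
    enum ScanError {
        AdapterOff,
        Timeout { after_ms: u64 },
    }

    // Consistent conversions via the From trait (Into comes for free).
    impl From<&str> for Device {
        fn from(name: &str) -> Self {
            Device { name: name.to_string(), ..Default::default() }
        }
    }

    // Option instead of null, iterator adapters instead of index loops.
    fn strongest(devices: &[Device]) -> Option<&Device> {
        devices.iter().max_by_key(|d| d.rssi)
    }

    fn main() {
        // .into() works because of the From impl; .collect() can build
        // any container from any iterator.
        let devices: Vec<Device> = ["headset", "keyboard"]
            .iter()
            .map(|&n| n.into())
            .collect();

        match strongest(&devices) {
            Some(d) => println!("closest: {:?}", d),
            None => println!("nothing in range"),
        }

        // Result + rich enums replace sentinel error codes.
        let scan: Result<(), ScanError> = if devices.is_empty() {
            Err(ScanError::AdapterOff)
        } else {
            Err(ScanError::Timeout { after_ms: 5000 })
        };
        if let Err(e) = scan {
            println!("scan failed: {:?}", e);
        }
    }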


Dunno what motivated them internally, but the bluetooth vulns that made news in Feb 2020 would have been prevented by safe Rust. See my comment from back then: https://news.ycombinator.com/item?id=22265572

According to dtolnay, there are only 4 lines of unsafe Rust in the Rust component. It's a bit small at the moment, though, at four thousand lines; most of the code is still C++. Note that it's "with Rust" in the headline, not "in Rust".


Junkiness and vulnerabilities go hand in hand.


True that. Maybe I'll rephrase: was Rust specifically chosen to address memory concerns? I don't think it is too common in Android.


> I don't think it is too common in Android.

??? Android crashes little things left and right, and has leaks just from existing.


I did a quick look on androidxref and saw a lot more .rs files than I expected. I haven't worked on Android frameworks stuff since around 5.0. I knew that not too long after I left that world they started adding some minimal Rust stuff, but it looks like it's grown a bit. Exciting. It's a perfect fit.


That's probably a major reason. But, Rust is also just a really nice language that's very pleasant to use.


shudder I'm thinking back to all of the nightmares I had trying to use Bluetooth (especially BLE) in the early Android 6 and 7 days. It was basically unusable because of _serious_ platform bugs and issues. I hope this time with a new stack it goes a little better.


Just found on my Note 10+ in Developer Options a setting labeled "Enable Gabeldorsh - Enables the Bluetooth Gabeldorsh feature stack".

Just turned it on and will be reconnecting some things to see if it helps with some of the small issues I've always had.


> impl FacadeServiceManager

Rather Java-like. All we need now is a FacadeServiceManagerFactoryBuilder.


Good for security. Now we just need a protocol that actually works...


Realistically, how long (if at all) would it be before this filters down in updates? Only with Android 12?


After seeing this headline I installed Rust and am reading the book.


Oh good I can't wait for the Android update which makes Bluetooth much much worse. Not because it's written in Rust, but because Android just keeps breaking stuff on my phones every time I get new updates.


> Why is gabeldorsche plural?

> Please see this informative video we've prepared[0].

0: https://www.youtube.com/watch?v=vLRyJ0dawjM


Is this shipping in Android S?


It shipped in Android 11, but in a disabled state.

On a Pixel: Developer Options, Bluetooth, enable "Gabeldorsh" if you want to live on the bleeding edge.


Huh, I just poked around and saw that option on my Pixel 3a.

On the one hand, it might be interesting to try it. On the other hand, at least with the one BT device I regularly use, the current stack works flawlessly...


What is Android S? I see no reference to this version of Android anywhere.


It's what Android 12, the forthcoming version, would have been called if they'd kept the letter-based codenames.


Android 12, whose developer previews have started.

Source: https://developer.android.com/about/versions/12/overview


Crar'lie*


Neat


nice


BT in Android is proper shitty. We need it updated as quickly as we can. The mic input on Android is unusable on most phones. To add to that, the sound quality is just bad on all Android devices.


Nice to see this.

Sadly, while taking these safety steps, the Android team keeps using unsafe C userspace for the NDK APIs.

Security is only as good as the weakest link.


This is false, and a meme that continues to stall progress. Hardening the BT stack means that accidentally crappy or adversarial devices cannot use a buggy BT stack to pop a device remotely, from kernel mode.

This is a hugely welcome change. The threat model for an app using the NDK is much different from a drive-by wireless attack.

Defense in depth: put the focus on protocols and parsing, and the rest of our stacks will come in time.


The meme that stalls progress is the myth of the perfect C developer.


> Sadly, while taking these safety steps, the Android team keeps using unsafe C userspace for the NDK APIs.

You phrase this as a negative but it's overwhelmingly a positive. Imagine how difficult it'd be to write an app using the NDK in Rust if the NDK had been C++ instead. The C ABI remains by far the most portable & common target. Everything can call it.


It is a bit hard to talk about safety when everything one has to play with is an API that allows for all the usual stuff that C is known to cause.

Everything can call it, and everyone has to redo the safety work.



