Damn, I didn't know Spotify had implemented DRM. Back in the day (7 years ago maybe?) they had a low-level C SDK available for anyone to use. I learned a lot about data streams and wrote my first ring buffer trying to capture and process the PCM data.
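For the curious, a minimal sketch of the kind of structure I mean: a single-producer/single-consumer ring buffer for 16-bit PCM samples (just an illustration, not the original code, and not tied to the real libspotify callback signature):

    #include <atomic>
    #include <cstddef>
    #include <cstdint>
    #include <vector>

    // Single-producer / single-consumer ring buffer for 16-bit PCM samples.
    // The audio delivery callback writes, the audio output thread reads.
    class PcmRingBuffer
    {
    public:
        explicit PcmRingBuffer(size_t capacity) : buf(capacity), head(0), tail(0) {}

        // Producer side: copy as many samples as fit, return how many were stored.
        size_t write(const int16_t* samples, size_t count)
        {
            size_t h = head.load(std::memory_order_relaxed);
            const size_t t = tail.load(std::memory_order_acquire);
            size_t written = 0;
            while (written < count)
            {
                const size_t next = (h + 1) % buf.size();
                if (next == t)
                    break;  // buffer full, drop the rest
                buf[h] = samples[written++];
                h = next;
            }
            head.store(h, std::memory_order_release);
            return written;
        }

        // Consumer side: pop up to `count` samples, return how many were read.
        size_t read(int16_t* out, size_t count)
        {
            size_t t = tail.load(std::memory_order_relaxed);
            const size_t h = head.load(std::memory_order_acquire);
            size_t got = 0;
            while (got < count && t != h)
            {
                out[got++] = buf[t];
                t = (t + 1) % buf.size();
            }
            tail.store(t, std::memory_order_release);
            return got;
        }

    private:
        std::vector<int16_t> buf;
        std::atomic<size_t> head, tail;
    };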
Unfortunately, libspotify was deprecated years ago and was never given a proper replacement, despite promises to do so. Now all they have available are platform-specific SDKs that can control Spotify devices, which are exceptionally awkward on mobile platforms where in order to play anything, the Spotify app has to remain open in the background.
It really bummed me out because the potential of a language-agnostic cross-platform library for legally streaming music is huge. It essentially decouples the service from the UI.
The embedded SDK is a dumbed-down library that only provides playback; control is performed using Spotify's apps (via Spotify Connect). Libspotify provides playback and control, i.e. everything needed to implement your own version of Spotify's app (but it predates Spotify Connect, so it has zero support for it).
It's not much of a replacement. And as you say, there is no public access anyway.
It seems like a particularly odd thing to do as well, since there has never really been much effort to DRM the audio path end to end. At least with video it takes some special hardware to capture the output, and if you don't crack the protection at the source bitstream you're either going to end up with a ridiculously big lossless file or with quality degradation.
I remember that they'd pack the executable with Themida with the highest possible settings in order to complicate reverse engineering.
Not really a huge hurdle nowadays, but they used to have the VM and anti-debug detection ratcheted up all the way, and it would trigger when I had IDA open (I believe it was a FindWindow check).
I wonder if there'll be a renewed push for this in the next decade. At the moment it's pretty pointless, given the widespread continued existence of headphone jacks, which are a pretty big analog hole, not to mention the continued existence of stores selling DRM-free music after the backlash in the 00s.
The analog hole isn't going away. At some point, the audio signal needs to be decrypted before going into the electromagnet that drives the speakers. Even if you could somehow protect the electrical and magnetic signals, capturing a high quality audio signal from the actual acoustic waves in the air is much simpler than capturing a visual signal from the light emitted by a monitor.
> It seems the Pi’s raw CPU frequency is still not powerful enough for decoding 100% of the time. While 97-98% of the time is good enough, you will get the occasional “screen tearing”
I don’t think that’s inadequate hardware performance. I think that’s the Linux GPU stack. More specifically, the parts where hardware acceleration integrates with that decades-old X11.
I once made a toy project for Pi4 that can render GLES content either with or without a desktop manager: https://github.com/Const-me/Vrmac/ I did observe occasional tearing on desktop, with both 3D content and accelerated h264 video. Maximizing the window into borderless fullscreen didn’t help. However, rebooting into console and running the same code on top of DRM/KMS without X11 resulted in no tearing.
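For context, the reason the DRM/KMS path does not tear: buffer flips are tied to vblank, you ask the kernel to flip and then wait for the flip-complete event before rendering the next frame. A rough sketch of that wait, assuming fd, crtc_id and fb_id were already set up via GBM/EGL (omitted here):

    #include <cstdint>
    #include <poll.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    static bool flipPending = false;

    // Called by drmHandleEvent once the new buffer is actually scanned out.
    static void onPageFlip(int, unsigned, unsigned, unsigned, void*)
    {
        flipPending = false;
    }

    // Ask the kernel to flip to fb_id at the next vblank, then block until it happened.
    // This is the part that keeps the output tear-free.
    void presentAndWait(int fd, uint32_t crtc_id, uint32_t fb_id)
    {
        drmModePageFlip(fd, crtc_id, fb_id, DRM_MODE_PAGE_FLIP_EVENT, nullptr);
        flipPending = true;

        drmEventContext ev = {};
        ev.version = DRM_EVENT_CONTEXT_VERSION;
        ev.page_flip_handler = onPageFlip;

        while (flipPending)
        {
            pollfd pfd = { fd, POLLIN, 0 };
            poll(&pfd, 1, -1);
            drmHandleEvent(fd, &ev);
        }
    }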
As far as I understand, most composition managers under X have issues with screen tearing not because they are based on "decades old X" but because they do not even attempt to sync to the screen refresh rate. Some just run on an internal timer, which is a bit like trying to catch a train by walking onto the platform at random times.
If you tell the compositor to fuck off by overriding the window's redirect state, or have a compositor that at least detects full screen windows, you generally get no screen tearing, at the cost of everything the compositor normally does.
> because they do not even attempt to sync to the screen refresh rate
I don’t disagree with that, but I think the underlying reason is the X server protocol. It’s hard to implement vsync properly when there’s a socket connecting application to display server.
On Windows, various GPU APIs and even parts of the GPU driver are DLLs loaded into the process. DLL functions don’t have latency, and API designers don’t need to consider that (except for stuff that has good reasons for latency, e.g. asynchronous draw/compute/copy calls). Works quite well in practice; I don’t remember screen tearing issues on Win10 unless I explicitly disable vsync, e.g. with the DXGI_PRESENT_ALLOW_TEARING flag. Most apps don’t do that and don’t tear.
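To illustrate, a minimal sketch assuming a flip-model swap chain that was created with the DXGI_SWAP_CHAIN_FLAG_ALLOW_TEARING flag:

    #include <dxgi1_5.h>

    // Present one frame. By default we wait for the next vblank (no tearing);
    // tearing has to be explicitly opted into per present call.
    void presentFrame(IDXGISwapChain1* swapChain, bool allowTearing)
    {
        if (allowTearing)
            swapChain->Present(0, DXGI_PRESENT_ALLOW_TEARING);  // immediate, may tear
        else
            swapChain->Present(1, 0);  // sync interval 1 = wait for vertical blank
    }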
The problem statement “implement proper vsync” only looks easy on the surface. If one considers all the hairy details, a good implementation gonna affect many components of the OS, not just the GUI-related ones but also power management and others. Otherwise it gonna introduce presentation latency (especially bad for online games), consume much more RAM and VRAM, and/or waste too much electricity (especially bad for laptops).
> If you tell the compositor to fuck off by overriding the window's redirect state, or have a compositor that at least detects full screen windows
If a developer is in a position to replace OS components, that’s probably an embedded environment. For that case the DRM/KMS combo is already flawless, at least according to my experience. That’s not just on Pi4, I’ve developed stuff for a few other ARM SoCs, too.
Desktop software developers can’t replace OS components or ask users to do so. They need something that works out of the box, is reliable and efficient.
> It’s hard to implement vsync properly when there’s a socket connecting application to display server.
That would affect all X11 based applications, yet somehow screen tearing consistently seems to disappear as soon as I bypass the compositor.
> Otherwise it gonna introduce presentation latency (especially bad for online games)
I once went around looking at how much latency various Linux desktop environments introduce. The most widely used ones (KDE and Gnome) on their default settings are outright catastrophic. KDE lets you disable desktop effects and the compositor; Gnome only bypasses it for full screen applications. As always, eye candy > functionality.
Hardware is fast enough when it can run your software fast enough to do what you want. :)
Approximately all software in the world is very suboptimal, as is occasionally proven by people who make things go fast on slow hardware for fun and games (eg see 3d engines on c64 or realtime video decoding & streaming from floppy on 8088)
I’m not an expert in Linux graphics; I spent about a day trying to fix that but failed.
eglSwapInterval does nothing. The GPU driver exposes no *_swap_control extensions anywhere. There are 3 extendable parts (EGL, GLES client and GLES server) with many extensions each, but nothing resembling GLX_EXT_swap_control/GLX_MESA_swap_control/GLX_SGI_swap_control.
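In case someone wants to reproduce, this is roughly the check I did (a sketch; assumes the display and a current context are already initialized):

    #include <EGL/egl.h>
    #include <cstdio>
    #include <cstring>

    // Ask for vsync and see whether the driver advertises any swap control
    // extension. On my Pi4 the call "succeeded" but had no visible effect,
    // and the EGL extension string contained nothing about swap control.
    void tryEnableVsync(EGLDisplay display)
    {
        // 1 = wait for one vertical blank per eglSwapBuffers
        if (!eglSwapInterval(display, 1))
            printf("eglSwapInterval failed: 0x%x\n", eglGetError());

        const char* ext = eglQueryString(display, EGL_EXTENSIONS);
        if (ext != nullptr && strstr(ext, "swap_control") != nullptr)
            printf("swap control reported: %s\n", ext);
        else
            printf("no swap control extension exposed\n");
    }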
Other applications on that Pi4 that render many FPS (the web browser, or the VLC player) also have screen tearing. I assume these people know way more about the Linux GPU stuff, yet they failed too.
Do you have any advice on how I can do that on that Pi4, with EGL 1.1 and GLES 3.1?
P.S. Based on the internets, the problem was introduced in Pi4. In earlier versions things were good.
For Spotify I use Raspotify[1], which uses the librespot library. It works flawlessly, controlling what to play from any smartphone on the same local network. After the last update, podcasts finally work too.
Netflix I watch with the Kodi Netflix addon. It has worked great so far, with no screen tearing.
To be fair, a separate machine is exactly how you should run proprietary blobs. Although I certainly wouldn't want the increased administration overhead of tinkering with this when it inevitably breaks, compared to the Free solutions that are generally rock solid.
Was also doing this for ages but eventually caved since spotify is so so much easier. The DRM on spotify is really stupid since every single track on spotify is trivial to find a torrent for yet no one bothers torrenting since spotify is cheap and very convenient.
That’s the only way to combat piracy. Sadly, Spotify is becoming worse and worse: they ruined radio a few years back, and over the past year they’ve also ruined Discover Weekly and the daily mixes. It’s the same songs on repeat now, and the mixes are just becoming weirder and weirder; they seem to have broken down genre barriers completely. I have Weird Al, Rammstein and MCR popping up in the same daily mix.
They’ve stopped rotating their curated playlists and made discovering playlists more and more annoying.
I’ve stopped paying for Spotify because of that and now primarily use Amazon Music Unlimited for music on the go, but it has also now started to have issues, with too many tracks becoming unavailable day in and day out.
I wish Pandora or Spotify Stations would become available outside of the US (I don’t know if Stations is good; I used Pandora back when they didn’t geoblock their services), because I want to be able to discover music.
Since the lockdown in the UK I’ve gone back to using internet radio; my Yamaha Music Cast receiver has a good directory, and it’s so far been a much better daily listening experience than Spotify.
I might give Napster and Deezer a try since they are supported by my main music setup at home too.
I found Apple Music just as bad for discovery as Spotify. In fact, I was surprised that when I did the trial last year, starting a station from an artist or a song gave me pretty much identical tracks to Spotify. It was so weird that I’m almost sure Apple had access to my listening history from using Spotify on my iPhone.
Amazon seems to work right now, and I was pleasantly surprised that Spotify et al haven’t killed internet radio yet.
> every single track on spotify is trivial to find a torrent for yet no one bothers torrenting since spotify is cheap and very convenient
The bit about "every single track" isn't true. It's hard to find torrents for older, more obscure things.
Your general take is right, though. It's easier to pay Spotify $10 per month than to go to the effort of torrenting, cataloging, transferring to your device, etc.
> It's hard to find torrents for older, more obscure things.
Maybe this falls into the hard category, but once you make it into a private music tracker, you have everything trivially accessible. I still have an account on the tracker, but I just don't bother using it since Spotify is so cheap and useful. It just makes no sense that they would implement DRM, since it does basically nothing; DRM has never prevented anyone from pirating.
We bought a Raspberry Pi 400 (the one in the keyboard) and a 27" monitor over Thanksgiving, and that has been our TV for the kids for the past couple of months. Works great. Netflix, Amazon Prime, YouTube.
Don't remember how I got Widevine installed exactly, but if I remember right, it's a blob ripped from ChromeOS, which is why it only works for Chromium and not Firefox.
I wish people would present their work better. This post is surely useful, but the first few lines are tersely written: "Last update x.x.2020 Thip! Crinkle! Spoit!"...
"but not for ARM since technically they don’t have ARM builds"
What happens if you install Chrome on a new (Arm64 M1) MacBook? This is the sort of situation where I am looking forward to Macs causing improved ARM support in general.
Not sure about Chrome, but the ARM build of Firefox still uses Rosetta to execute the DRM modules in a separate process. Spotify also does not yet have a universal binary.
Spotify's own clients basically download encrypted data, decrypt it with the song's key, decode the Vorbis and write PCM audio to the audio device like normal. Does anyone know how Widevine fits into this? What does it actually do in the case of audio? (Video I can imagine is different, since there's DRM support baked into the output device, as I understand it.)
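To make the question concrete, the non-browser flow as I understand it is roughly this (every name below is made up for illustration; none of it is Spotify's or librespot's actual API), and I'm wondering which of these steps Widevine takes over in the browser:

    #include <cstdint>
    #include <string>
    #include <vector>

    using Bytes = std::vector<uint8_t>;
    using Pcm   = std::vector<int16_t>;

    // Stubs standing in for the real steps:
    Bytes fetchEncryptedOgg(const std::string& trackId) { return {}; }       // HTTPS download
    Bytes decryptWithTrackKey(const Bytes& data, const Bytes& key) { return {}; }
    Pcm   decodeVorbis(const Bytes& ogg) { return {}; }                      // e.g. libvorbis
    void  writeToAudioDevice(const Pcm& samples) {}                          // ALSA, WASAPI, ...

    void playTrack(const std::string& trackId, const Bytes& trackKey)
    {
        Bytes encrypted = fetchEncryptedOgg(trackId);
        Bytes ogg       = decryptWithTrackKey(encrypted, trackKey);
        Pcm   pcm       = decodeVorbis(ogg);
        writeToAudioDevice(pcm);
    }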
So they serve two versions of every file: one for browsers, using Widevine's scheme, and another for everything else, using their own scheme? And the benefit of Widevine is that it's a closed-source blob shipped with the browser itself (or not, which is the fundamental issue solved by the OP), rather than some obfuscated JavaScript, so you have zero access to it? At least until it ultimately writes PCM to the sound card, where you can then do what you want with it.