I spent some time live-streaming a (now dead) project this year. I live-streamed almost all the dev and created an archive indexed by commit (https://ctzn.network/dev-vlog). Here are some other non-technical tips:
- Don’t livestream everything like I did. You’ll burn out. But do pick a regular schedule like once a week and stick to it. You’ll eventually get regulars and that’s fun.
- Use a service like Placeit and create video bumps. I had bumps for the intro/exit and for new commits. It adds a lot of fun and energy.
- Stream on Twitch and then publish the recording on YouTube, if you can. Even better if you can publish focused or editing clips on YT. I streamed entirely on YT, which works fine, but the vast majority of viewers were after-the-fact.
- Get used to coding out loud.
- Zoom everything, especially your code, terminal, and dev tools. It’s a rotten way to work but people won’t be able to see otherwise. (Most people aren’t watching full screen.)
- For that matter, most people aren’t watching the full time. They tune in and out. I liked to start the stream with Excalidraw where I would map out the day’s work.
- Turn on Do Not Disturb and move any sensitive windows to another monitor. I had someone try to send a Twitter reset code with the goal of getting me to show the code on the video.
I had a spare Raspberry Pi 4 and compiled OBS on it just this weekend; the idea was to have a physically gapped machine just for streaming development, with no personal information on it at all, just the code. It runs OBS + VSCode surprisingly well! (Unfortunately I learned my upload bandwidth is wholly inadequate for decent fullscreen live video, but at least the Pi can take it without overheating.)
Are you using an operating system that doesn't have multi-user support, by any chance? Most modern operating systems allow multiple users per machine, with their files kept separate, so you could have one "normal" user for day-to-day work and one "streaming" user you only stream from. That's how I make sure everything is separated between work and streaming (on Windows and Arch Linux; I'm 100% confident macOS has multi-user support as well).
I'd be using both my primary machine and the streaming one in parallel (on two monitors, using synergy as a kvm), the primary one for browsing, music, git and chat. Seamlessly moving mouse, keyboard and clipboard between setups beats having to switch users back and forth all the time.
I livestream a subset of my work on Asahi Linux [1], and I agree with almost everything you said and add:
- Read the chat, at least on and off. It also helps if you have friends who will ping you via an (out of band!) notification if someone says something important. I use both IRC and YouTube chat, and the IRC folks know that if they tag my name there I'll actually get a beep in my headphones and notice (see the sketch after this list).
- Get used to people correcting all your dumb typos and mistakes before you do.
- If you're streaming your work, streaming shouldn't take time off of your work. Make the process as friction-free as possible. I have a script that I launch that sets most of the environment up. Then when I feel like working I just click a couple buttons and send out a tweet and IRC message.
- You'll probably want one or two virtual desktops you use for streaming, but make sure there's nothing sensitive in the topmost window of any others in case you switch by accident. I accidentally showed my email inbox once; it was all Asahi stuff anyway on the first page, but it might not have been.
- If you do end up having to trim something out of a recording on YouTube, you can do that with the built in editor tools without changing the video URL. However, you will lose the chat replay.
- Announce ad-hoc streams a few minutes before you actually start anyway. That way you'll get at least a few people listening in from the get go.
- If streaming on YouTube and you suspect your internet or software might act up, use a scheduled stream even if you create it just in time. That allows you to reconnect even after some downtime. Ad hoc streams end automatically a few seconds after you disconnect, and then all your viewers get kicked out and will have to navigate to a new video page.
- If you do use a scheduled stream, don't forget to click "go live". That was a duh moment: I was speaking to nobody for a solid 10 minutes.
- Even if you don't stick to a consistent schedule, schedule some streams when you're planning to do something "interesting". E.g. I scheduled my M1 Pro bring-up and that got a lot more viewers than my usual streams.
- Audio quality matters. A lot. It makes the difference between a coding stream people will leave running even if they aren't watching the screen, and one where they'll get tired quickly. You don't need expensive hardware. Just a decent (vibration isolated!) mic mount and a cheap dynamic mic will do (I paid $3 for mine at a junk shop) - and processing. EQ, gate, compressor, maybe some multiband gating or compression (I use that as a trick to hide fan noise).
- Monitor your own audio, at least initially until you're confident in your set-up. If your latency is low enough, it shouldn't be distracting. I don't bother these days since I know my set-up works, but I do use semi-open headphones so I can still hear myself talking, and do a test at the start of the session.
- It helps if you stream from one PC and code from another. I use a scaled HDMI mirror output into a capture card on the streaming PC. I've had cases where I literally had to reboot while the stream kept going (though my audio stopped, because I run it on the main PC - that's a quirk of my setup, and you probably shouldn't do it like that).
And as I've found out twice already,
- Apparently YouTube's Content ID likes to false positive match keyboard typing noises against other keyboard typing noises (in "songs" that aren't). Dispute them when that happens, and complain loudly on Twitter/HN :-)
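For the out-of-band ping mentioned above, here's a minimal sketch of the idea in Python, assuming a plain IRC server and channel (both hypothetical); it just rings the terminal bell when your nick shows up, so nothing appears on stream:

```python
# Toy out-of-band mention alert. Server, channel, and nick are made up;
# a real client also waits for the 001 welcome reply before joining.
import socket

NICK = "streamer"                      # hypothetical nick to watch for
SERVER, PORT = "irc.libera.chat", 6667
CHANNEL = "#mystream"                  # hypothetical channel

sock = socket.create_connection((SERVER, PORT))
sock.sendall(f"NICK {NICK}\r\nUSER {NICK} 0 * :{NICK}\r\n".encode())
sock.sendall(f"JOIN {CHANNEL}\r\n".encode())

buf = b""
while True:
    buf += sock.recv(4096)
    while b"\r\n" in buf:
        line, buf = buf.split(b"\r\n", 1)
        text = line.decode(errors="replace")
        if text.startswith("PING"):    # answer keepalives or get dropped
            sock.sendall(text.replace("PING", "PONG", 1).encode() + b"\r\n")
        elif "PRIVMSG" in text and NICK.lower() in text.lower():
            print("\a" + text)         # \a rings the terminal bell
```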
> Stream on Twitch and then publish the recording on YouTube, if you can.
I think this is mostly about where you have your audience. I do YT only (because I'm too lazy to separately upload recordings) and I regularly get 30+ live viewers, up to 100 sometimes, which is honestly pretty good for a coding livestream. I have a bunch of regulars and then there's always a bunch of random people.
> Audio quality matters. A lot. It makes the difference between a coding stream people will leave running even if they aren't watching the screen, and one where they'll get tired quickly. You don't need expensive hardware. Just a decent (vibration isolated!) mic mount and a cheap dynamic mic will do (I paid $3 for mine at a junk shop) - and processing. EQ, gate, compressor, maybe some multiband gating or compression (I use that as a trick to hide fan noise).
Essential advice. Took me a full two weeks of setup and experimentation to get voiceover audio I was happy with. If you’re new to recording or live-streaming, highly recommend you set aside a similar chunk of time to get that right.
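For a sense of what that processing chain actually does, here's a toy numpy sketch of the two workhorses from the quoted advice, a noise gate and a downward compressor, applied to a mono float signal. This illustrates the math only; a real setup runs plugins in OBS or a DAW, with attack/release smoothing that is omitted here:

```python
import numpy as np

def gate(x, threshold_db=-50.0):
    """Mute samples below the threshold (toy version: no attack/release)."""
    thr = 10 ** (threshold_db / 20)
    return np.where(np.abs(x) < thr, 0.0, x)

def compress(x, threshold_db=-18.0, ratio=4.0):
    """Above the threshold, level rises only 1 dB per `ratio` dB of input."""
    level_db = 20 * np.log10(np.abs(x) + 1e-12)
    over = np.maximum(level_db - threshold_db, 0.0)
    gain_db = -over * (1.0 - 1.0 / ratio)     # gain reduction in dB
    return x * 10 ** (gain_db / 20)

# One second of quiet hiss (fan noise) under a louder 440 Hz "voice":
sr = 48_000
t = np.arange(sr) / sr
signal = 0.003 * np.random.randn(sr) + 0.5 * np.sin(2 * np.pi * 440 * t)
processed = compress(gate(signal))            # gate first, then compress
```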
> I had someone try to send a Twitter reset code with the goal of getting me to show the code on the video.
Wow -- that seems like a very low-risk, low-effort attack with a pretty decent chance of success. Experienced streamers will likely have notifications for anything not directly related to their stream turned off anyway, not just out of security awareness but also because it's a distraction; but does anyone know if less experienced people regularly get caught out by this?
I don't get how this could be used as an attack, unless whatever is displaying the tweet is vulnerable somehow? What kind of "reset code" are we talking about and why would just displaying it be harmful? Executing it I can understand, but just displaying?
Edit: I think I just figured it out. It's referring to a "password reset code" that you can get via the "forgot my password" option on login pages (not "code" as in "software code"). Displaying that code would allow others to (possibly) set a new password for your account, which would make sense as an attack vector in this case. Duh.
> So, what did I do? I switched from Logic to Reaper for my live mixing. Reaper is very cheap ($65, and you can just use it for free if you want), extremely streamlined (it weighs about 35 MB to Logic’s many GB), and has worked perfectly for me. I haven’t had one bad moment with it. It’s what they’d call robust. I’d recommend it for this purpose to anyone.
Guess who created Reaper? Justin Frankel. Guess what else he created? Winamp. Software like this is missing in today's era. This one piece of software is the counter-argument to the gazillions of GBs of Electron apps making the rounds today.
I wish we could go back to efficiency and small sized apps with big ambitions.
What GUI library, if any, is used by Reaper and Winamp? Or is it low level graphics code?
I feel like that's the biggest hurdle; you can get web developers by the dozen to craft specialized UIs, but native GUI developers will be Windows-only, or Mac-only (and then mostly because they're familiar with iOS), or Qt developers who are likely a dying breed, etc.
I don't feel like there's a strong alternative to cross-platform webapps (in electron) at the moment.
That said, I do strongly believe that any serious company (think Slack, Discord, Spotify) has the financial resources to create native applications for all platforms, with a web-based version on top. The issue there is that it's really hard, if not impossible, to get a consistent style across all native development platforms; the designers will insist on their company-wide design standards / language.
Therefore, native GUI toolkits should allow for the flexibility of web/CSS styling.
I've been trying to create a platform-agnostic styling language for UI designers. The idea is to create a flexible-enough representation of GUIs that each platform could implement it while maintaining their own visual identity.
One thing that has held me back is that I'm strictly a web developer - never built a native GUI. So I have no idea how realistic, feasible, or even desirable that is. Your comment makes me think it could actually be useful.
It's certainly very desirable, which means the fact that it doesn't already exist is a huge red flag. Looking at the mobile space as a corollary, there still isn't a decent cross-platform Android/iOS UI library. Is Xamarin even still a thing? Windows vs. macOS would have the same issues and more, given Windows' long legacy of UI history compared to Android; and if you want it to be truly cross-platform, you've got a couple thousand incarnations of Linux-style systems to deal with as well.
Based on market share alone[0] going Windows-only seems like a totally fine tradeoff if you want a native GUI.
The only cross-platform UI toolkits I know of are Flutter and Qt. Yeah, the red flag scares me, especially since I'm not well versed in native/low-level graphics. There must be a good reason why it hasn't been created yet.
Electron applications almost never use native controls, so that part of the argument doesn't matter. Given that, there's GTK, Qt, and whatever Java thing is in now that you can use to make cross-platform applications that function perfectly fine without being a super-heavy disaster.
One of the most interesting/underrated side effects of the pandemic is that it has forced a lot of people to become somewhat knowledgeable about things like camera quality/positioning, lighting, and sound/noise cancellation.
I was pretty amazed at what Bo Burnham could do by himself on his Netflix special given a year and change to experiment with doing all of the A/V work for a one man show in a small room.
I got involved with a local DJ event shortly before the pandemic doing lasershows, and we ended up having to go online immediately thereafter. I had some experience with OBS from using it to composite stuff for video calls (pre-pandemic), so I offered to help run an online version. And then I had to figure it all out.
The way we ended up doing it: DJs run their own sets at home (from different countries), then stream to an RTMP server I run on GCloud. I pull those feeds from home and composite them with transitions to the next DJ, then stream to a single Twitch channel. I had to teach myself not just the tech part (which was okay, I'm used to tech), but also how to coordinate DJ transitions in real time with them, while MCing the introduction of the next DJ and simultaneously crossfading the audio and triggering the video transition. There's a lot of "Okay, 5 minutes to go, you're live on the audio mix. Start when I'm halfway through your introduction text, since there's 2-3 seconds of latency. I'll give you a live monitor feed of the stream audio in real time via Discord, use that as your cue. <5 min later> <radio voice, reverb on> Aaand that was DJ foobar's amazing set! Coming up next we have <... moving faders at the same time...>" Sometimes it works better, sometimes it goes a bit wrong, and it's definitely not something I'd ever done before :-)
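For anyone curious about the plumbing, the dumb relay leg of a setup like this can be sketched in a few lines; every URL and key below is made up for illustration, and the actual compositing/transitions happen in OBS, not here:

```python
# Hypothetical relay: pull one DJ's RTMP feed from the ingest server and
# re-push it unmodified. The real setup pulls several such feeds into
# OBS for compositing; this shows only the pass-through copy.
import subprocess

INGEST = "rtmp://example-ingest.internal/live/dj1"    # hypothetical
TWITCH = "rtmp://live.twitch.tv/app/YOUR_STREAM_KEY"  # hypothetical key

subprocess.run([
    "ffmpeg",
    "-i", INGEST,   # pull the DJ's feed
    "-c", "copy",   # no re-encode: pass audio/video straight through
    "-f", "flv",    # RTMP expects an FLV container
    TWITCH,
])
```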
And DJs had to learn about cameras, compositing, audio capture and avoiding ground loops, etc, as well as the whole OBS side. I have a whole tech doc I send them with the OBS-related info, and we do test runs where I check that their audio/video is decent, their connection to the server is good, they have the A/V sync set properly, their audio level is good... Some other folks in this area have put together more general documentation on the audio setups and whatnot.
Also sent in a few patches to OBS, including one fixing a years-old unacknowledged bug that would silently kill audio in streams randomly after a few hours, depending on the exact setup. That one killed one of our shows 3 times and I vowed it'd never happen again. Turns out it was an infamous bug in the DJ restreaming community... that the OBS devs had always maintained was user error...
If anyone's interested in the details of running this kind of event, here's a presentation I gave with 3 other folks from the community: https://www.youtube.com/watch?v=5lHn73MsE7c
This is definitely the case. It's not like the guy doesn't know anyone in the industry. A few meetings with some of his buddies for rules of thumb with lighting placements and camera angles goes a long long way.
Since this is a tech site, it's probably akin to a programmer calling his infra buddy on the basics of setting up a semi pro home network. "Well you'll need a router, firewall, switch, NAS, and a server. Go for ESXi on your server, and don't worry about VLANs unless you want to set up a guest network. Gimme a call if you have any questions."
Nothing happens in a void. Bo did a great job, but he didn't just wing it. And that's ok.
The overwhelming majority of live streams today is content created on a PC by gamers/entertainers, for gamers/patrons watching it on a PC, and in this context you can safely stop worrying over the two most pressing issues - 29.97 vs 30 fps and partial vs full range. Just go with 30 or 60 fps and full color range, and everything will be optimal for both you and your viewers.
Or just go with 24fps, which plays fine on 30fps displays, uses less bandwidth (or the same bandwidth and has higher picture quality), and looks "better" due to 24fps being the frame rate of movies (and 30fps being the frame rate of straight-to-video daytime tv).
30 or 60 makes sense for fast motion, but most things people are shooting are talking heads and stuff like that, which don't really benefit from higher frame rates. 24 is good.
>and looks "better" due to 24fps being the frame rate of movies
I think this is just Stockholm syndrome from people used to bad frame rates. If the content was originally in 60fps don't butcher it by converting it to 24fps.
Sorry, but that's the worst reasoning I've ever heard when it comes to this FPS debate. 24fps is a bad framerate. Even for movies, with their "prerendered motion blur" that helps smooth the video a bit, it's a stuttering mess. Comparing it to watching static photographs makes no sense; an inherent quality of video is that it isn't static.
Don't forget that 46 fps was determined by Edison to be the minimum viable framerate to not cause eye fatigue but then it was decided to go down to 24 to lower the cost of movie productions. 24fps is literally just the low budget option.
The higher the fps, the smoother the video, the better the video. I'd pick 720p60fps over 1080p30fps any day if I'm somehow limited by bitrate restrictions.
24fps is definitely not just a low-budget option. People with very expensive cameras capable of shooting 60fps without any issues at all are still CHOOSING 24fps.
The debate about 24fps vs higher frame rates is nuanced. It's about trade-offs and style choices. Here's an example of a video discussing 24fps vs higher frame rates:
24 Hz is not necessarily a bad frame rate (it depends what for, really), but even disregarding that most display technology runs at 30/60 Hz, I don't think anyone can argue that 24 Hz isn't worse than any higher framerate in terms of visuals. Just like 60 Hz is worse than 144 Hz or even higher rates. The difference is very noticeable.
What's worse, how do you intend to display 24 Hz content on a 60 Hz monitor for example? Display one content frame on every 2nd or 3rd monitor frame? That will lead to speedup / slowdown. Maybe display one content frame every 3rd monitor frame, but drop every 6th content frame? That will display the content at proper speed but the result will be a jerky watching experience. Maybe think about interpolating the content by computing in-between frames dynamically? I'm not sure to what extent this is done in existing devices, but I'm positive the watching experience won't be optimal.
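The standard answer in practice is 3:2 pulldown: each 24 fps frame is held for alternately 3 and 2 refreshes of the 60 Hz display. The average rate comes out right, but the hold time alternates between 50 ms and 33.3 ms instead of an even 41.7 ms, which is exactly the jerkiness described above. A quick sketch of the cadence:

```python
# 24 fps on a 60 Hz display: 60/24 = 2.5, so frames are held for
# alternately 3 and 2 refreshes (3:2 pulldown).
REFRESH_MS = 1000 / 60
for frame, holds in enumerate([3, 2, 3, 2]):
    print(f"frame {frame}: {holds} refreshes = {holds * REFRESH_MS:.1f} ms")
# frame 0: 3 refreshes = 50.0 ms
# frame 1: 2 refreshes = 33.3 ms  ...vs. an even 1000/24 = 41.7 ms
```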
If it was somehow proven that the human perception system "samples" individual frames at a fixed framerate of 24 Hz, then you could make a case that 24 Hz is preferable to 30 Hz, but I'm pretty sure that is not the case.
I remember playing Halo on the original Xbox. I was distracted for weeks because it was an unpleasant, jerky experience: about once per second there was an annoying jerk when just walking in a straight line. At some point I noticed that on a friend's Xbox, Halo ran very smoothly - at least during scenes without a lot of action going on. I was already considering that his Xbox was a newer, stronger model, but then I noticed my friend had his Xbox set to NTSC mode. I'm from Germany, where, as in most of Europe, the usual video system was PAL (slightly higher resolution, but running at 50 Hz instead of NTSC's 60 Hz). I set my console to NTSC for Halo and it fixed my issue. The whole thing might be an issue in Halo's engine; maybe they weren't able to fix it in time before release, or fixing it might have caused huge headaches with multiplayer. In any case, this situation might illustrate why you don't want to put 24 Hz video on a 60 Hz screen if you can have 30 Hz video.
NTSC is 29.97 fps and PAL is 25 fps - PAL was higher resolution because it sacrificed frame rate. (Doesn’t change your point of course)
(Back in the cathode days seeing a TV set in Europe was like going to a strobe show.)
As TFA says (well, maybe it was in the comments), there are standard ways of conversion and you pretty much hit them, but newer equipment will be able to play back 24fps natively to give the “cinema experience”.
> What's better, a gallery exhibition of 20 photographs or 2000? It doesn't work that way.
Yes, it doesn't work that way: the number of frames you have and the number of pictures you select for a gallery exhibition are two totally different things; the only thing they have in common is that both are a number of pictures. Unless you consume video content by pausing and watching it frame by frame, which I doubt you do. And even in that case, a lower framerate doesn't mean more work was put into each frame - which would be the case for the gallery exhibition, but isn't the case for livestreaming, since you don't have time for that when doing something live.
> What I think has most surprised me about livestreaming, as someone who has lived and breathed live performance for most of his life, is how authentically live it feels to me as the performer. I feel the presence of the audience watching me through the ether. I feel their gaze, their judgement, and I swear I can feel their joy when I’m particularly in the zone and the music I’m making is really speaking from the heart. It sounds crazy, but the livestream experience has felt even more intimate to me at times than it has to play in a jazz club or a concert hall.
I wish I felt like this :(. To the extent to which I do--and maybe that is all he is feeling also, so maybe the real issue is that I just don't feel it is "sufficient"--is a kind of "muscle memory" for what being in front of an audience is like, in that if I close my eyes on stage I am sure I wouldn't immediately panic... but it really does feel alone and isolating for me to only have my own video and see nothing but the occasional chat in response. I know people are out there, but I don't have a feel how they are reacting--again, other than as a prediction of "audiences tend to work like X" as I have experienced them enough to mentally model them--to feel like they are still on the journey with me, which is, if nothing else, a lot of the fun on my side :(.
Another mostly free option for playing and recording real time audio from multiple sources over the internet (with live broadcasts) is https://jamkazam.com
I have never been able to get the lighting + camera + background trifecta right for my vids. One is always off, and it drives me nuuuuuuts. Hoping the vid linked to in the article will help, even though it's about the 100th I've watched <sob> <sob>
> I realized I could fix it during the stream by resetting the sample rate of the project, but even if I did that, it could reoccur again at any moment and was a constant source of worry, something I didn’t need more of.
I had a similar problem with delay as well in Logic, which gets fixed by changing the sampling rate, but the real fix is increasing the buffer size in the preferences.
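The trade-off behind that fix is simple arithmetic: each audio buffer adds buffer_size / sample_rate of latency, so bigger buffers give the CPU more headroom (fewer glitches and dropouts) at the cost of delay. A quick illustration with common buffer sizes at 48 kHz:

```python
# Per-buffer latency for common buffer sizes at a 48 kHz sample rate.
SAMPLE_RATE = 48_000
for buffer_size in (64, 128, 256, 512, 1024):
    latency_ms = buffer_size / SAMPLE_RATE * 1000
    print(f"{buffer_size:5d} samples -> {latency_ms:5.1f} ms")
# 256 samples -> 5.3 ms; 1024 samples -> 21.3 ms
```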
You're not crazy, but my guess is that the author has chosen the latter to more closely mimic the stage for a performance, which they were missing once everyone went home.
Here is a tip for white balance. Where possible, try not to sit too close to a large screen, or at least maintain a static distance and don't move your head much relative to the screen. The dynamic changes in blue-ish screen lighting wreak havoc on some skin tones.
One minute I'm Casper and the next I looked like I bathed in Mountain Dew!
It's annoying that I can watch great wireless video from my camera on my cell phone but can't stream with it. Also, using an external microphone with a phone is very hard and doesn't work in almost any video recording software.
So people resort to bulky hacks like HDMI capture cards etc.
We have all the hardware imaginable but are in a weird software rut.
In fairness, some of these limitations are due to using devices/cameras primarily meant for one thing to do another. Regardless of all the cool things they can do, smartphones were not built to handle all of the live capture and transcoding that you can do easily with a suitable camera + desktop/laptop.
I haven't used my phone's cam/mic for recording or streaming very much, but I know you can get phone video and audio into OBS via obs.ninja if both phone and computer are on the same LAN. You might be able to use your camera's video and your phone's mic audio as OBS inputs, then send the output of OBS into whatever recording or streaming application you need.
Or just get a dedicated mic. I haven't kept up with the market for a while but we used to always just get those Zoom (no relation) mics and hook them up to the audio-in on our DSLRs for cheap(ish) decent audio. I'm sure there are better options nowadays.
I thought this was a useful article for a beginner and sent it to my son, who's a jazz pianist in NYC. Instantly I got a text back saying he knows the writer — Dan Tepfer — and is indeed going to an Alan Hersch performance with Dan later this week.
The 16-235 brightness value issue I've heard about before, and indeed, it's absurd to me as well - especially when banding is such a huge issue in 8-bit video. We could really use those extra values at the lower end.
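For reference, limited ("video") range maps the nominal 0.0-1.0 signal onto just the 220 code values from 16 to 235, versus all 256 in full range; a quick sketch of the two mappings and what the squeeze costs:

```python
# Full range uses all 256 8-bit levels; limited/"studio swing" range
# reserves 0-15 and 236-255, leaving 220 distinct brightness steps.
# Fewer steps means more visible banding in smooth gradients.
def full_range(v):                    # v: normalized 0.0-1.0 luma
    return round(v * 255)

def limited_range(v):
    return round(16 + v * 219)        # BT.601/709 limited range

print(len({full_range(i / 1000) for i in range(1001)}))     # 256 levels
print(len({limited_range(i / 1000) for i in range(1001)}))  # 220 levels
```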