
I've gotten mixed reactions whenever I share this, but when I program I like to record my screen with OBS.

* It's a mental hack to keep me accountable, especially now working from home. If I'm in an office anyone can look over and see whether or not I'm working. It started as an attempt to mimic this feeling at home, even though I'll be the only one to ever see the recordings.

* It allows me to go back and see how I worked in the past. I have a few videos of myself working from 2015 which I think is pretty neat just because of how different my workflow was back then compared to now. I'm not using the same tools or even on the same operating system.

* I'm working on video games, which is what makes this especially useful for me. If something visually interesting happens, or there's a graphical bug of some kind, I can go back and break down exactly what happened. I've stepped through videos frame by frame to debug; it's been surprisingly helpful.

* It allows me to go back and see my progress. I can see what I was working on any given day and how far I've come; it's just generally a good motivator. You can of course do this with git, but if you're working on something visual it can be nice to see it in motion rather than as a textual diff.



I discovered a while ago that all those errors and bugs that only appear when you demo something to an audience also magically appear when you record yourself demoing it to nobody. Maybe narrating a feature to a pretend audience takes the blinders off enough that you notice little mistakes you wouldn't have otherwise.


Very similar to the process of rubber duck debugging.


Back in my office days we were doing this constantly. The question "will you be my rubber duck?" always put a smile on my face.


I often wish I had a pet husky so I could walk him through some tricky Python and have him figure out my bug with me.


This is a fantastic tip, thank you! I'm definitely going to try this out.


Sometimes I do a full screen capture with my face on camera while I'm coding. Then at the end of all that, I'll even record a reaction video to the full recording.

Why? I REALLY enjoy the dopamine rush of struggling and then finding a solution. I see myself pulling my hair out, staring blankly at the screen, and then at a random moment of pure luck I find the solution, and it's literally euphoric.

I enjoy reliving those moments.


Nice to hear this! I'm a big fan of screencasts as a way to do async updates inside remote teams. Tools like mmhmm and Yac are really good at this, and it gives everyone a high-bandwidth walkthrough of whatever you're working on, however you do it; code, drawings, docs, etc. all work, so it covers a wide spectrum of activity. The rubber duck effect is a bonus!


What’re you guys working on that this is a regular occurrence? I just never run into walls like this when coding stuff up.


I'm working on a new collaborative editing engine for arbitrary data, using CRDTs. I'm trying to keep it highly performant. (Like, 300M edits / second performant).

I've been working on this problem for years and I've gone through dozens of design revisions. Sometimes I catch bugs in the design phase. Sometimes I only notice core problems after I finish implementing parts then attach a fuzzer. (Fuzzy boi is the best and the worst.)

Some design problems have taken me months to solve. One solution ended up with me rewriting thousands of lines of working code, written over several months.

We're making great progress; but sync engines are hard.


Debugging is a huge part of this. For example, on the web, where you can plop a breakpoint or print statement anywhere, you get a good level of transparency into what's happening, which helps resolve issues quickly.

In game development, where the GPU isn't going to spill the beans about what's happening under the hood that easily, you can be stuck for much longer.


Yeah debugging or testing theories for production only bugs (without adding risk) can be a big source for "aha!" moments.


I'm writing Swift code that calls Apple's UIKit. I'm constantly wasting time trying to figure out how to use the API properly, since it's buggy and poorly documented. Each solution brings relief, not euphoria.


Sounds like hell


I find SwiftUI a lot more fun, but I run into similar feelings using it; the documentation is maybe even worse :/


Multithreading problems. Algorithm problems where you have two variations of an algorithm that work for different combinations of inputs and you need it to work for both, so you have to merge the algorithms together; this is my most common programming experience. Realising how many problems a problem contains, such as constraint satisfaction, layout engines, and programming language design. Myers' algorithm and diff3. See my profile for links to my GitHub to see what I'm working on.


Reducing latency. Or any sort of optimization for that matter. It’s usually a slow methodical process, but every once in a while you look at the right trace and instantly know what the problem is and how to fix it.


It sounds intriguing, but where do you find the space to store all the recordings? A bunch of external drives? Feels like 8hr/day × 20 days/month of recording my multi-monitor setup would fill up my drive pretty fast.


If you’re working on open source, stream straight to YouTube or Twitch. Can be private or not.

I do this sometimes. The accountability hack works even better if someone could be watching.

A bonus is that it feels way more natural to narrate (aka rubber-duck) problems when you're streaming.


x264 veryfast at 1080p, 10 fps, and 2000 kbps is more than enough for plain-text recordings, and it won't take that much space.

You can go even lower with other encoders (x265), and lower still if you don't record audio at all.


2000 kbps = 0.25 MiB/s = 900 MiB/h?

That's only ~1.1 TiB per year doing it 5 h/day, 5 days/week, 52 weeks a year.
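The arithmetic can be sanity-checked with a quick awk one-liner (assuming the 5 h/day, 5 days/week schedule implied by the yearly total):

```shell
# Storage math for a 2000 kbps recording: bytes/s -> MiB/h -> TiB/year.
awk 'BEGIN {
  bps   = 2000 * 1000 / 8            # 2000 kbps = 250,000 bytes/s
  mib_h = bps * 3600 / 2^20          # ~858 MiB per hour
  tib_y = mib_h * 5 * 5 * 52 / 2^20  # 5 h/day, 5 days/week, 52 weeks
  printf "%.0f MiB/h, %.2f TiB/year\n", mib_h, tib_y
}'
```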


You only need 40kbps for acceptable audio, so I wouldn't worry about that at all in this ballpark of video bitrate.


If you record mono speech, opus can produce great results at 15kbps.
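For instance, an ffmpeg invocation along these lines (filenames are placeholders) encodes mono speech at 15 kbps with libopus:

```shell
# Downmix to mono and encode with Opus at 15 kbps; VoIP tuning favors speech.
ffmpeg -i in.wav -ac 1 -c:a libopus -b:a 15k -application voip out.opus
```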


If you just want speech, yes. For cheap generic audio I don't think I'd push it below 32 unless I really needed to shave bits.


A while ago I tried to see what compression I could get out of screen recording losslessly to a scratch disk and encoding afterwards as slowly as I could wait. I didn't write any numbers down, but the difference in efficiency was significant. Some observations:

- Of the lossless encoders in OBS/libav, utvideo was the best in both CPU usage and efficiency, followed by lossless ultrafast x264.

- An SSD can handle even uncompressed 24bpp 1080p60, which is 375 MB/s. Typical screen content compresses well below the ~100 MB/s write speed of an HDD. Fullscreen video does not, instead gradually filling the write cache until either OOM or thrashing.

- For onscreen content, I prefer the bitrate tradeoff of keeping PC color range and not chroma subsampling.

This technique isn't as effective for this use case of recording several hours daily, since re-encoding must be fast enough on average to keep up. It's best to already have a home server (any spare desktop will do). Otherwise, use AOM codecs, which are known for poor multithreading, to encode at full speed without hogging the CPU.

PS: temporal compression means that dropping the framerate makes surprisingly little difference with modern codecs. But I really should be writing down the results of my ad hoc tests...


(3 days later): Necroing with a link to this serendipitous related submission about building a NAS for professional video editing from 40 TB of SSDs: https://news.ycombinator.com/item?id=32235158

Yeah, on second thought, long-term screen recording like this would be a terrible waste of silicon. At 150 MB/s, 46 days of nonstop recording would exhaust the entire 600 TBW rating of a 1 TB consumer SSD. And for fun: a high-end 500 GB SSD writing at 3500 MB/s could burn through its 300 TBW in a single day, not counting the EoL slowdown.

Curiously, hard disks in RAID 0 are perfect for such workloads, yet the blogger still chose consumer-grade SSDs.
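Those endurance figures can be double-checked with another quick awk calculation (assuming continuous writes at the stated rates):

```shell
# SSD endurance math: TB written per day at a sustained rate vs. TBW rating.
awk 'BEGIN {
  tb_day = 150 * 86400 / 1e6                # 150 MB/s nonstop ~= 12.96 TB/day
  printf "days to 600 TBW at 150 MB/s: %.1f\n", 600 / tb_day
  printf "TB/day at 3500 MB/s: %.1f\n", 3500 * 86400 / 1e6
}'
```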


This is a phenomenal way to wear out an SSD for no real reason at all.


> An SSD can handle even uncompressed 24bpp 1080p60, which is 375 MB/s.

Well, it can for an hour.


You're using 4:4:4 (disabled chroma subsampling) to keep text readable?


If you have an old machine you can use as a NAS and run raidz2, disks are $7.50/TB or less: https://diskprices.com/

Screen captures also compress much better than live action since most frames are duplicates of their predecessor. So a cheap NAS can run for years before you start thinking of deleting VODs.


No need to record 4K 120 fps video if you're doing web development; something like 1080p at 10 fps might be enough, and it won't take a ridiculous amount of space.


The parent comment specifically mentions recording for game development, so 1080p at 10 fps probably isn't going to cut it.


Bump it to 20-25 fps then; it still won't take a ridiculous amount of space.


Unlike movies, the content of a screen recording usually stays roughly the same for minutes before the scene changes, so it can achieve a significantly higher compression ratio even at high fps.


I wrote more about it here: https://news.ycombinator.com/item?id=32223240

But the TL;DR is that ffmpeg can remove duplicate frames:

    ffmpeg -i in.mkv -map 0:v -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4


1080p can easily mean the code is unreadable if you have a decent-sized screen.


I've been streaming some of my side projects on Twitch, with OBS.

During the stream, I keep up a fairly constant spoken description of what I'm doing, what I'm thinking, what problem I'm stuck on, etc.

I've noticed I've also been speaking my thoughts out loud when programming, but not streaming. It ends up being a continuous "rubber duck" conversation, and feels (completely subjectively) like it helps me develop easier/better.


Yeah, that's neat! Especially the part about reviewing the video. Back in the day we would use VHS to record the game and capture rare glitches to review. Nowadays our QA runs with OBS always on and can attach clips to bugs. It would be cool if every dev had it too.


I've found this too... At work I regularly make videos of what I'm doing for other people. Then I realized how much stuff comes up in the videos because I'm being attentive, and I started making videos I never share. It's like it splits my attention so I both act and watch myself acting.


I've started doing this more. The videos of my work have always outlasted the work itself. Even though I technically could dig up old compilers or try to update my dependencies, I rarely do. So whatever was captured on video becomes the only artifact of my older programming projects.

I used to think that video formats would no longer be supported over time, but even the oldest weird video formats still play in VLC and MPC, and probably would work fine if uploaded on YouTube.


I did the same for a while. It's a neat productivity hack. There are a few services that market accountability by hooking you up with strangers. Both of you must have your webcams enabled, and you just work on whatever you need to do for a set period of time without talking.


I used one such service. It was such a drag; I hated it. Does anyone actually use it repeatedly with different strangers?

I think I'd prefer live streaming to keep myself accountable over one-on-one sessions with both cams on.


How do you set up OBS to keep the recording sizes tolerable?

I've used it before on engineering-grade machines, but it doesn't do so well on an "everything is in the cloud, so a word-processor-quality laptop will do" machine. Any advice?


I have a 4K display but record at 1080p. It's a bit blurry, but I'm not really using it to read small bits of text, just to see the general state of things. I record at 20 fps as well.

I have a small shell script that takes all the video files recorded by OBS and runs them through this ffmpeg command:

    ffmpeg -i in.mkv -map 0:v -vf mpdecimate,setpts=N/FRAME_RATE/TB out.mp4
Using mpdecimate removes duplicate frames, so if nothing is happening on your screen, those frames get dropped (smaller changes get ignored, like my clock ticking over the seconds).

So a ~1 minute video of you thinking for 40 seconds can get reduced to 20 seconds. It's not uncommon for some of my video files to go from multiple GB to just ~100 MB after removing all the pauses.


Very nice tip, especially the part about being able to rewind when something happens.

What about hard disk space? How do you handle it?


The space requirements can be very low capturing something like writing code, where only 1% of the screen might change second-to-second.

    ffmpeg -f gdigrab -i desktop -c:v libx264 -preset medium -fps_mode vfr -crf 0 -an -vf mpdecimate capture.ts

(That's for Windows; use -f x11grab -i $DISPLAY on Linux under X11.) This produces a lossless video that averages 102 MiB/hour for me (1920x1080 @ <= 60fps). That's about two cents a day at current disk prices, and it's easy to upload as a private YouTube video if you don't want to lug the files around.

If you don't like the CPU hit, use -preset ultrafast to record the capture with less processing power (at the cost of a larger file), and then re-encode that file later using -preset slower. There's no quality loss if you used lossless mode (-crf 0), and for content like this the savings are especially large (around a quarter of the ultrafast size, in my experience).
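Sketched out for Linux/X11 (a hypothetical example; filenames are placeholders, and the Windows capture flags from the command above apply equally), that two-step workflow might look like:

```shell
# Step 1: cheap lossless capture while you work (low CPU, big file).
ffmpeg -f x11grab -framerate 30 -i "$DISPLAY" \
  -c:v libx264 -preset ultrafast -crf 0 -an -vf mpdecimate -fps_mode vfr scratch.mkv

# Step 2: later, re-encode slowly; still -crf 0, so no quality is lost.
ffmpeg -i scratch.mkv -c:v libx264 -preset slower -crf 0 -an final.mkv
```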


Wow, I did not know that. Awesome!


That’s an interesting idea. I might try this myself


Hmm, that's actually a very clever hack.

I'll try it!


why don't you just stream on twitch


Because of sharing internal information from the screen? That would be my first thought.


BTW, do you think Twitch is best for this? I'm working with JS for the first time and my tools seem really slow and poor. I'm sure I'm doing something wrong, so I figure watching others would be great.


I would do that but my workplace doesn’t allow it ;(



