
It means you take responsibility for maintaining the server forever, i.e. dealing with TLS certificates, SSH keys, security updates, OS/package updates, monitoring, reboots when stuck, redeploying when the VPS is retired, etc. Usually things work fine for a year or two, and then stuff starts to get old, needs attention, and eats your time.


As someone who runs such a VPS, this is all a non-issue. Running an HTTP service is so trivial that once I set it up, I don’t even spend an hour a year maintaining it. Especially with Caddy, which takes care of all the certs for you.

And this is also bearing in mind that I complicate my setup a bit by running the different sites in docker containers with Caddy acting as a proxy.

With storage volumes for data and a few Bash scripts, the whole server becomes a throwaway that can be rebuilt in minutes if I really need to go there.
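For flavor, here's roughly what such a rebuild script can look like. This is a sketch only: the image names, ports and volume names are placeholders, and it's shown as a dry run that prints each step instead of executing it.

```shell
#!/bin/sh
# Dry-run rebuild sketch for a throwaway VPS: prints the steps a fresh
# server would run. All names below are placeholders for illustration.
set -eu
run() { echo "+ $*"; }    # swap the echo for "$@" to actually execute

run apt-get update
run apt-get install -y docker.io
run docker volume create site-data
run docker run -d --name caddy --restart unless-stopped \
  -p 80:80 -p 443:443 -v site-data:/data caddy:latest
echo "rebuild plan complete"
```

The point is less the exact commands than that the whole server state reduces to one short script plus the data volumes.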

And for sure any difficulty and ops overhead pales in comparison to having to manage tooling and dependencies for a typical simple JS web-app. :)


Oh no! Issuing SSL certificates! The horror!

I really doubt that people who can’t install an SSH key should be able to practice software engineering. Sometimes I think that software engineering should be a protected profession like other types of engineering. At least it would filter out the people who can’t keep their OS up to date.


This is not about how easy or difficult it is to issue TLS certificates, to configure SSH keys or to update the OS. It's about having to actively maintain them yourself in every possible situation until eternity, like when TLS versions are deprecated, SSH key algorithms are quantum-hacked, backward-incompatible new OS LTS versions are released, and so on. You will always have new stuff come up that you need to take care of.


This is all trivial, and can and should be automated. Furthermore, all of your arguments can easily be applied to NodeJS version deprecations, React realizing they shipped a massive CVE, etc.

I will die on this hill: parent is correct - the ability to manage a Linux server should be a requirement to work in the industry, even if it has fuck-all to do with your job. It proves some basic level of competence and knowledge about the thing that is running your code.


I'm curious about this trivial automation. Let's say the new OS LTS version no longer includes nginx, because it was replaced by a new product with a different config format. How does the automation figure out what the new server package is and migrate your old nginx config to the new format?

I agree with Node.js version deprecations being a huge problem and personally advocate for an evergreen WebAssembly platform for running apps. Apps should run forever even if the underlying platform completely changes, and only require updating if the app itself contains something that needs updating.


The answer is to write your server in portable C++, and just rebuild it for whatever new OS you're dealing with.

The speed. Imagine the performance. There are plenty of mature C++ web server frameworks, it's really not difficult. If you're afraid of C++, you could choose something else. Rust if you're insane, or golang if you're insane but in a different way.

Anyway. Nginx is not going away, so the argument is a bit silly. "What if JS went away?" Same thing.


If an LTS of an OS replaced nginx with something else: a. it would be announced with great fanfare months in advance; b. if you don’t want to switch, add apt / yum / zypper install nginx to your Ansible task, or whatever you’re using.


The things that you just described are not automation, but human activities needed to tackle the new situation by following news and creating new automation. Which kind of proves my point that you cannot prepare for every unexpected situation before it actually happens. Except maybe with AI in the future.


When AWS announces that they’re EOL’ing the Python or NodeJS version in your Lambda, or the version of your RDS cluster, etc. you also are required to take human action. And in fact, at any appreciable scale, you likely want that behavior, so you can control the date and time of the switch, because “zero downtime” is rarely zero downtime.


Yes, and like I mentioned in another comment, I consider this a major pain point and problem with Node.js-based applications. I have high hopes that eventually there will be an "evergreen" WebAssembly-based Lambda function runtime.


I keep reading posts like this, but the people who say this never actually seem to enlighten the rest of us troglodytes by, say, writing a comprehensive, all-inclusive guide to doing this.

If it's so easy, surely it's no big undertaking to explain how one self hosts a fully secured server. No shortcuts, no "just use the usual setup" (we don't know what it is!), no skipped or missed bits. Debian to Caddy to Postgres, performant and fully secure, self upgrading and automated, from zero to hero, documenting every command used and the rationale for it (so that we may learn).

Or is it perhaps not as simple as you say?


The parent I responded to was discussing issuing certs, configuring SSH keys, and updating an OS. Those are all in fact trivial and easily automated.

What you have stated requires more knowledge (especially Postgres). You’re not going to get it from a blog post, and will need to read actual source docs and man pages.


The original claim was "People shouldn't even be in the industry unless they can administer a Linux server, even if that has nothing to do with their role." It is a very significant moving of the goalposts to now suggest this is all about "updating an OS". That's not a good faith claim.

This whole thing is merely cheap online snark masquerading as wisdom. No, not all SWEs know how to maintain Linux servers, and many (most?) SWE roles have all of zero overlap with that kind of work. If businesses could fire all their expensive server admins and replace them with some college kid and a $5 VPS, they would long since have done so.

If this is anything more than poseur snark, put your money where your mouth is and either write a comprehensive resource yourself, or at least compile a list of resources that would suffice for someone to be able to securely run and maintain a live server in production. No, not Hello Worlds, actual prod. Then, when next this comes up, link us to your guide rather than just spraying spittle on the plebs who lack your expertise.

Do something more constructive than low effort snark.


I intermingled the two claims, you’re correct, and was not intending to move the goalpost. I apologize.

Claim one: setting up unattended-upgrades, SSH keygen, and automating cert rotation is trivial and easily automated.
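As a sketch of what that one-time setup amounts to on a Debian-family server (the package names and hostname here are assumptions, and certbot's packaged systemd timer handles renewals), the whole recipe fits on a screen. The block prints the recipe rather than executing it, since the real commands need root on the target machine:

```shell
#!/bin/sh
# One-time setup recipe for the three "claim one" tasks, printed rather
# than executed; Debian package names and the hostname are placeholders.
set -eu
recipe='# 1. Keys instead of passwords (run once from your workstation):
ssh-keygen -t ed25519
ssh-copy-id you@your-vps.example

# 2. Automatic security updates (run once on the server):
apt-get install -y unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# 3. Cert rotation: the certbot package ships a systemd timer that
#    runs "certbot renew" twice a day, so installing it is the setup:
apt-get install -y certbot'
printf '%s\n' "$recipe"
```

After this, the remaining "maintenance" is reading the occasional unattended-upgrades mail.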

Claim two: you should know how to manage a Linux server. Here are docs.

https://tldp.org/

https://www.man7.org/linux/man-pages/dir_all_by_section.html

https://nginx.org/en/docs/

https://www.postgresql.org/docs/current/index.html


They don't write the guide because by the time they've written the guide to an appropriate level of specification, the result they've produced is an off-the-shelf service provider not unlike the ones they're railing against.


I self host my own server and this isn't something that takes much time per year. You're making it sound like a day job. It's not really. As long as you have a solid initial config you shouldn't have to worry.


Exactly. Also, being that my specialty is writing software and not server maintenance, no matter how much of an effort I put forth there's substantial risk of blind spots where holes can lurk.

I felt more comfortable maintaining a VPS back between 2005 and 2015, but at that point attackers were dramatically less sophisticated and numerous and I was a lot more overconfident/naive. At least for solo operations I'm now inclined to use a PaaS… the exception to that is if said operation is my full time job (giving me ample time to make sure all bases are covered for keeping the VPS secure) or it's grown enough that I can justify hiring somebody to tend to it.


Caddy server even does SSL for you automatically.


Caddy runs on top of Go's excellent acme library that handles all of the cert acquisition and renewal process automatically.

I get that if you get a problem then it'll take a bit of work to fix, but all of this seems like a lot less work than dealing with support for a platform you don't control.


Time is a precious (and really expensive for SWEs) resource, why should one spend it on updating certs and instances?


They shouldn't; that's why self-hosted PaaS tools already do it for you. That cloud services also do it for you isn't a differentiating reason to use them instead.


You don’t, you automate it. This has been a solved problem for literally years.


Now you have to maintain the automation. There is nothing wrong with that. There is nothing wrong with building your own server. There is nothing wrong with colocation. There is nothing wrong with driving to the colo to investigate an outage. There is nothing wrong with licensing arm and having TSMC fab your chip. There is nothing wrong with choosing which level of abstraction you prefer!


certbot and SSH keys are things you set up once

I haven't rebooted my DO droplets in something like 5 years. I don't monitor anything. None of them have been "retired".


This is the kind of stuff a software developer should have absolutely no problem managing. It's crazy to me that so many software developers hate the idea of maintaining a computer.


Just ask Claude to do all that :). He is excellent at installing & managing new servers and making sure all security patches are applied. Just be careful if it's a high-risk project.


You clearly haven't tried doing that in quite a long while.

Using SSH keys + fail2ban means that for a simple static site, it will be sufficient for a decade at least.
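As a concrete illustration of how small the "set up once" delta is, key-only SSH is essentially two config lines. The sketch below edits a scratch copy so it's safe to run anywhere; on a real server you'd apply the same edits to /etc/ssh/sshd_config, reload sshd, and install fail2ban (its sshd jail is on by default on Debian):

```shell
#!/bin/sh
# Key-only SSH in two sed edits, demonstrated on a scratch copy of a
# minimal sshd_config; the real file lives at /etc/ssh/sshd_config.
set -eu
cfg=$(mktemp)
printf '#PasswordAuthentication yes\nPermitRootLogin yes\n' > "$cfg"

# Disable password logins entirely, and forbid root password logins.
sed -i 's/^#\{0,1\}PasswordAuthentication.*/PasswordAuthentication no/' "$cfg"
sed -i 's/^PermitRootLogin.*/PermitRootLogin prohibit-password/' "$cfg"

cat "$cfg"
```

With passwords off, the brute-force traffic fail2ban blocks can't succeed anyway; the jail just keeps the logs quiet.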

TLS certificates get auto-renewed with letsencrypt every 3 months via certbot.

Installing security updates depends heavily on what your threat model is; if you're just displaying some static content you fully own, you'll usually be fine.

Literally never seen a VPS being "retired", if it happened to you, change provider.

I've got a bunch of VPS running for 10+ years, I never need to touch them anymore.

My homelab has been going strong for the past 8 years. I did have to do some upgrade/maintenance work to go from being an old laptop without screen to a minitower low power machine, and when I added 30TB of storage. Other than that, it's running smoothly, it also uses TLS and all the rest.


vs. trusting someone else to do all that for you, and do you then verify that it gets done properly?


When buying the infrastructure as a managed cloud service, yes, I trust that they've got people handling it better than I could myself. The value proposition is that I don't even see the underlying infrastructure below a certain level, and they take care of it.


This is extremely easy with tools like Dokploy tho... I use Dokploy locally to manage all my VPSs + home server. Truly good stuff, and I don't believe your quip at the end; it feels like poisoning the open-source waters in favor of consolidated, anti-democratic cloud platforms.

It's way way way way easier managing a basic VPS that can be highly performant for your needs. If this was 2010, I'd agree with you but tooling and practices have gotten so much better over the last decade (especially the last 5 years).


Maybe you're right - I've never tried dokploy, but from documentation it sounds like mostly a deployment, monitoring and alerting tool. For me the problem has always been that once you get the alert (or something just stops working), a human needs to react to it and make things work again. In cloud services you mostly pay for them providing the human, and in self-hosting you're the human.

I can see though that today's AI models could eventually replace the human in the loop and truly automatically fix every possible situation.


I must be using the wrong cloud services. Whenever a part of our app goes down someone on the team still needs to respond to it.


You might be right. I've been mostly using serverless / managed cloud services such as AWS Lambda, API Gateway, S3, DynamoDB for the past 10+ years. When I've needed to respond, it's been because I myself deployed a bad update and needed to roll it back, or a third party integration broke. The cloud platform itself has been very stable, and during the couple of bigger incidents that have happened, I've just waited for AWS to fix it and for things to start working again.


you actually need new ops teammates, not new cloud services :)


Yeah, I've had more downtime on managed DBs & cloud servers than on my own managed VPS. And if something happens, with a VPS I can normally fix it instantly, compared to waiting 20-60 min for a response just to be told they've started fixing it. And when they fix it, it doesn't always mean your instance automatically works.


Agreed, Dokploy is great, not sure why you got downvoted for the suggestion.


IDK, I only found out about Dokploy six months ago. The tools nowadays for managing small hosted solutions are absolutely amazing. You can do a lot with a single VPS if you avoid bloated software choices.

People often forget there is a massive economy out there for niche solutions and if you're a small team you don't exactly need a large slice to make a nice life for yourself.


Sad to see it go. The philosophy of CDK has been to offer a shared ecosystem between IaC, backend code and frontend code, allowing configuration, data structures and libraries to be shared between all of them. It has made development more unified, with less redundancy and manual work. Personally I don't want to repeat some stuff in a special Terraform language if I can find a way to manage the whole application in TypeScript.


Pulumi


Thanks, will definitely look into it. I first used Pulumi when it was just a cloud platform, but it seems to be a more general devops tool now.


I feel the opposite about SQL: it is often shoehorned into use cases that don't fit the relational/transactional database model at all. My own default database is AWS DynamoDB, because it fits 90% of my own use cases quite well and offers a fast approach for iterative development. Recently I've been evaluating how to find the same level of abstraction in open source databases, and MongoDB feels like the closest match. Postgres with JSONB comes second, but manipulating JSON with SQL is not very comfortable and tends to result in subtle problems, e.g. when something is NULL.


Scrolling with mouse scroll wheel a few hundred thousand kilometers at a time is so much work that I gave up :-(


I'm thankful that the view-source:https://joshworth.com/dev/pixelspace/pixelspace_solarsystem.... allows one to see the annotations, since clicking on a planet jump-scrolls past them. My gratitude for not baking such things into 8MB of JS

Also thanks to the view-source I learned that it offers different units, including buses, the Great Wall of China, etc.


Repetitive strain injury any% speedrun


Click on the planet symbols at the top to fast track.


It's quite cool on the phone


At minimum, the government gets a "ping" when identified citizens visit adult sites requiring the age check, so they can keep a record. In worse scenarios, maybe some identifier leaks through that can also identify which site they visited. And of course, the identification apps can be hacked through supply chain attacks etc.


Without knowing the specifics, this is not necessarily the case. It could be implemented without needing to ping "the government". As a strawman idea, there could be a monthly refreshed distributed database of booleans per citizen identity and accessed through a keyed hash.
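A toy version of that strawman, with every name and the file format invented here for illustration: the verifier holds a list of HMAC(key, citizen_id) values for eligible citizens, so the raw id list never leaves the issuer and a lookup answers only yes/no, without any per-visit ping back to the government:

```shell
#!/bin/sh
# Toy keyed-hash eligibility lookup (all names invented): the database
# stores HMACs of eligible citizen ids under a secret key, so possessing
# the file reveals no ids, and a query yields only a boolean.
set -eu
KEY='monthly-rotated-secret'
DB=$(mktemp)

hmac() { printf '%s' "$1" | openssl dgst -sha256 -hmac "$KEY" -r | cut -d' ' -f1; }

hmac 'citizen-123' > "$DB"    # monthly refresh: add one eligible citizen

# Verification check: hash the visitor's id and look it up.
if grep -qx "$(hmac 'citizen-123')" "$DB"; then echo over-18; else echo unknown; fi
if grep -qx "$(hmac 'citizen-999')" "$DB"; then echo over-18; else echo unknown; fi
```

A real scheme would need blinding or zero-knowledge proofs to stop the verifier correlating queries, but it shows a per-visit government ping isn't structurally required.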


There is a very plausible attack. Set up a porn website and buy ad traffic in France; once users arrive, claim their identity needs to be verified. In the background, start the process of opening a bank account at one of these online banks and act as a relay in the verification process.


Is that an actual threat model, or are you just making stuff up?

I'm asking because even OAuth would make this kind of attack vector impossible, as the referrer and redirect URLs are verified - and I sincerely doubt they're so incompetent as not to do something similar in such a context.


It is a relay attack.

There are a lot of verification platforms, so the idea is that the user is asked to be verified and their proof of identity is reused live for something else. In the address bar, the user sees "dangerousporn.com" -> "safeidentify.com"

The operator of "dangerousporn.com" starts (manually) an application to a [bank account / crypto exchange "bank.com"], using a fixed residential proxy (Luminati / Oxylabs, etc).

Once a victim arrives on "safeidentify.com", they are asked to perform the actions that "bank.com" requires (upload your ID, turn head left, turn head right, up, down).

"safeidentify.com" plays back the recorded video on the KYC platform of "bank.com" using an emulated Webcam.

Difficult? Yes and no, but it's manually doable on a case-by-case basis, and you don't need thousands of victims as it is really worth it.


To begin with, you've already switched the attacker from an advertiser to the operator running the website.

But ignoring that: none of what you've written there is enabled by an identity provider hosted by the state. These scams already exist today, and various "special" users fall victim to them.

But let's ignore that too: these verifications are usually done interactively and cannot simply be played back, as you need to actually react to the actions of the person verifying your identity.

But let's ignore that too: it's _highly_ unlikely the service will make users upload IDs and get verified via video etc. on every connection. I'm gonna bet this is a one-time action, and after that you'll probably just have to authenticate via 2-3 factors (username, password, biometric, SMS, email, e-pass, certificate, etc.), so what you're insinuating (that this service makes people numb to such situations) is implausible. Especially in the context this scenario is in: merely verifying >18 y.o.


> you need to actually react to the actions of the person verifying your identity

Yes, that's exactly the point: use porn websites as a hook to convince the user to perform your actions to verify their "identity"


No, that would defeat the entire point, and any such system should be fought indeed. It's possible to build systems that explicitly do not have this property.


I don't really know what to do with a dumbphone, since I don't get any phone calls or text messages any more. Everything goes through apps, email or web nowadays.


I understand the sentiment, but I don't get how you could draw more complex software plans by hand. I usually use Draw.io/Diagrams.net, and the drawings get pretty large; I need to reorganize dozens of boxes several times while planning the architecture.

OTOH if the plan is very simple and obvious, and can be drawn out in one go, it doesn't really need a diagram in the first place, so I skip spending time drawing the obvious stuff.


OP here.

I don't often do very complex software plans like that. My working notes are often on a smaller scale like individual features or so. If we need to document the full architecture for the project, I'm happy to do that with digital tools.

But while I'm planning parts of it or designing it, I do better with pen and paper. My main issue with many of the digital tools I've tried comes down to the added friction of switching tools in the app when I move between circles, rectangles and text, and the fact that I find free-hand drawing with a mouse really difficult.

> OTOH if the plan is very simple and obvious, and can be drawn out in one go, it doesn't really need a diagram in the first place, so I skip spending time drawing the obvious stuff.

I think there's a middle ground where it might be easy to draw on one go but deciding what to draw and how things work together and what's needed requires iterations and for that, thinking through drawing and writing helps me a ton.


I guess there are many cases where you don't really know how complicated or simple the solution will end up being, and you start drawing it while thinking about it. I must admit that those are usually the most interesting parts of the work.


I actually do all this stuff in my head and use hierarchies of bullet points in a text file to externalize some stuff. Some of these may end in arrows that point to a different process.

I never use paper because I'm always moving these bullet points around and inserting stuff between them. Apps are too slow.

I never write down all the information because these notes are enough for me to reload everything. It's pretty easy to see that I didn't write something when there's a gap in my notes. I never wrote it down because I'm going to come up with the same or better solution quickly.

This isn't really helpful for anyone else and doesn't work well with pair programming.


My main issue in the EU is that cloud platform services are not very mature compared to AWS, Azure, GCP. They have some of the basic stuff like VMs and storage, but almost nobody has FaaS or the smaller services like SQS, SNS, or a scalable pay-per-request database like DynamoDB. I hope these things become available so that it becomes possible to build scalable serverless apps here. Ultimately these services should be standardized like S3 did for storage.


Cursor is not about vibe coding. Vibe coding means you don't care about the AI's code output as long as it works. Cursor is all about efficiently reviewing the AI-proposed changes and hitting Tab only when you approve them. Much of the editing process is hitting Esc because the proposed change is not good.


I know this is a meta point but I'm pretty sure vibe coding is just an X meme that means whatever the poster intends. I'm not sure you can say vibe coding does or doesn't care about relative quality


Yeah, I'm afraid "vibe coding" is a term that quickly lost its meaning because everyone was using it to mean different things.

Some people use it to mean using AI for writing code in general. I've preferred for it to mean when someone who doesn't know how to code uses AI to write code and doesn't understand the output.


Almost, but not quite. As per Karpathy's definition [0], it's not about not knowing to code (he obviously does), but rather not caring - "fully give in to the vibes" and "forget that the code even exists". So the closest implementation to this ideal would probably be something like lovable.dev, that fully hides the code from you, because if you can't resist the need to look at the code, you're not fully "vibing".

[0] https://x.com/karpathy/status/1886192184808149383


Somehow, to me that's even worse.



Agreed, that's how you'll have much more success using it. Basically, I ask it to write 4-10 lines at a time; if the lines are too many for me to comfortably review, I reject the change and ask more specifically.


There is essentially no difference if you give the agent total control in Cursor; you can code entirely via prompt without ever touching the code after you create a workspace.

That is to say, I can't think of any greater support for vibe coding: you can open up a chat prompt and have at it.


It's funny how back in the 1990s the concept of software was different. You might buy an actual shrink wrapped package with an install disc and be happy with it for years. Nowadays it would be unthinkable to use software without getting regular updates (at least security updates) and always being able to install the latest version.


Isn't that partially because even your stove is now connected to the Internet? The attack surface changed from "when I connect my USR modem" to "someone can portscan all of IPv4 in reasonable time"

I do gravely miss the ability to actually have the bits, and will take any steps I can to grab an offline installer if offered


I got my first fixed IP address and always-on Internet connection in 1995 and I don't particularly miss the dial-up times before that. I prefer to have everything connected and online all the time, but also with proper security.

