It's nothing new - even without LLMs there are automated tools that will try things to see if your application is vulnerable, or abuse a misconfigured nginx server. To be fair, to your point, LLMs are amazing pattern recognizers: "this pattern I've seen in this codebase applies to that codebase, so a vulnerability is likely". I'm unsure if they can "innovate" (still, recognizing patterns is enough). They can recognize that a pattern they've seen causes a crash, but I don't know if we're at the point where they can put two and two together and use a set of unrelated code issues to, for example, exfiltrate credentials.
Because an LLM saying "I got confused, dropped the database and then got scared and hid this from you" hides the "why" behind what LLMs do. I would also prefer if they were less sycophantic and argued with what I'm trying to do rather than treating the user as a god (e.g. "the algorithm you're trying to use is less performant than an alternative").
These were my thoughts as well and it's nothing new, I think, regarding testing altogether:
- testing libraries (and, in this case, the language itself) can have bugs
- what is not covered by tests can have bugs
Additionally, I would add that tests verify the coder's assumptions, not the expectations of the business.
To give the author the benefit of the doubt - I'd read the article as: having tests for a given thing ensures that it does the thing you built the tests for. This doesn't mean that your application is free of bugs (unless you have 100% coverage, can control the entire state of the system, etc.), nor that it does the right thing (or that it does the thing the right way).
I like to differentiate between coverage by lines and semantic coverage: sometimes you need to exercise a single line multiple times to get full semantic coverage, and better semantic coverage usually beats larger line coverage for detecting problems and preventing regressions.
Yes, mutation testing and similar techniques like fuzzing can help here, but sometimes you want to be more deterministic: there are usually a lot of hidden side effects that are not obvious, and they are probably the source of the majority of software bugs today.
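To sketch the mutation-testing idea (hand-rolled here, with hypothetical names - real tools like Stryker automate this): flip one operator in the code and check whether the test suite notices.

```javascript
// Original function and a "mutant" where >= was mutated to >.
function isAdult(age) { return age >= 18; }
function isAdultMutant(age) { return age > 18; }

// A test far from the boundary passes for both versions, so this
// mutant "survives" - the test never pins down the boundary condition.
console.log(isAdult(30) === isAdultMutant(30)); // true: mutant survives

// Only a test at the boundary distinguishes (kills) the mutant:
console.log(isAdult(18), isAdultMutant(18)); // true false
```

A surviving mutant is the signal: the line was executed by the tests, but its semantics were never actually checked.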
E.g. something as simple as
async function foo(url) {
  const data = await fetch(url);
  return data;
}
has a bunch of exceptions that can happen in fetch() (network failures, DNS errors, aborts) that are not covered by your tests, yet you can get 100% line coverage with a single deterministic happy-path test.
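To make that gap concrete, here's a sketch (hypothetical names; a stub stands in for the real fetch): one happy-path test executes every line of foo, so line coverage reports 100%, while the rejection path is never exercised.

```javascript
// Hypothetical example; fetchImpl is a stub standing in for fetch().
async function foo(url, fetchImpl) {
  const data = await fetchImpl(url); // can also reject: DNS error, timeout, abort...
  return data;
}

// One deterministic happy-path "test": every line of foo() runs,
// so a line-coverage tool reports 100% for it...
const happyFetch = async () => ({ status: 200 });
foo("https://example.test", happyFetch)
  .then((res) => console.log(res.status)); // 200

// ...while the failure semantics were never checked at all:
const failingFetch = async () => { throw new Error("network down"); };
foo("https://example.test", failingFetch)
  .catch((err) => console.log(err.message)); // network down
```

Exercising the same line twice - once resolving, once rejecting - is exactly the "single line, multiple semantic cases" situation mentioned above.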
Basically, any non-functional side-effect behavior is a source of semantic "hiding" when you measure test coverage by lines executed, and there is usually a lot of it (logging is another common source of poorly tested side effects). Some languages handle this better with their type systems, but ultimately there will be things that behave differently depending on external circumstances (even bit flips, OOMs, a full disk...) that were not planned for and do not follow from the code itself.
> Promise itself has no first-class protocol for cancellation, but you may be able to directly cancel the underlying asynchronous operation, typically using AbortController.
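A minimal sketch of that pattern, using a timer instead of a real network call (the delay helper is a hypothetical stand-in for fetch(url, { signal })): the Promise itself is never "cancelled"; the controller aborts the underlying operation, which then rejects.

```javascript
// The Promise has no cancel(), but the underlying operation can
// observe an AbortSignal and reject when it is aborted.
function delay(ms, { signal } = {}) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(resolve, ms);
    signal?.addEventListener("abort", () => {
      clearTimeout(timer); // cancel the underlying work
      reject(new DOMException("Aborted", "AbortError"));
    });
  });
}

const controller = new AbortController();
delay(10_000, { signal: controller.signal })
  .catch((err) => console.log(err.name)); // AbortError

controller.abort(); // aborts the timer; the pending promise rejects
```

This mirrors what fetch() does natively: pass controller.signal in the options, and controller.abort() rejects the pending promise with an AbortError.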
Agree. "Owning" in this context should mean: understanding the domain, working on new capabilities, and handling the fallout if anything goes wrong. Whether ownership is transferred to an AI or to a human, it ends with the new owner just handling new work, while the other two stay with the previous owner (who might keep providing support out of the emotional attachment of "I've built it").
Agree - domain experts lack the expertise in how things should be built. Developers lack the expertise in what should be built. In each case, one can grow into the role of the other, per what you say: "humans who do the translation between those who deeply understand a system and software". LLMs will extrapolate for both (for better or worse).
The only thing missing from the system is the AI GOV that defines the specification of work. Once that is commonplace, developers become as ephemeral as the code supporting the hardened GOV. That is what CANONIC.org is.
What a strange take - you dismiss valid criticisms of Spotify's product, just to venture off into the land of "well, you can create a Mac app with one sentence", as if that mattered here.
I worked for 7 years in a place where my technical insight slowly gave way to my decisions and expertise being questioned (this was after 3 years in a tech lead role and 2 years as a staff engineer). Sometimes the solution is just to walk away.
I think when you are new with good ideas, you are judged against average. If you are above average, you are listened to.
As years pass, you are judged against the standard you set, and if you do not keep raising that standard, you start being seen as average, even if you are performing the same as when you joined.
I've seen this play out many, many times.
When an incompetent person is hired, even if issues are acknowledged, if they somehow stay, expectations are reset to their level. The feedback stops, because if you complain about the same issues or the same person's work every time, people start seeing it as a you problem. Everyone quietly avoids them, so the person stays.
When a competent person is hired, it plays out the same way. After 3/5/10 years, you get the same recognition and rewards as the incompetent person, as long as you both maintain your level of competence.
However, I've seen (very few) people who consistently raised their own standards and improved their impact, and they climbed quickly.
I've seen people lower their own standards, and they were quickly flagged as under-performers, even if their reduced impact was still above average.
I agree with this summary to a degree. An additional problem arises when you simply cannot raise the standard because you lack the political influence to do so. As the article says - sometimes companies are comfortable with the status quo, regardless of the problems, whether they are technical or not. Another issue arises when product, rather than looking at tech as a partner in pursuit of a common goal, starts to see it as an underling.
While I can't say I've observed that kind of radical shift myself, one place I can see something similar is AI-assisted development.
Basically, the manager asks me something and asks AI the same thing.
I'm not always using so-called "common wisdom". I might decide to use a library or framework that AI won't suggest. I might use technology that AI considers too old.
For example, I suggested writing a small Windows helper program in C, because it needs access to WinAPI; I know C very well; and we need to support old Windows versions back to Vista at least, preferably back to Windows XP. However, AI suggests using Rust, because Rust is, well, today's hotness. It doesn't care that I know very little Rust, and it doesn't care that I would need to jump through hoops to build Rust for old Windows (if that's even possible).
So in the end I suggest something that I can build and have confidence in, while AI suggests whatever most internet texts written by passionate developers talk about.
But the manager probably has doubts about me, because I'm not a world-level, trillion-dollar celebrity, I'm just some grumpy old developer, so he might use AI to question my expertise.
You mentioned the tradeoffs of Rust, including the high level of uncertainty and increased lead time, since you'd need to learn the language.
The manager, now having that information, can insist on using Rust, and you get a great opportunity to learn Rust - while being totally off the hook, even if the project fails, since you flagged the risks.
“Truly I tell you,” he continued, “no prophet is accepted in his hometown."
- Luke 4:24
It's why people often trust consultants over the people inside the organization. It's why people often want to elect new leaders even if the current leaders are doing a decent job.
The baby almost always gets thrown out with the bath water.
I find this hilarious, given that I've experienced it from both viewpoints: 1. A consultant implemented their half-baked solution that continued to bite us for my entire tenure and was, imo, completely unmaintainable; how were they able to sell leadership on their ideas? Sometimes it's just snake oil. 2. In a new place, I'm preaching certain things to people who do listen and seem to want to do them - it makes me a bit uncomfortable, and it's a little scary how easily you can find acolytes. They do validate my suggestions, ask questions and, most importantly, think, so I am hopeful that I won't turn out to be a false prophet.
I've also played both roles myself at times. I've been the wise consultant. And I've been the Cassandra that nobody would listen to. My wisdom was never as good as presumed when I was the consultant. And my wisdom was far better than assumed when I was the Cassandra.
The prevalent pattern I see is things becoming mundane. The capabilities you enable stop being something only you could do - was your expertise ever there at all? Things running smoothly is taken for granted. Doing your job well becomes unexceptional.