I am happy that so many people get to experience the feeling other software engineers and I feel when we write code. OpenAI has democratized some of this feeling, and now people who never knew how to code can experience the rush of creating something new with it.
In a world where many of the AI researchers who develop these amazing models are slowed down by their lack of "normal" engineering skills, I think it's very apparent that software engineering goes well beyond writing a snippet of code.
Like the internet, Stack Overflow, forums, and other advances in technology that opened new doors and opportunities and democratized the industry, this is another one, and a big one. The smart ones will use this push as an entry point to the software development world rather than looking for an oracle function for software development (which does not exist yet).
> the smart ones will use this push as an entry point to the software development world rather than looking for an oracle function for software development (which does not exist yet)
Very much agree. I think mentorship becomes even more important now. I have a friend who's switching careers and starting to learn software engineering. Just for fun, I back-seated him while he tried to build a really simple app using GPT-4.
Even with really fine-tuned prompts, some of which I helped with, the code that was generated was barely readable, had performance issues, or in some cases was just completely broken. At the time, I made the quip that for junior/entry-level folks it's like getting cybernetic implants in your legs before you know how to walk or run.
People who use ChatGPT to spit out code as a method of learning software engineering are going to be seriously deficient in fundamentals like performance.
I've already worked with way too many engineers who copy/paste code without understanding what it does.
I've seen similar results with the teams I work with. The strongest engineers, the ones who can spot the deficiencies or errors in LLM-generated code, generally get a huge productivity boost, as they can compare the "shape" of the output to what they would expect for a given problem.
The junior devs, lacking the experience that provides the context or knowledge to analyze the "shape", let all manner of minor and major issues from the generated code slip past.
I notice this particularly with, say, less popular frameworks or code libraries, where many LLM code generators appear much more likely to hallucinate invalid code. I see similar issues with DSLs, where the LLM starts to just guess at the DSL syntax. I really hate when an engineer new to a framework goes to the LLM first, without at least spending a little time learning the framework for the task.
All said, I suspect these code generation tools will get good enough that the error rate drops to the point where it may not matter in the longer term.
It may turn out that ChatGPT results in a new generation of developers who produce terrible code, full of flaws, with no understanding of how it actually works.
Or... it might be that ChatGPT helps newcomers get over that horrendous initial learning curve, start by writing bad code... and then get better at it.
I'm currently cautiously optimistic that the latter scenario is going to win out.
In my experience, newcomers and junior engineers don't get better by writing more bad code. They learn mostly through three methods: 1) finding flaws in their code the hard way (slow), 2) getting careful feedback from more experienced people (which has to be done well), and 3) reading better code for a problem they already understand.
I'm not sure GPT helps with any of those yet, as so far it seems more likely to spit out incorrect-to-sort-of-okay code.
I'm on year two of my journey to become a game developer. I am focusing on programming, and if I realize I don't want to work with games in the future, at least I'll have solid knowledge of how to deal with a broad range of software problems.
I don't rely on ChatGPT to write any code for me, but I do use it whenever there's a concept I don't understand, or when a problem is blocking me from moving forward in my many programming books. Having a tireless teacher/mentor with extensive knowledge who doesn't care how stupid a question you ask is a very good tool for learning.
ChatGPT could also empower individuals who may not have considered learning programming to create practical programs that enhance their daily routines, similar to how VB6 significantly reduced the barrier to entry for developing various applications.
I don't think it will help people learn as much as writing everything yourself does. Sure, it will get them productive 10x faster, as a shortcut. But you can only get so good at riding a bicycle by watching others ride a bike. Remember that caching issue that cost you hours of debugging? You don't forget that. Maybe the generated code doesn't produce a bug this time, but what about the 5th or 6th time? It's hard enough to debug code you wrote yourself sometimes, let alone generated code.
I don't think this is good news, it's at best not-bad news. We're already inundated with content, most of it copycatting other content, and it's replacing meaningful things, like archetypal stories and philosophy. Look at how people describe Marvel movies and Star Wars shows - it's content, not stories. Scaling the ability to just repeat things without an understanding of those things will at best just add more content. It will likely contribute to the further creation of complex systems people don't understand and can't control.
"Um, I'll tell you the problem with the scientific power that you're, that you're using here. It didn't require any discipline to attain it. Ya know, you read what others had done, and you, and you took took the next step. You didn't earn the knowledge for yourselves, so you don't take any responsibility for it. You stood on the shoulders of geniuses, uh, to accomplish something as fast as you could..." (Emphasis mine).
But the awesome part is that you can ask it to explain parts and it usually does pretty well.
I was using it for debugging WINE - where I have no DirectDraw4 / DirectX6 experience, and it's much nicer than trying to trawl through ancient MS documentation myself.
I don't like the use of the word "democratize" in this context, there's no secret authority of programmers preventing people from learning. You need only a computer, which is practically a commodity in first world countries; the users really have all the power and resources they could ever possibly need. I don't see how it could get more democratic at all.
The reason people can't code isn't because an external force is preventing them, it's because they aren't putting the work in.
I don't know how to paint, is the solution to "democratize" painting by waiting until an AI can do it for me?
I use GPT-4 for helping with my code all the time and I still get that feeling, because once the code base is sufficiently complex there's no avoiding the logic puzzles.
Maybe one day it can look at your entire code base and just fix it / write a new feature, but it's definitely not there yet.
The models AI researchers develop are cutting-edge and mathematically involved, the kind of thing I would expect an LLM to fare poorly at. And today's researchers are perfectly capable of implementing their models in PyTorch and the like.
First of all, good luck and the implementation looks pretty slick.
I relate to the other comments that (1) an explosion of AI-generated blogs/copy etc. is not what we need, and it's hard to see its long-term value, and (2) this looks like a simple layer on top of GPT-4 with no real IP/innovation, which is risky from a business-model perspective.
I'm _extremely_ surprised to see all of the YC companies that are essentially just layers on OpenAI.
I think it's safe to assume more models will be introduced and vendor lock-in can be avoided, but I find it hard to believe some of the "simpler" ideas can create compelling, VC-scale businesses.
I think it’s both correct and realistic to assume that most ideas won’t reach venture scale. After all, the vast majority of startups fail. However, I’d like to point out that any software product is by definition a layer on top of something else, and it will always start small. The key question is whether that “layer” is useful and whether it can keep getting increasingly useful over time.
While integrating with OpenAI is just an API call away, I don't think that takes away from the user experience. Sure, it's less of a moat, but plenty of products have won by having the best user experience rather than the most features (Notion, DigitalOcean, etc.).
Not every experience is going to be best suited to fit in a chat box like ChatGPT which opens the door for startups like this one to build something new.
I'm excited when I see products like this, and I think we really need to retire the critique of "it's just an API call to OpenAI". While yes, that is a core part of it, there is a lot of time and effort that goes into developing these experiences, and that has value.
I suppose it's inevitable that the first uses are going to be the trivial ones. They'll be first to market.
I'm sure they imagine that when they figure out something actually worth doing, they'll already have a user base and revenue stream and reputation. If there's somebody out there doing something more innovative, they'll either buy them out or reproduce their idea in-house.
Me, I'm skeptical that there's a "there" here. But that's why somebody else is getting rich and I'm not.
Why are you surprised? OpenAI is the pet project of Sam Altman, YC's last leader. Of course YC is highly incentivized to produce more customers for OpenAI.
Well, there's this thing called friendship. You see, executives at VC firms and the like are typically pretty friendly with each other, and view their fellow executives' success as potentially pulling them up the ladder. AI is the next big thing, and Sam Altman and his company are in a nice spot to profit. Moreover, YC Research is an investor in OpenAI.
Thus, because the executives want Altman to succeed (presumably to advance their own careers), and because they're in charge of YC, which is an investor in OpenAI, they have every incentive to fund startups that use OpenAI as their main platform.
This hypothetical incentive structure seems tenuous at best and not the open and shut case it sounded like in your original comment. I’m sorry but I don’t buy it.
So the YC Partners are actively investing in worse companies that will degrade their returns so that OpenAI/Microsoft can add new customers that do almost no volume compared to ChatGPT or Bing?
There's more to businesses than simply the underlying tech. If technical superiority were the sole determinant of business success, the tech landscape would look a lot different today.
Rather, YC is banking on some of these use cases getting people hooked, causing vendor lock-in to these new apps, which benefits YC both by benefiting the companies it funds and by benefiting OpenAI.
I don't share the sentiment that companies that are 'just' layers over OpenAI are incapable of building moats.
Thanks! I do agree with #1, and I think we need to make that clearer in our messaging. For context, I'll re-share one of my replies to a related comment:
> The thing that excites me most about generative AI isn't "more," it's better. I often use Type to write satire and now whenever I hit a block, I don't tab over to Twitter – I have Type generate some ideas. Often, I don't use them as-is but they do inspire a new angle I hadn't thought of.
On #2, I think there's some truth to that today. But our belief is that over time, these products will evolve into something more advanced and useful. A product like Type, for example, won't really look like Google Docs + AI in 3 years; it will start to feel like a more novel category of tool. We'll see, though!
That's not what vaporware is. A product that just launched on a technology that has only been available for a few months isn't vaporware simply because it doesn't yet achieve its full roadmap goals.
In 3 years it will also come complete with fake images, diagrams, photos, sound bites and video clips.
We'll all be locked into walled gardens (figuratively), since anything out on the wild/open/unverified internet will just be fake. This is what could rescue traditional media, if they play their cards right: providing genuine content by authenticated, verified-human authors.
I imagine the Luddites were themselves angry about fake work and fake souls. We did okay though.
Humans will cope [1]. We are resilient as fuck. We once had to fend off lions and bacterial infections and getting throttled in the night. Now we worry about lattes, stock prices, and political hullabaloo. We'll be fine.
[1] (Not just cope. I'm willing to bet that it'll be better than everything that came before.)
> this looks like a simple usage on top of GPT4, no real IP / innovation
You can generate the first draft in the ChatGPT (GPT-4) window, then ask it to add a section or expand a list, and it works pretty well in the chat interface. So it's not really hard to write articles with the bare ChatGPT UI.
To each their own :). I do think it's a big space and plenty of folks will find ChatGPT helpful enough to not warrant a more vertical / specific solution like ours.
The explosion of blogs and cheap copy was already rampant, but it's true that it is now magnified. On the other hand, do you consider niche content[0] a net negative, when these tools let fewer writers, or writers with fewer hours, create content for perhaps untapped niches? The newsletter linked below is not something I would have been able to do, or justify doing, before ChatGPT. Can niche use cases make up for the explosion of samey stuff?