From what I can see, GPT-4 is incredibly useful, but you have to be very careful to lead it in the correct direction; otherwise it produces bad outputs. I say this as somebody who probably uses it every 10 minutes. Do you think that it will stop making errors of judgment and be able to behave as an agent soon?
I've not encountered anything yet that gives me faith in that, other than hype about it increasing in ability exponentially which seems unlikely.
> I say this as somebody that probably uses it every 10 minutes
Do you feel like your ability to produce correct output with GPT improved with practice? If so, then eventually you might reach a point where you get 99.99% accuracy with it, and this can easily 10x your productivity.
No, that's not really what has happened. If you ask a question, it tends to mean you lack knowledge about something, and therefore you don't have enough context to produce a question that perfectly teases out the correct answer from GPT. Instead, you have to carefully engage in conversation, iterating to improve GPT's context and your own understanding of what it is saying, until you can ask more incisive questions or correct obvious inconsistencies in what it has communicated.