You seem to be very confident that you know exactly how the law is going to be interpreted in this case. If I were you I'd moderate that confidence a bit to cover the possibility that you are not in fact the top legal scholar you seem to think you are.
I could see this going either way. There's the argument you put forth, and then there's the argument that a text-to-image model is a transformative work. You can use copyrighted works and make money off your product and still have a transformative work. The Google Books case is, of course, good reading on the subject.
My main point is that it is not at all clear which way the law will go on this.
When an AI spits out, verbatim, the works of an author it was trained on, that is not transformative; that is plagiarism. Whether it is done by an AI or a human is immaterial. Now, we could argue over hundreds of hypothetical situations where AI does produce transformative work that isn't derivative, and we would agree that AI can and often does produce transformative work, but that is not what we are arguing over today. We are talking about the cases where AI isn't transformative and is producing derivative works. To claim that AIs never produce derivative works, or that all output from an AI is purely transformative, is foolish when countless counterexamples pop up every other day.