Someone recently suggested (I think they read it in a recent industry paper) that prompting is actually less effective when using deep-think frontier models. Does anyone have data on this, or a similar understanding?
submitted by /u/Fluffy_Ad7392