Or how well standard procedures were translated into actionable instruction sets. This is what I've found to work well in practice. It's also how OpenAI made Codex work: be verbose, be explicit, and don't leave out the process steps you perform automatically, i.e. spell out the cross-reference checks, the comparisons, the second opinion... all the things that help humans arrive at great results.
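To make that concrete, here's a hypothetical sketch of what spelling out those implicit steps might look like in an instruction file (the wording and step names are my own illustration, not OpenAI's actual prompts):

```markdown
## Review procedure (follow every step, in order)

1. Read the whole diff before commenting on any part of it.
2. Cross-reference check: for every renamed function, grep for
   remaining call sites and list each one you find.
3. Comparison: diff the new behavior against the documented behavior
   in README.md; flag any mismatch explicitly.
4. Second opinion: re-read your own findings once more and state
   which of them you are least confident about, and why.
5. Do not skip a step because it "obviously" doesn't apply --
   write one line explaining why it doesn't.
```

The point is that steps 2-4 are things an experienced human reviewer does without being told; the model only does them reliably if they're written down.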
Generally though, I must admit they are already much smarter than me.
Except for reptile-brain and social functions. An LLM isn't an entity (despite all the fakery/simulation, and the predatory CEOs claiming otherwise), so it has no fear of messing up, no reputation to lose, no liability.