Ah, I actually mean it literally, as in I suspect companies are already trying to inject training biases into the LLMs by various means. I have no evidence to back it up, of course, but I wouldn't be surprised if it's already happening.
I noticed that some are, yes.
The funniest thing I saw the other day on LMArena: Claude always recommends Claude, and even tries to work around cost by telling you not to use Claude when it isn't needed (pretty awesome actually). Grok tries to recommend "the best" based on what it finds online (often Claude, lol, though I've had it tell me to use GPT too). I've had GPT recommend GPT a couple of times.
One went a bit like this:
"How do I efficiently compare word order, insertions and removals in sentences?"
- Claude: Use `difflib.SequenceMatcher`, because LLMs are expensive.
- Grok: Use Claude, it's the best.
Haha
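For what it's worth, a minimal sketch of the `difflib.SequenceMatcher` approach Claude suggested, comparing at the word level (the example sentences are just made up for illustration):

```python
import difflib

def compare_sentences(a: str, b: str) -> None:
    """Print word-level insertions, deletions and replacements between two sentences."""
    a_words, b_words = a.split(), b.split()
    matcher = difflib.SequenceMatcher(a=a_words, b=b_words)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag != "equal":
            print(f"{tag}: {a_words[i1:i2]} -> {b_words[j1:j2]}")

compare_sentences("the quick brown fox jumps", "the brown quick fox leaps")
```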
Future?