A simple form - for a pre-post setup - would be:
<!-- your post / letter / email -->
----
What is the profile of the above author?
Focus on:
- Key characteristics
- Sentiment / state of mind
- Potential affiliations
For each profiled attribute, briefly explain what the analysis is based upon.

Notes:
- It doesn't really work well on tweets or other very short-form writing, but you can feed it context; just make sure it doesn't analyze things that others wrote. I use this on (toxic) GitHub comment threads on an Issue/PR, though I have not yet found the optimal representation for putting deep comment trees into prompts.
- The focus list depends on your personal situation and sometimes even on your post (fwiw the above isn't my own set, but I could use this: if a bot would flag me as an unstable potential member of a militant group, I would probably want to change my post).
- Models: Grok is really good at this, and so are the last two Kimi iterations; those seem to be trained to give much more straightforward answers than the usual suspects. Qwen is acceptable at times but even in the latest versions carries a relatively high hallucination risk. Claude and GPT not so much - these tend to downplay / be too PC to give a straight answer, and you'll need to read between the lines once more, which was the thing you were trying to outsource. If you need a private option, PPQ offers an enclaved Kimi K2.6 (private/kimi-k2-6); it's pretty good in cost/benefit if you think you need this.
- Ideally you persist everything. I have made a little framework for myself where I get structured output (JSON) to make it easier to store, and I version my posts. I put this in a "small" private data lake that is searchable with a local LLM; i.e. "What did I say about xyz in the past?"
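A minimal sketch of what such a persistence layer could look like. To be clear, this is my own illustration, not the author's actual framework: the filename, the JSON shape (`finding`/`rationale` per attribute), and the function names are all assumptions. The search here is a naive substring scan; the "local LLM on top" part would replace or wrap it.

```python
import json
import time
from pathlib import Path

# Hypothetical local store; one JSON record per line, append-only.
STORE = Path("profile_lake.jsonl")

def build_prompt(post: str, focus: list[str]) -> str:
    """Wrap a post in the pre-post profiling prompt, asking for JSON output."""
    bullets = "\n".join(f"- {item}" for item in focus)
    return (
        f"{post}\n----\n"
        "What is the profile of the above author?\n"
        f"Focus on:\n{bullets}\n"
        "For each profiled attribute, briefly explain what the analysis is based upon.\n"
        "Respond with a single JSON object keyed by attribute, each value an object "
        "with 'finding' and 'rationale' fields."
    )

def persist(post_id: str, version: int, profile: dict) -> None:
    """Append one versioned analysis record to the local data lake."""
    record = {"post_id": post_id, "version": version, "ts": time.time(), "profile": profile}
    with STORE.open("a") as f:
        f.write(json.dumps(record) + "\n")

def search(term: str) -> list[dict]:
    """Naive substring search over stored reports; a local LLM could sit on top of this."""
    if not STORE.exists():
        return []
    return [
        rec
        for rec in (json.loads(line) for line in STORE.read_text().splitlines())
        if term.lower() in json.dumps(rec["profile"]).lower()
    ]
```

Append-only JSONL keeps versioning trivial (every re-analysis of an edited post is just another record with a bumped version), and it stays grep-able even without any tooling on top.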
With this, aren't you losing your original expression and letting the AI be both the reviewer and the author of a final product without authorship? From what I understand, when you run a post through this filter, it rewrites it or points out things that could help build a profile of you - but if it's going into a public environment anyway, why lose that authorship, that essence?
I'm not sure that I grasp what you mean exactly.
There's no rewriting going on in that prompt, only analysis. What is done with the analysis is up to the author, and you can ignore it, just like you can ignore the output from a human reviewer. Sometimes you may want the "wrong" message to go out and at other times, maybe not. Awareness is the key thing here, not the rewrite; I rarely rewrite.
If you mean that this implies that sometimes a pure emotion gets cloaked after it got flagged up and some edits were done (by the author), then yes, I think that's true, though I don't think that's by definition a bad thing. Being an open book is dangerous in a world where every byte is retained forever and can and will be used against you by the next guys the masses vote into power over the surveillance apparatus - I think it would be naive to believe you have some qualifying trait that protects you from all your potential future overlords.
Storing the report itself doesn't make it a final product either. It's data, not information. Information would be the report you run over all the feedback you've received in the past month: check for trends, and think about what you're feeling and why. Run a little self-assessment. Even then, isn't the final product whatever you decide to do with the signal rather than the LLM output itself?
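A toy sketch of that data-to-information step: rolling a month of stored reports up into a trend count. The record shape (a `profile` dict with a `sentiment.finding` field) is my assumption for illustration, not the author's actual schema.

```python
from collections import Counter

def sentiment_trend(records: list[dict]) -> Counter:
    """Tally sentiment findings across a batch of stored profile reports."""
    counts = Counter()
    for rec in records:
        finding = rec.get("profile", {}).get("sentiment", {}).get("finding")
        if finding:
            counts[finding] += 1
    return counts

# Illustrative records only; the field names are assumptions, not a real schema.
month = [
    {"profile": {"sentiment": {"finding": "agitated"}}},
    {"profile": {"sentiment": {"finding": "agitated"}}},
    {"profile": {"sentiment": {"finding": "calm"}}},
]
```

The output of `sentiment_trend(month)` is just a tally; the self-assessment - noticing that two of three posts read as agitated and asking why - is the part the LLM can't do for you.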
If you mean that this implies that sometimes a pure emotion gets cloaked after it got flagged up and some edits were done (by the author)
that is a little bit what I meant
There's no rewriting going on in that prompt, only analysis
ty for clarifying
Even then, isn't the final product whatever you decide to do with the signal rather than the LLM output itself?
I agree, but I also think that if you polish or change your ongoing approach based on an LLM's judgement, programmed according to your will, you'll need to keep checking that it's doing its job properly and not raising false flags about your emotions.
I agree, but I also think that if you polish or change your ongoing approach based on an LLM's judgement, programmed according to your will, you'll need to keep checking that it's doing its job properly and not raising false flags about your emotions.
Never take LLM stuff at face value. That's why that last line in the example prompt above is important: you want to know the rationale, so that you can call bullshit. For example, the thing I joked about in that parallel comment about Grok's analysis that ~AI is an advanced LLM usage community is of course total bs. The bottom line is that nothing from an LLM can ever be valuable if it wasn't reviewed. The action lending value to LLM output is someone actually reading it and doing something with it. Like Schrödinger's cat, but for an attribute of value.
Also, you can still be a savage mf without being profiled like an enemy of the future state, I think, because that's generally what I do. Whenever I edit, I mostly just make it more direct. No one needs me to be PC, the rest of the world can take that role and hate my guts.
I understand your worry though; I hesitated to bring this up because I don't want to give people the idea that you should trust LLMs. The profile builders do though, because it's the only means they have to truly dragnet surveil - they cannot read everything I ever wrote with human eyes unless I'm directly targeted, but they can read the digest, and do the targeting based off the generated profile. [1]
Those types of implementations are threats, including the corporate kind that wants to profile you to sell you ads or YT recommendations, and then magically this profile ends up in the hands of guys that control other guys that ride around with guns to fuck you up. The only thing that changes based on who is in charge is who gets fucked up; there's always someone getting fucked. Divide and conquer.
So no, you may not always get the pure emotions on the interwebz. If you want that, let's have a rum & coke in a rum shop off the road in the middle of nowhere on some island in the Carib that isn't seeded with spooks (getting harder to find these places). I'll tell you how I really feel.
How do we think the govt is processing all these social media profiles you have to pass on a US entry form? They just feed it into GPT embeddings and the Claude integration maintained by the company we shall not name, and they already have the data. They likely already have the base profile too and just need to process some recent stuff to see if anything has changed. That's how I'd build such a system (and I used to build massive systems for a very long time, tho not for surveillance). Bottom line: you don't want to crunch massive data just in time; you want to crunch most of it as it comes in. What you do want is to check for change indicators and then re-process if something is off. ↩
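The change-indicator idea above can be sketched cheaply. This is purely illustrative: a real pipeline would use an actual embedding model, while here a bag-of-words vector with cosine similarity stands in for it, and the threshold value is an arbitrary placeholder. The point is the shape of the check - compare a cheap fingerprint of recent activity against the stored baseline, and only trigger the expensive full re-profiling when drift exceeds a threshold.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real system would call an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def needs_reprocessing(baseline: str, recent: str, threshold: float = 0.5) -> bool:
    """Cheap change indicator: only rerun the full analysis when drift is large."""
    return cosine(embed(baseline), embed(recent)) < threshold
```

This is why ingest-time crunching matters: the baseline vector is precomputed once, so the just-in-time work at the border is a single similarity check, not a full re-read of everything you ever wrote.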
Could you give an example of a profiling prompt? I guess I can guess at it, but I'd be grateful to have a sense of what you consider a good profiling prompt.