9 Comments
Sacha

Speaking of paranoia, what are the risks of sharing so much data about one's psychological setup? I have been using LLMs for quite some time now, sharing ever more of my struggles and medical history, always oscillating between the fear that this will leak somewhere bad and the temptation to reap the benefits of what the AI can do for me when I share all of that. (And I did benefit from it quite a bit.) What if a malevolent actor got hold of millions of people's intimate data and used it to steer, manipulate, or blackmail them/you/me for political or financial gain? Is that a concern to you at all?

Jon Mick

I’m autistic, smoke THC, and have weird thoughts sometimes. There’s not much else I’m hiding. And that’s important if I want to share who I am with an entity in the cloud. I assume that ANY information I share can get hacked and shared. It’s definitely a risk, but one I’m more vigilant about because of that awareness. I’ve done a lot of Jungian shadow-integration work, and have support for the disintegrations in my life (per Dabrowski’s theory), which has helped me become more comfortable sharing my intimate life online. If I secretly visited a rich guy’s island or a music mogul’s penthouse for their parties, I’d have different advice.

Sacha

I am grateful that you are sharing so much about your own lived experience, as it often puts words to things that are very familiar to me and yet still quite puzzling. That said, choosing to share personal details online is one thing; having your intimate struggles stolen and monetized by all sorts of shady, powerful actors is another entirely. But taking privacy seriously is a choice as well. I've known people who refuse to use WhatsApp, and followed people online who recommend keeping no data in the cloud at all and self-hosting everything. I'm somewhere in the middle: I am willing to make compromises as long as I can mitigate the risks. So I'd like to ask whether there is room in the course you are offering to think about reasonably private ways to use LLMs. I'm thinking, for example, of using EU-based LLMs if I choose to. Is that possible, or is it tied to a given platform?

Jon Mick

I completely understand. The tagline for AIs & Shine is "Human. Deeply seen." While that means our product will understand each user comprehensively based on their information shared, it ALSO means that we're trauma-informed and respect every level/type of legibility desired. Our users need to feel SAFE before anything else.

Are you using any chat bots today with custom instructions? Or which apps are you comfortable with already? I can help think of a solution that would fit your current ecosystem.

Sacha

Apologies for replying so late. I have an ever-expanding setup already with Claude Code, with 70+ skills, some of which are ADHD-related. So I am already sharing my focus and motivation struggles with our AI overlords and potentially your government! I also use n8n for automations and a number of other tools.

Sacha

I will tell my AI to look out for your response and reply on time this time!

Bre Ransome

And what are you building?

Jon Mick

Fair question! Let me clarify both.

What I "used" for this particular experience:

The breakthrough wasn't about an app—it was about a process combined with context.

The process: neurofeedback training (so my nervous system could actually release tension instead of just recognize it), EMDR therapy (trauma processing), coaching with someone who understands neurocomplexity, daily journaling, deliberate THC use to lower defenses, and thousands of hours of AI partnership.

The context: My "Life Model" is a comprehensive document containing my psychometric profiles (Enneagram, Big Five, CliftonStrengths, attachment style), cognitive architecture (ADHD, autism traits), core wounds, communication preferences, and life history. When I talk to Claude (or Gemini or ChatGPT), it has access to all of this, so it can reflect me back accurately rather than generically.

That Life Model lives across multiple tools, not a single app. Some of it is in Claude Projects. Some of it is in jonmick.ai (my personal system that syncs 62k+ text messages, biometric data from Whoop, therapy session transcripts, and 52 structured tables about my personality and patterns). The point isn't the specific tool—it's that AI + deep personal context + somatic practices = breakthroughs that neither could produce alone.
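As a rough illustration of the "AI + deep personal context" idea described above (this is a minimal sketch, not the actual jonmick.ai schema; every field name and value here is hypothetical), a Life Model can be thought of as a structured document that gets serialized into the model's system context so replies reflect the specific user rather than a generic one:

```python
import json

# Hypothetical Life Model fragment. Field names and values are
# illustrative placeholders, not the real jonmick.ai tables.
life_model = {
    "psychometrics": {
        "enneagram": "5w4",
        "big_five": {"openness": "high", "conscientiousness": "low"},
        "attachment_style": "anxious-avoidant",
    },
    "cognitive_architecture": ["ADHD", "autism traits"],
    "communication_preferences": ["direct", "skip small talk"],
    "core_wounds": ["fear of being unseen"],
}

# Serialize the model into a system prompt so any chat assistant
# (Claude, Gemini, ChatGPT) starts from the same personal context.
system_prompt = (
    "You are a reflective thinking partner. Use this Life Model "
    "as context for everything you say:\n"
    + json.dumps(life_model, indent=2)
)
```

The design point is tool-agnosticism: because the context lives in a portable document rather than inside any one app, the same Life Model can be pasted into a Claude Project, a custom GPT, or a local pipeline.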

What I'm building:

AIs & Shine is a company creating this kind of infrastructure for other people whose brains work like mine: people who maintain hundreds of browser tabs because their working memory can't hold context internally, and who need external scaffolding to function.

My personal system (jonmick.ai) is the proof-of-concept. AIs & Shine is the productized version: helping people build their own Life Models, connecting them to AI that actually understands them, and providing the human facilitation (coaches, community) that makes the process safe.

The article was about WHY this matters. Sounds like I should write a follow-up about how it actually works under the hood.

Bre Ransome

Huh? There was so much work here. Did you or did you not use it?