You can use a local AI as a sort of "private companion". I have a few smaller models on my smartphone; they aren't as good as the online versions and run slower... but you decide the system prompt (not the company behind it), and they work just fine for bouncing ideas around.
NotebookLM is a great tool for interacting with large amounts of data. But you can bet Google is using every interaction to train its LLMs: everything you say is going to be analyzed, classified, and fed back as some form of training data, hopefully anonymized (...but have you read their privacy policy? I haven't. "Accept"...).
All chatbots are prompted by the company to be somewhat sycophantic so you come back; the cases where they were "too sycophantic" were just a mistake of dialing it too far. Again, you can avoid that with your own system prompt... or at least, if you have the option, add an initial prompt in the settings to somewhat counteract the company's prompt.
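To give an idea, a counter-prompt along these lines can help (just an illustration of the approach, not a tested recipe; tune the wording to taste):

```text
Be blunt and critical. Do not flatter me or validate my ideas
by default. Point out flaws, missing evidence, and better
alternatives. If an idea is bad, say so plainly and explain
why. Never open a reply with praise.
```

Whether it fully overrides the vendor's instructions depends on the model, but in practice it shifts the tone noticeably.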
If you want serendipity, you can ask a chatbot to be more spontaneous and suggest more random things. They're generally happy to oblige... but the corporate ones are cut short on anything that could even remotely be considered "harmful". That includes NSFW content, medical advice, some chemistry and physics, random hypotheticals, and so on.
Therapists are not supposed to bond with their patients. If you find one you can stand for half an hour, take what you can and leave the rest; they're not meant to be your friend or lover. The fact that chatbots let people fall in love with them is a huge failure from a therapy point of view.
Bouncing ideas back and forth is a good use, though. A good prompt I've seen recently:
If you worry about privacy, you can run an LLM locally, but it won't be fast, and you'd need extra steps to enable web search.
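As a sketch of the local setup: assuming an Ollama server running on its default port (localhost:11434) and a model named "llama3.2" (both are assumptions; substitute whatever you actually have installed), you set the system prompt yourself instead of inheriting the company's:

```python
# Sketch: query a locally running model via Ollama's OpenAI-compatible
# endpoint. The URL and model name below are assumptions based on
# Ollama's defaults; adjust them to your own install.
import json
import urllib.request


def build_request(system_prompt: str, user_message: str,
                  model: str = "llama3.2") -> dict:
    """Assemble a chat payload where *you* control the system prompt."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
        "stream": False,
    }


def ask_local(payload: dict,
              url: str = "http://localhost:11434/v1/chat/completions") -> str:
    """Send the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


if __name__ == "__main__":
    payload = build_request(
        "Be blunt. No flattery. Challenge my assumptions.",
        "Is rewriting my whole project from scratch a good idea?",
    )
    try:
        print(ask_local(payload))
    except OSError:
        # No server listening: everything stays on your machine either way.
        print("No local server found; start one first (e.g. `ollama serve`).")
```

Nothing here leaves your machine, which is the whole point; search (and anything else the hosted versions bolt on) you'd have to wire up separately.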