When I talk to other people, it's often about more than just exchanging information: it's about addressing feelings, needs and interpersonal dynamics. But what does that look like when an AI like ChatGPT becomes a dialog partner, or when I use AI to prepare my answers? Can it react in a more “human” way, even though it has no feelings or needs itself? Can it even set my own feelings aside in favor of those of the person I'm talking to, and still respond to both of our needs? An experiment I conducted shows how a bot's responses can be made to appear more “human.”

My experiment: bringing feelings and needs to the fore

In a dialog, people often intuitively respond to the feelings and basic needs of their counterpart. This interpersonal level has so far been largely absent from communication with AI. With a targeted approach, I wanted to change exactly that:

Identification of needs: Each request is analyzed for the underlying needs, be it trust, security, creativity or orientation.
Emotional language...
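To make the approach concrete, here is a minimal sketch of how such a needs-aware dialog could be wired up with the OpenAI Python SDK. The system prompt wording, the model name, and the helper function needs_aware_reply are my own illustrative assumptions, not the exact setup of the experiment.

```python
# Minimal sketch, assuming the OpenAI Python SDK (pip install openai)
# and an OPENAI_API_KEY in the environment. Prompt wording and model
# name are illustrative assumptions, not the experiment's exact setup.
from openai import OpenAI

client = OpenAI()

# System prompt that pushes feelings and needs to the fore: the model
# is asked to first identify the underlying need (trust, security,
# creativity, orientation, ...) and acknowledge it before answering.
SYSTEM_PROMPT = (
    "For every user message, first identify the underlying need "
    "(e.g. trust, security, creativity, orientation). Briefly "
    "acknowledge that feeling or need in empathetic language, "
    "then give your factual answer."
)

def needs_aware_reply(user_message: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any chat model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

print(needs_aware_reply(
    "I'm worried my team won't accept my new project plan."
))
```

With a setup like this, the need-identification step happens inside the model's instructions rather than in separate code, which keeps the pipeline simple while still surfacing the interpersonal level in every reply.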