When I talk to other people, it's often about more than just information: it's about addressing feelings, needs and interpersonal dynamics. But what does it look like when an AI like ChatGPT becomes a dialog partner? Or when I use AI to prepare answers? Can it react in a more "human" way, even though it has no feelings or needs of its own? Can it even set my own feelings aside in favor of my counterpart's, and still respond to both of our needs? An experiment I conducted shows how a bot's responses can appear more "human."

My experiment: bringing feelings and needs to the fore

In a dialog, people often intuitively respond to the feelings and basic needs of their counterpart. This interpersonal level has been largely absent from communication with AI so far. But with a targeted approach, I wanted to change exactly that (a sketch of one possible setup follows the list):

- Identification of needs: Each request is analyzed for the underlying needs, be it trust, security, creativity or orientation.
- Emotional language...
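To make the approach more concrete, here is a minimal sketch of how such a "needs-first" instruction could be wired into a chatbot using the OpenAI Python SDK. This is an illustration under my own assumptions, not the exact setup from the experiment: the system prompt wording, the model name and the helper function are placeholders.

```python
# Minimal sketch: ask the model to identify the user's underlying needs
# before answering, then respond in emotionally attuned language.
# Assumes the OpenAI Python SDK (v1.x) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Before answering, silently identify the underlying needs in the user's "
    "message (for example trust, security, creativity or orientation). "
    "Then respond in language that names the likely feeling, acknowledges the "
    "need, and only afterwards gives the factual answer."
)

def humanized_reply(user_message: str) -> str:
    """Return a reply that addresses feelings and needs before the facts."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat-capable model works
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": user_message},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(humanized_reply("I'm worried my project proposal will be rejected."))
```

The same effect can also be reached without any code, simply by placing an equivalent instruction at the start of a ChatGPT conversation or in a custom instruction field; the point is that the model is told to attend to needs first and facts second.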
What if writing a scientific paper became as effortless as conducting the research itself? Picture this: a group of researchers wraps up an insightful discussion, reviewing their experiment's methodology, findings, and implications. They leave the meeting energized by the intellectual exchange, and by the time they sit back at their desks, a polished draft of their paper awaits them in their inbox. With AI-powered tools, this isn’t science fiction anymore—it’s a rapidly approaching reality. I recently took this vision for a test drive, and the results left me astounded. Let me take you through how I turned a lecture into a streamlined research output, showcasing the power of AI to redefine the boundaries of research, teaching, and publication. Read the full essay on my LinkedIn profile.