
Trust in Artificial Intelligence: Why the Debate Matters

Author: Ralph Werner
Published: May 29, 2024


It’s remarkable how the conversation around trust in AI is unfolding, and how many facets it has.

Trust and Added Value

I constantly come across posts where people question the true value AI delivers in automation. Many users rave about the results they get from ChatGPT, yet most describe the experience like this: “ChatGPT gives me a solid first draft that I can build on.” Almost everyone uses the AI for suggestions, rarely for final texts, and almost never without a human sign-off. That’s because ChatGPT output often contains errors and weaknesses, as countless articles have already shown.

Automated Processes and Output Quality

Fully automated processes that rely solely on ChatGPT or other large language models (LLMs) are rare. When applications do so, the texts are usually “cautious” and template-like—more reminiscent of boilerplate than of creative writing. Skepticism about a machine’s ability to produce error-free copy remains high.

Our standards skyrocket when we interact with computers. If an AI generates something we asked for via prompt, we’re quick to spot mistakes. While we easily forgive our own typos, we expect a statistical model trained on millions of samples to be flawless.

A classic example: the German “ß.” A Swiss German user must explicitly tell the AI to use ss instead, because what’s correct for one audience is wrong for another.
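As a rough illustration of how such an audience rule can be enforced deterministically rather than hoped for in the prompt, a small post-processing step could adapt the same AI output per locale. The function name and locale codes below are illustrative assumptions, not part of any product described in the article:

```python
def localize_sharp_s(text: str, audience: str) -> str:
    """Replace the German sharp s ('ß') with 'ss' for Swiss readers.

    Swiss Standard German does not use 'ß'; 'ss' is correct there,
    while 'ß' remains correct for German and Austrian audiences.
    """
    if audience == "de-CH":  # hypothetical locale code for Swiss German
        return text.replace("ß", "ss")
    return text

# The same AI draft, adapted per audience:
draft = "Bitte grüßen Sie das Team."
print(localize_sharp_s(draft, "de-CH"))  # Bitte grüssen Sie das Team.
print(localize_sharp_s(draft, "de-DE"))  # Bitte grüßen Sie das Team.
```

A hard rule like this is preferable to a prompt instruction alone: what is correct for one audience is mechanically guaranteed, not left to the model.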

Challenges in Automating Text

Traditional cold-outreach workflows rely on templates to automate communication with leads, customers, and candidates. The template itself enforces quality, but the result feels impersonal and ignores the recipient. Such messages are seldom good; recipients are more annoyed than pleased.

Teams that deploy AI in automated flows understandably want to check quality. That often leads to every AI-generated message being reviewed and rewritten—burning the very time savings automation promised. How can you safeguard quality without losing automation’s benefits?

Quality Control and a Pragmatic Approach

In real life, some things must work 100 percent of the time; a car’s ABS and airbags undergo relentless testing. An AI-generated text is rarely life-critical. Most mistakes fall into the “unprofessional” bucket, and perfectionists in particular find it hard to trust AI here.

Wouldn’t it be wiser to spend work hours on genuine human-to-human interaction—where it truly matters—instead of hand-sending standardized messages? How much time are you willing to invest in checking AI copy? Should every single message be reviewed—or only spot-checked?

Borrowing from manufacturing, you can inspect a random sample and tweak the production parameters. For AI text, that means reviewing some outputs, refining your prompts, and repeating later. At Psychological AI, our clients have had strong results with this approach: continuous feedback and prompt adjustments steadily improve the model.
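The sampling idea above can be sketched in a few lines. This is a minimal illustration of acceptance-sampling applied to AI outputs, not code from Psychological AI; the function name and the 10 percent review rate are assumptions for the example:

```python
import random


def sample_for_review(outputs: list, rate: float = 0.1, seed=None) -> list:
    """Draw a random sample of AI-generated texts for manual review.

    Mirrors acceptance sampling in manufacturing: inspect a fraction
    of the output, then adjust the production parameters (here: the
    prompts), and repeat later.
    """
    rng = random.Random(seed)
    k = max(1, round(len(outputs) * rate))  # review at least one item
    return rng.sample(outputs, k)


# Usage: spot-check 10% of a batch instead of reviewing every message.
messages = [f"Outreach message {i}" for i in range(50)]
to_review = sample_for_review(messages, rate=0.1, seed=42)
print(len(to_review))  # 5
```

Reviewing five messages out of fifty, feeding the findings back into the prompts, and re-sampling the next batch preserves most of the time savings while still catching systematic weaknesses.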

How Psychological AI Optimizes AI Text

We address the generic weaknesses of ChatGPT output by iteratively refining every LLM-generated text. For each recipient we craft an optimal version and analyze its impact. Statistical analyses show these optimized texts do make a difference.

The Road Ahead

We’re at the start of an exciting journey. With ongoing improvements and smart workflows, we can boost trust in AI and significantly influence processes like active sourcing and lead generation. Speaking to every recipient in their own linguistic world is an exhilarating goal—and we’re getting closer, one iteration at a time.