Although we see intuitiveness as key to breaking into the mainstream, AI companies shouldn’t turn into black boxes just to reduce friction. In fact, we think they should aim to be as transparent as possible about their inner workings, for two reasons: trust and creative control.
Trust, we expect, will be a prerequisite for any company dabbling in AI-enabled creativity. After all, professionals are understandably wary of any technology they suspect could ultimately make them obsolete. Specialized tools should aim to augment creators, not replace them, and their makers should state that intention clearly if they want to attract customers in the first place.
Then comes creative control. On that front, advanced users are likely to want to dig into the nuts and bolts of the technologies they use day to day, so they can tailor their AI’s output to their specific needs.
Accordingly, while automation is desirable, we think AI-enabled content creation should maintain human-in-the-loop capabilities, whereby a human operator can verify the adequacy of an AI’s work and course-correct it where needed. For example, Promethan’s users can override a semantic connection between two assets if they consider it irrelevant. In turn, empowering users with this kind of granular oversight, rather than one-size-fits-all output, should foster greater trust as well.
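To make the idea concrete, here is a minimal sketch of what such a human-in-the-loop override might look like in code. The data model and names are entirely hypothetical illustrations of the pattern, not Promethan’s actual implementation: the AI proposes links between assets, and a human operator can veto any of them before they are surfaced.

```python
from dataclasses import dataclass


@dataclass
class SemanticLink:
    """An AI-suggested connection between two content assets."""
    source: str
    target: str
    score: float            # model confidence in the connection
    overridden: bool = False  # set when a human rejects the link


class LinkStore:
    """Holds AI-suggested links and lets a human operator veto any of them."""

    def __init__(self) -> None:
        self._links: list[SemanticLink] = []

    def suggest(self, source: str, target: str, score: float) -> SemanticLink:
        # The AI side: record a proposed semantic connection.
        link = SemanticLink(source, target, score)
        self._links.append(link)
        return link

    def override(self, source: str, target: str) -> None:
        # The human side: mark a suggestion as irrelevant without deleting it,
        # so the decision stays auditable.
        for link in self._links:
            if link.source == source and link.target == target:
                link.overridden = True

    def active_links(self) -> list[SemanticLink]:
        # Only links the human has not vetoed are surfaced to end users.
        return [link for link in self._links if not link.overridden]


store = LinkStore()
store.suggest("photo_001", "article_draft", score=0.91)
store.suggest("photo_001", "old_invoice", score=0.42)
store.override("photo_001", "old_invoice")  # operator deems it irrelevant
print([(link.source, link.target) for link in store.active_links()])
# → [('photo_001', 'article_draft')]
```

The key design choice is that an override flips a flag rather than deleting the record, preserving both the model’s suggestion and the human’s decision, which is precisely the kind of granular, inspectable oversight the paragraph above argues for.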