AdForum interviewed VSA Chief Growth Officer Ariadna Navarro to get her thoughts on the future of AI and creativity, and how companies can set ethical, responsible guardrails around the technology. Check out an excerpt below.
It’s a little of both right now. We encourage everyone to use it, but with guardrails and guidelines to keep the work honest, human, and original. At the moment, AI is best suited to things like exploration, evaluation, and experimentation. It can reliably accelerate existing processes—from research and analysis to idea generation and content creation—but it’s equally susceptible to misinformation, redundancies, and both legal and ethical issues that we’re only beginning to understand.
In other words, AI is an exciting new option in our toolkit, but it's nowhere near a replacement for any of the ways we work yet.
The accessibility factor is core to AI’s success. It’s incredibly rare for a tool to possess both the low barriers to entry and the near-infinite possibilities that AI represents. The open format and widespread availability of this generation’s AI tools have given them access to an unheard-of volume of perspectives, permutations, and information that’s driving their rapid evolution, but that same openness also comes with increasing risk.
While the exponential growth and innovation of this nascent phase are exciting, they’re also why we can’t afford to lose any more ground in understanding and safeguarding against the dangers and threats the technology could pose.