How is Velo approaching AI? We’re proceeding with caution, and so should you!
History repeats itself. AI isn’t the first shift that’s forced agencies, media and marketers to adapt.
For instance, during the 1980s, the introduction of computers and accessible software created desktop publishing and transformed photo editing, replacing the manual laying out of paper and photos on a desktop and retouching with an actual airbrush. How we talk about it has changed too – what was ‘airbrushed’ became ‘Photoshopped’, symbolising the extent to which studios and artists had to adapt.
Before the (now infamous) Millennium, the internet was in its infancy as a marketing channel. The early years of ‘eCommerce’ and ‘new media design’ were a far cry from what they have become today. The launch of the iPad heralded a whole new phase of web development and design, and social media has changed how we act towards each other and communicate as people and marketers. Sadly, I am old enough to remember them all.
AI is just another round of change and not to be feared.
(And it won’t be the last whirlwind of transformation. Brace yourselves for the rise of AR and VR next year, with more innovations waiting in the wings.)
In this article, we will share how we’re adapting to AI as an agency, a team and as individuals.
There are four categories of artificial intelligence:
1. Reactive: The AI reacts only to the current scenario and cannot draw on taught or recalled data to make decisions in the present.
Probably the most famous example is IBM’s Deep Blue, the computer that beat the world chess champion in 1997. Other examples of reactive AI are Netflix’s recommendation engine and email spam filters.
2. Limited memory: This is where AI is at the time of writing. The model retains some information from previously observed events, builds knowledge using that memory, and then makes improved decisions with the help of past data. Examples include autonomous vehicles and machine-learning tools such as ChatGPT.
3. Theory of mind: This is the idea that you can have machines that acquire decision-making capabilities equal to that of humans, with the ability to recognise the mental state of others. This is a way off.
4. Self-aware: This is sentient, human-level intelligence, otherwise known as artificial general intelligence, or AGI. When it’s advanced beyond that, it’s called artificial superintelligence, or ASI. The scare stories in the media focus on this level.
Developing AI skills requires constant learning to keep up with changes and developments. It is no different from the early stages of the internet as a marketing channel – the only variation is that the speed of change is more rapid. This is why we’re committed to continuous learning and why we have a dedicated cross-agency Velo Labs team to monitor it constantly.
Will we offer new services?
Yes. We always evolve what we’re doing and have since the agency’s inception in 2010. For example, we’ll adjust our coding recommendations appropriately where and when AI affects search rankings. We adapt to provide the expertise that marketers targeting a niche need.
Does our use of AI mean we will be quicker to produce work?
While it can be a springboard for some tasks, it’s not the final answer – you still need a human to shape the input and the output. Its output can often be nonsensical or – at the very least – not fit for purpose. And it always requires checking.
Does that mean you will charge less?
Most of our work is strategic and creative, which makes AI primarily an assistant-style tool for the knowledge workers who do this type of work. We are always accurate about how long things take, so where we can work quicker, we’ll charge less.
Is that it?
No. We’re committed to watching, growing and experimenting with AI – and sharing our lessons. And we’ll evolve our approach within our framework as we learn. After all, many of the references in this document will be outdated in the time it took you to read it!
Our principles guide us as we learn about AI:
We must use AI honestly and responsibly, do the right thing, and be open about when and how we use it.
AI must always have a human at its heart. We’re currently identifying the gaps in talent and technology and upskilling or reskilling our team; this is also why we have a dedicated team to investigate, learn and share.
We’re responsible for the output of AI from concept to reality. Every time. Without fail.
We explore and acknowledge AI’s limitations so that we use it sensibly, appropriately, responsibly and intelligently. And we must keep a record of where we have used it to navigate future changes.
We’re letting our team experiment – with a feedback loop in place – to accelerate the company’s learning. But they do so with some guard rails:
1. Don’t use AI without visibility. We keep a log of every time AI is used in client work to ensure transparency, compliance and records, so we can trace what we have done if and when regulation arrives. It is the very definition of “proceed with caution”: if we use it for anything, we keep track of it.
2. Don’t use AI to create complete works. It is not as good as our experts. It is a launch pad for expertise, not a way of cutting corners. With issues such as copyright, we must be very careful about its use.
3. Do not trust the output. AI is only as good as the information it draws on, so we must question its sources. Understand how the large language model (LLM) works and what it does with the information it receives.
These guidelines are part of our company policy.
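In practice, the usage log behind the first guardrail can be as simple as an append-only record. Here is a minimal sketch in Python (the field names and file name are illustrative, not our actual schema):

```python
import csv
from datetime import datetime, timezone
from pathlib import Path

# Illustrative fields: what was used, on what, for what, and who checked it.
LOG_FIELDS = ["timestamp", "tool", "project", "purpose", "reviewed_by"]

def log_ai_usage(log_path, tool, project, purpose, reviewed_by):
    """Append one AI-usage record to a CSV log, writing a header if the file is new."""
    path = Path(log_path)
    is_new = not path.exists()
    with path.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=LOG_FIELDS)
        if is_new:
            writer.writeheader()
        writer.writerow({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "project": project,
            "purpose": purpose,
            "reviewed_by": reviewed_by,
        })

# Example: record that a transcript was summarised with an AI tool.
log_ai_usage("ai_usage_log.csv", "ChatGPT", "Client X launch",
             "summarise interview transcript", "J. Smith")
```

The point is the discipline, not the tooling: any shared spreadsheet that captures the same fields achieves the same traceability.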
To keep us focused, we are currently assessing five specific ways AI can help.
1. Running the business: Can an AI help the administration of a business? Enabling smart bank reconciliation in our accounts platform, Xero, is one example of how we are leveraging AI to perform certain admin tasks.
2. Building better relationships: Could AI minimise the admin tasks around the great client relationships we have? We’re looking at tools for transcription (Otter.ai) and meeting notes (Fireflies.ai), and are sharing what we learn with our clients.
3. Enhancing our creativity: We already know that AI will not replace strategic or creative ideas. It is a tool that acts as a springboard to accelerate progress and help articulate an idea. Tools like Midjourney for creating mood boards or Adcreative.ai for generating scamps when planning ads can aid the creative process (the outputs are not good enough on their own), but they don’t replace creatives.
4. Getting deeper insight from data: We’re using tools to interrogate data, build tables and graphs, and identify significant anomalies and patterns that aid the interpretation of qualitative research. ChatGPT’s ability to semantically analyse text helps here when prompted properly.
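As a simple illustration of the kind of anomaly-spotting this involves (the numbers below are made up), a basic z-score check flags values that sit unusually far from the average – the sort of groundwork an analyst would then interrogate:

```python
from statistics import mean, stdev

def flag_anomalies(values, threshold=2.0):
    """Return (index, value) pairs whose z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [(i, v) for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hypothetical weekly enquiry counts; the sixth week is the outlier.
weekly_enquiries = [42, 39, 45, 41, 44, 120, 40, 43]
print(flag_anomalies(weekly_enquiries))  # → [(5, 120)]
```

The flag is only the start: deciding whether that spike is a data error, a campaign effect or noise still needs a human.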
5. Weaving into our existing toolset: AI is becoming embedded in the tools we use or want to use, such as Generative Fill in Photoshop and within MarTech platforms. Tools like Otter.ai for transcription, translation aids (Weglot) or sub-editing assistance via ChatGPT all have a part to play, but only when appropriately checked and used with caution. We’re part of beta trials and actively provide feedback to develop these services.
As we build the skills we need as an agency, our team is experimenting, learning and sharing what they conclude with each other and our clients. We’ve concluded that AI can accelerate groundwork but always needs human skill – from constructing well-crafted prompts to appraising the results (particularly for accuracy) and turning them into something useful.
Our mantra is to ‘proceed with caution’, as there are areas that need careful consideration:
Copyright and IP are massive risks.
All AI models were trained on a dataset, and generative AI relies on this training data to produce its output. But who owns that output? Is it the owner of the data used to train the model, the platform itself, or you, who prompted it? This is not yet defined. Using generative AI to extend images is a particularly dangerous area – for example, you would not use a Getty image without consulting the licence. This will need regulation, with many ongoing lawsuits and different approaches in the UK, the EU and the US adding further complexity.
There is some light on this issue in tools such as Photoshop’s Generative Fill. You can use it confidently because it is trained on Adobe’s stock library, meaning that whatever you create won’t use anyone else’s work or intellectual property. Assuming you have a licence for Adobe Stock, it’s safe to use – and Adobe is so confident of this that if you do face a copyright claim, they’ll cover your legal bills.
We do not use generative AI to produce final artwork, and will only do so once this area is clarified or we can track, and be confident of, the licence management behind how an asset was generated.
Quality is not as good as it looks, so always look properly.
The reality is that many LLMs use sources that are not accurate and have been scraped from the internet. Wikipedia, for example, is known for inaccuracies, which are passed into the models. We’re seeing a few types of error to look out for: sentence contradiction (where it generates a sentence that contradicts the previous one), prompt contradiction (it doesn’t do what you ask) and factual contradiction (fictitious information presented as fact).
And then the last one is just bizarre – ‘hallucinations’, where the model has become confused and provides information with no pertinence to what was prompted.
Some generative tools create low-resolution areas in your image, distortions, accidental extra fingers on people or stray white lines. Transcription software is not immune from typos or complete misunderstandings. While these tools are clever and look good on the surface, you must be cautious when using them for more precise work, such as exhibition stands.
Everything always needs checking by someone who is experienced. Every time.
Confidential information needs to stay out.
With many tools – especially free versions – your prompts feed their LLM. This means that if you ask it to rewrite confidential data, you’ve permitted it to keep a copy of that data. It’s not clever, but it’s not obvious either. Confidential means confidential.
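One practical habit is to strip obvious identifiers before pasting anything into an AI tool. This is only a sketch (the patterns are illustrative, and no regex is a guarantee of confidentiality), but it shows the principle:

```python
import re

# Patterns for obvious identifiers; genuinely confidential material
# needs far more care than this, or should simply never be pasted in.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(text):
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()} REDACTED]", text)
    return text

note = "Contact jane.doe@client.com or +44 20 7946 0123 about the Q3 figures."
print(redact(note))
```

Even with redaction in place, the safest rule remains the one above: confidential means confidential.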
Generative AI is subject to inherent bias.
We have observed that image generation can show societal bias, particularly around diversity. It is something we’re passionate about, but it shows again how important an expert is in the process of evaluating the output.
AI is the latest technology to disrupt media and marketing. It won’t be the last. Time to proceed with caution.
For more details on our approach to AI, get in touch. For Velo clients, we’re providing workshops with a deeper dive to share our insights.