Within the span of a year, generative AI has evolved from a mere buzzword to a transformative technology that almost every industry wants to get behind.

Especially in the marketing domain, AI has made a profound and tangible impact, reshaping how we approach our daily marketing operations. While previously AI was mostly adopted for automation, now it’s continuously being leveraged to augment human creativity and decision-making.  

These advancements aren’t just theoretical. According to Gartner, 63% of global marketing leaders are planning to invest in generative AI. However, this growing adoption of genAI brings new concerns and challenges for marketing organisations, specifically around ethics and security.

Addressing these challenges will be crucial in the effective implementation of AI going forward. Businesses must have robust strategies and policies in place that ensure that AI is not just being implemented for the sake of technology but for tangible, measurable business outcomes. Let’s have an in-depth look at these challenges and how marketing organisations can effectively get ahead of them.  

The complexities of AI tools

It’s astonishing to see how far genAI technology has come in such a short time, in terms of capabilities and features. However, this rapid innovation is already creating confusion among marketing leaders. What tools should we use? Which ones provide the best ROI?

With Google, Microsoft, OpenAI, Anthropic, and many other independent developers building their own AI technology, organisations are likely to scratch their heads at the number of options available. Navigating this complexity requires specific ‘need recognition’. For marketing leaders, it is imperative to understand which areas of your business need AI implementation. Is it content production, design, administration, social media, predictive analytics, customer management, or another segment? It’s important to answer this question internally before writing the first cheque for a subscription.

Organisations also need to understand their current workforce’s capability in using AI tools. It’s true that genAI tools are not as technical as most other technologies we tend to use within a business environment, but there’s still a learning curve. We’ve seen on several occasions that using tools like ChatGPT to their fullest potential requires users who have a certain level of knowledge and understanding of natural language processing (NLP). In fact, a recent study from Udemy shows that ChatGPT is classified as a tech skill by most businesses. So, understanding your workforce’s capabilities around genAI, and developing training programmes to upskill employees on this latest technology, is critical.

The phenomenon of ‘AI Shame’

We recently collaborated with world-leading media intelligence provider Meltwater on a webinar with hundreds of attendees keen to know more about infusing AI into their marketing & PR workflows. One of the consistent questions we came across from marketing professionals was on how to overcome ‘AI shame.’  

Marketers are wary of the fine line between leveraging AI for efficiency and losing the human touch that resonates with audiences. There is a persistent concern that using AI technology might make them look less authentic and diminish their brand ethos, not to mention the sense that using it in their daily craft makes them less of a professional.

There is a really important point here: by using AI, marketers are not handing over creativity or decision-making to a bot. Using AI is like using Microsoft Office over writing something by hand. It is a tremendous tool that makes it easier to draft, edit, and share written content. GenAI is not replacing people, but making it easier for us to create high-quality work.

AI shame shouldn’t get in the way of organisations leveraging the latest technology. AI adoption will continue to grow, not just in marketing but across the board. So, if you’re not leveraging AI tools, your competition will – and it’ll ultimately impact your competitiveness in the market.  

So, marketing professionals need to have an open, transparent, and honest conversation within their businesses. The need for AI adoption and measurable policies must be communicated clearly throughout the organisation. At the same time, marketing leaders should have an open conversation about the potential dangers of AI, especially around its ethical usage and security concerns. This is where the need for a robust policy becomes evident.  

Developing an AI policy for ethical marketing

An effective AI policy can guide businesses to ensure the ethical and effective implementation of AI within the organisation. It includes several aspects, from data privacy and accountability to ethical content creation, ensuring AI’s responsible use aligns with an organisation’s values and legal requirements. 

Foremost, AI policy should not be thought of as a new, separate item. Rather, it should be considered in the context of all the existing ethical and regulatory measures that an organisation must comply with: for example, GDPR and data protection, discriminatory behaviour, or non-disclosure agreements. A company will already have policies in place covering these and many other areas. The starting point is to review these current policies and then consider the implications of AI in each of these contexts.

Your AI policy should also critically consider privacy and security. With AI relying heavily on data, safeguarding user information against breaches and misuse is essential. This includes transparent data collection practices, strict access protocols, and adherence to privacy laws.

Marketing companies generally have to deal with confidential and private client data. It’s imperative to ensure that this data is not used within the scope of AI tools. At the same time, businesses must make sure that such tools are used with proper cyber hygiene and security fundamentals, including practices like multi-factor authentication, privileged access management, and data privacy requirements.

It’s also important to clearly outline accountability in your AI policy. Regardless of how the technology is used within a company, the accountability should always fall on the human workforce. This is something that should be clearly defined and communicated via your policy. 

Reliability

Addressing reliability concerns is also crucial, especially when a significant portion of marketers remain sceptical about AI’s accuracy. An AI policy should mandate rigorous fact-checking and validation processes to ensure the reliability of AI-generated content and decisions. Additionally, transparency about AI’s role in marketing strategies and content creation is vital. Clear communication about the use of AI helps maintain trust with the audience and avoids misleading perceptions. 

Lastly, your AI policy document shouldn’t be a one-time rule book. Rather, it should be revisited monthly or quarterly to ensure that your practices and guidelines keep pace with the latest innovations.

Overall, developing a comprehensive AI policy is key to managing the challenges of this dynamic technology that continues to unfold. By embracing ethical guidelines, prioritising transparency, and fostering AI literacy, marketers can leverage AI’s transformative power responsibly. This approach not only safeguards brand integrity and customer trust but also positions organisations at the forefront of ethical innovation in the dynamic world of AI-driven marketing. 

Robin Campbell-Burt, Code Red Comms
CEO at Code Red Comms

With over 20 years of experience in PR & marketing, I have a huge depth of knowledge in campaign strategy, reputation management, and brand building for my clients.

I now bring this experience to the fore, leading a team of 20 at Code Red Comms, a specialist cybersecurity PR and marketing agency. We work with some of the biggest companies in our sector, as well as upcoming innovators entering the space, to make sure that they are visible and credible to key decision-makers.