
What Really Happened? Unravelling the Temu Promo Scandal – No #AprilFools Here!

GMW lead trainer Dan Sodergren gets asked to go on Radio 5 Live...

The Temu Scandal.

Protecting Personal Data in the Age of AI Marketing

Dan Sodergren, lead digital marketing trainer for Great Marketing Works, was recently asked onto Radio 5 Live to talk about technology, the future of marketing, and the Temu promotion scandal.


Are you aware that your personal data might be at risk every time you interact with the internet, especially with AI-driven marketing practices? The digital age has brought incredible advancements, but at what cost to our privacy? Dan Sodergren, a technology expert, sheds light on some alarming practices by companies using AI in marketing and highlights the importance of safeguarding our data privacy.

Understanding the Value of Your Data

In today's world, if you're not paying for a product, you are the product. This notion has become significantly more critical with the advent of AI in marketing, as highlighted by technology expert Dan Sodergren.

Billion-dollar companies, like the Chinese online retailer and e-commerce discount superstore Temu, were prepared to collect vast amounts of consumer data, and possibly much more, during their now-infamous promotion.

The promotion, with its baffling terms and conditions, thankfully flagged by people like futurist Dan Sodergren, hints at a future where personal information, and even your own digital likeness, becomes a commodity for targeted advertising and much worse, without many people realising the extent of it.


The Dangers of Not Reading Terms and Conditions

One of the biggest takeaways from Sodergren's insights is the importance of reading terms and conditions, a practice most people overlook or simply do not have time for. These documents can hide permissions that allow companies to use your data, including your likeness and voice, in ways you might never have imagined. This could involve creating deepfakes or making you a spokesperson for products or ideologies without your consent.

Temu's case illustrates a broader issue where companies draft overly broad terms and conditions that could permit them to exploit personal data in unprecedented ways. While Temu later backtracked after public outcry, this should serve as a wake-up call to always scrutinise the fine print before consenting.

Laws Need to Be Stricter

Current laws and regulations seem insufficient to protect consumers against the potential misuse of AI in marketing. Sodergren advocates for stricter laws to prevent abuses of personal data. Until legislative measures catch up, the responsibility lies with individuals to be discerning about what permissions they grant to companies.

The conversation around data privacy needs to evolve beyond "buyer beware." As AI marketing practices become more sophisticated, there needs to be a "digital green cross code" – a set of principles and best practices for navigating our digital lives safely.

It is not just stricter governmental laws that are needed; awareness and education are also crucial in combating the exploitation of personal data. Many individuals freely give away their data without understanding the potential consequences. There needs to be a societal shift in how we value and protect our personal information.

Companies like Facebook and Google have long benefited from using personal data for profit. This isn't necessarily wrong, but consumers should be more informed about what they're consenting to. Changing perceptions about the value of data and encouraging more sceptical engagement with tech companies' policies are steps in the right direction.

Data Is a Double-edged Sword

While AI in marketing offers democratised access and opportunities for businesses, it also poses risks to individuals' privacy and autonomy. The technology that allows for personalised advertising can also enable companies to clone voices and images, illustrating the dual nature of technological advancement.

OpenAI has now famously backtracked on releasing to the public a technology that futurist Dan Sodergren mentioned in his Radio 5 Live interview: a tool that can clone someone's voice from just 15 seconds of audio.

That is less time than it takes to read the previous paragraph out loud. By the end of it, and of Dan's interview, anyone with the technology could make you say anything they wanted. Combined with lip-sync technology from companies like HeyGen, which can work from just one picture, a digital clone of you can be made.

This clone can then be made to say anything about anything, especially if you have sold your image rights forever to a Chinese retailer for £40, which is what the Temu promo terms and conditions implied.

AI might be amazing but…

By understanding both the benefits and dangers of AI and data usage in marketing, consumers can make more informed decisions about their digital presence. The balance between leveraging AI for innovation and protecting personal privacy is delicate and requires conscientious action from both companies and individuals.

The age of AI marketing has brought forth incredible opportunities for innovation but has also highlighted significant gaps in our data privacy protections. As we navigate this digital landscape, it's imperative to be conscious of the information we share and vigilant in protecting our personal data. Through education, stricter laws, and a collective shift in how we view our digital footprints, we can safeguard our privacy and take full advantage of the digital age without compromising our personal integrity.

Training people to use AI to help with their marketing and save them hours of work, as we do at the AI Marketing Course, is one thing. But using technology to exploit people in ways they don't yet understand is quite another.


References for the piece:

  1. BBC News. (2023). The rapid rise of AI and what it means for business. https://www.bbc.co.uk/news/business-68563339
  2. MSN. (2020). AI-generated deepfake videos, voice cloning emerge as potential threats during election season. https://www.msn.com/en-in/money/technology/ai-generated-deepfake-videos-voice-cloning-emerge-as-potential-threats-during-election-season/ar-BB1kSqwF
  3. Siddique, H. (2023). 'Addictive, absurdly cheap and controversial': The rise of China's Temu app. The Guardian. https://www.theguardian.com/world/2023/oct/06/addictive-absurdly-cheap-and-controversial-the-rise-of-chinas-temu-app
  4. Telecom Review. (n.d.). Caution ahead: Exploring the benefits and dangers of AI voice cloning. https://www.telecomreview.com/articles/reports-and-coverage/7263-caution-ahead-exploring-the-benefits-and-dangers-of-ai-voice-cloning
  5. Wiggers, K. (2024). OpenAI launches custom voice engine that can clone speech in seconds. TechCrunch. https://techcrunch.com/2024/03/29/openai-custom-voice-engine-preview/
  6. My two videos from the Radio 5 piece. One on LinkedIn and the other on YouTube with the full interview.
  7. The AI Marketing Course https://www.aimarketingcourse.co.uk

FAQs

1. Why is reading terms and conditions important?

Reading terms and conditions is crucial because they can contain permissions that allow companies to use your data in ways you might not agree with, including using your likeness or creating deepfakes.

2. What can individuals do to protect their data privacy?

Individuals can protect their data privacy by being more selective about what data they share online, reading terms and conditions carefully, and advocating for stronger data protection laws.

3. How are companies like Temu misusing data?

Companies like Temu have drafted terms and conditions that could allow them to use personal data for overly broad purposes, including creating deepfakes or making individuals endorse products without their explicit consent.

4. What role does education play in data privacy?

Education is vital in raising awareness about the value of personal data and the potential risks of sharing it freely online. It can empower individuals to make more informed decisions about their digital footprints.

5. Are current laws sufficient to protect against data misuse?

Current laws are lacking when it comes to protecting individuals against the potential misuse of AI in marketing, underscoring the need for stricter regulations and better enforcement.