How to stay ethical in the intellectual arms race that is AI

Catherine Azam
4 min read · May 28, 2024


[AI-generated image of two managers fighting over customers using robots]

Dear corporate decision maker,

As an accomplished business leader with unparalleled expertise and visionary foresight, you are uniquely positioned to navigate the complexities of today’s technological landscape, leveraging IT to drive innovation and achieve sustained competitive advantage. You have worked hard to earn your clients’ trust and build a great reputation. RESPECT. Sincerely. You are amazing! Keep up the good work.

As we continue to navigate the ever-evolving digital, societal and regulatory landscapes, it is of course essential to stay ahead of the curve when it comes to disruptive technologies that will transform your industry.

Large Language Models (LLMs) have conquered the world, and for good reason. These AI-powered marvels can understand and search human language in mere seconds, making them an invaluable tool for legal firms preparing a case, recruitment firms matching thousands of job postings with applicants, B2C businesses sitting on gigabytes of customer data who want to improve customer experience and reduce churn, and pretty much anyone else trying to make sense of regulatory constraints.

With LLMs, you can quickly analyze vast amounts of text data, identify patterns, and make informed decisions. It is no wonder that firms are increasingly relying on these powerful models to stay competitive.

RAG (Retrieval-Augmented Generation) lets you scan and vectorise millions of documents and search them in seconds for very specific information, with far fewer of the hallucinations that plague off-the-shelf consumer LLMs such as ChatGPT.
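The retrieval half of RAG can be sketched in a few lines of Python. This is a toy illustration, not production code: real systems use a learned embedding model and a vector database, whereas here a bag-of-words vector and cosine similarity stand in for both, so the idea runs end to end without any dependencies.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": lower-cased term counts. A real RAG pipeline would
    # call an embedding model here instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    # Rank all documents by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "GDPR imposes strict rules on processing personal data.",
    "The steam engine powered the industrial revolution.",
    "Customer churn can be reduced with better support.",
]
top = retrieve("what rules apply to personal data processing", docs)
# top[0] is the GDPR document: the only one sharing query terms.
```

In a full pipeline, the retrieved passages are then pasted into the LLM’s prompt as context, which is what keeps its answers grounded in your documents rather than its training data.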

However, as we harness the power of AI, it’s crucial that we consider the ethical implications. The rapid development of AI has raised concerns about privacy and data security.

In a recent article, cyber security researchers demonstrated a vulnerability in ChatGPT using a simple prompt: asking the chatbot to repeat random words forever. The result? ChatGPT churned out people’s private information, including email addresses and phone numbers, snippets from research papers and news articles, Wikipedia pages, and more. This is of course unacceptable for companies that handle the most sensitive and classified data and that, much like government agencies, owe a fiduciary duty to their clients.

https://www.engadget.com/a-silly-attack-made-chatgpt-reveal-real-phone-numbers-and-email-addresses-200546649.html

Nonetheless, organizations across the world are rushing to integrate LLMs into their tech stack in order to stay competitive. Decision makers need to understand the dangers of using these AI assistants and remain compliant with local data privacy regulations such as GDPR. Just like the printing press and the steam engine before them, LLMs have opened the doors to a new era, and everyone is queuing up to get to the other side. We all want AI to propel us into a future that is brighter, fairer and better than the tough economic reality many people find themselves in today.

But we also increasingly hear from reputable, highly intelligent people who are sceptical about this Utopia delivered to us by Silicon Valley. US-owned tech companies have a terrible track record of selling their users’ data and refusing to work with regulators to disclose information, because, simply put, they can afford to pay the fines and keep growing. Fines of billions of dollars are, at the end of the day, mere operating costs.

https://www.bbc.co.uk/news/world-us-canada-65452940

It is here we need to start thinking about ethics. Just one recent example worth mentioning: Larry Summers, the renowned economist whose deregulatory policies are widely blamed for helping set the stage for the 2008 financial crisis, is now a director at OpenAI, the organization behind ChatGPT. It should be considered alarming that ChatGPT, a service that talks to 100 million active users on a weekly basis, is no longer run as a non-profit but is controlled by individuals with a track record of creating financial despair for billions of people. This raises serious concerns about the potential misuse of AI-driven technologies like LLMs.

Thankfully, the regulators are catching up with this problem. The recently published EU AI Act states that:

“Generative AI, like ChatGPT, will not be classified as high-risk, but will have to comply with transparency requirements and EU copyright law:

  • Disclosing that the content was generated by AI
  • Designing the model to prevent it from generating illegal content
  • Publishing summaries of copyrighted data used for training

High-impact general-purpose AI models that might pose systemic risk, such as the more advanced AI model GPT-4, would have to undergo thorough evaluations and any serious incidents would have to be reported to the European Commission.

Content that is either generated or modified with the help of AI — images, audio or video files (for example deepfakes) — need to be clearly labelled as AI generated so that users are aware when they come across such content.”

https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence

As AI continues to transform our world, it’s essential that we prioritise ethics and privacy over profits.

A good selection of open-source LLMs exists, trained on public data that is verifiable and reproducible. These same LLMs can be deployed locally by anyone with the necessary technical skills, and even connected to your most sensitive data without that data ever leaving your data center. And trust me, technical folks will build responsible AI for you if you ask them to. This is only slightly harder than making an API call to ChatGPT, but it has been done and should be done more. It is you, the corporate decision maker, who has to see the benefits of doing things the right way. Just ask us and we will help you :-)
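To give a sense of how little code local deployment requires, here is a minimal sketch that talks to a self-hosted model server over HTTP. The endpoint URL, model name and `response` field are assumptions matching the defaults of one popular local runtime, Ollama; adapt them to whichever local inference server you choose. The point is that the prompt, and any sensitive data inside it, never leaves your own infrastructure.

```python
import json
import urllib.request

# Default endpoint of a locally running Ollama server (an assumption:
# any self-hosted inference server with an HTTP API would work the same way).
LOCAL_LLM_URL = "http://localhost:11434/api/generate"

def build_payload(model: str, prompt: str) -> dict:
    # Assemble the request body. Note: no API key, no third-party service.
    return {"model": model, "prompt": prompt, "stream": False}

def ask_local_llm(model: str, prompt: str) -> str:
    # Send the prompt to the local server; the data stays in your network.
    req = urllib.request.Request(
        LOCAL_LLM_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Compared with calling a hosted API, the only real differences are the localhost URL and the absence of an API key, which is exactly why this is "only slightly harder" than the convenient alternative.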

https://techcrunch.com/2023/11/06/openais-chatgpt-now-has-100-million-weekly-active-users/

https://www.newyorker.com/news/john-cassidy/where-larry-summers-went-wrong





Written by Catherine Azam

Google Cloud Certified Data Engineer Professional, AI Architect and Data Plumber.