Defining Safe and Ethical Principles for Digital Humans

Guildhawk | Feb 9, 2024 4:05:18 PM

They greet you with a friendly smile, answer your questions in a human-like voice, and even provide emotional support. Digital humans, once confined to science fiction, are increasingly becoming part of our daily lives, from customer service chatbots to virtual assistants and AI companions.

While these technologies offer exciting possibilities, their rapid development raises crucial questions about safety and ethics.

How can we ensure these digital entities interact with us in a responsible and trustworthy manner?

The Rise of Digital Humans

Imagine interacting with a customer service representative who can understand your frustration and adjust their tone accordingly, or a therapist who provides personalized support without judgment. These are just a few examples of how digital humans are transforming various fields.

Statistics paint a clear picture of their growing presence:

  • Gartner predicted that 40% of companies would be using or planning to use chatbots by 2022.
  • The virtual assistant market is expected to reach a staggering $14.10 billion by 2030 (Grand View Research).

These advancements offer undeniable benefits for customers, such as improved accessibility, 24/7 availability, and personalised experiences. Similarly, businesses and public services that are investing in new solutions powered by Generative AI want to see measurable results like increases in productivity and customer loyalty.

Discover the AI disruptions coming in 2024.

However, amidst all the excitement about AI clones, concerns about risks emerge.

Ethical Concerns and Potential Risks

While digital humans hold immense promise, their development warrants careful consideration of potential risks:

  • Manipulation and Deception: Malicious actors have already succeeded in unlawfully creating digital clones of humans and synthesising their voices to commit fraud and circulate misinformation online. A startling case in Hong Kong was so successful that a victim was deceived into authorising the transfer of HK$200 million after believing the digital humans on a Zoom video call were real people. Now imagine a deepfake video of a respected politician using the same technique to spread false information or convince someone to reveal sensitive data. Such malicious uses could have damaging societal consequences.
  • Bias and Errors: Just like any AI system, digital humans can perpetuate existing societal biases if not carefully designed and monitored. Imagine a multilingual chatbot trained on biased data using discriminatory language, or a virtual assistant giving incorrect product information. Such scenarios, though unintentional, could result in serious harm to an organisation and its customers.
  • Privacy and Data Security: Digital humans often collect and process user data, raising privacy concerns. How this data is stored, used, and protected needs close scrutiny to ensure user trust and avoid misuse. Statistics highlight these concerns:
     
    • 80% of people are concerned about AI bias (Pew Research Center).
    • 70% of consumers are worried about companies using their data without permission (PwC).
  • Job Displacement: Automation through digital humans raises concerns about potential job displacement in certain sectors. While new opportunities may emerge, ensuring a smooth transition and supporting potentially impacted individuals is crucial.

See how global businesses use Multilingual Digital Humans.

Guildhawk Avatar SafeHouse™

As we continue to advance in digital avatar technology, it's crucial to establish safe and ethical principles to ensure that these virtual beings are not only accurate but also secure. One of the biggest concerns is the risk of mistranslation or unsafe scripts, which can lead to embarrassing or even harmful outcomes.

While tools like Google Translate and OpenAI can handle straightforward translations, they are not always reliable and can show bias or mislabel people. This is why professional organisations require ironclad guarantees that translated avatar scripts are accurate and kept private.

To address this need, we have created the Avatar SafeHouse™, a secure location where Digital Humans and translated scripts are kept safe to guarantee privacy. Backed by a total of 131 ISO 27001-certified data security controls, this creates the ultimate protection for avatars and synthesised voices.

Additionally, our secure Generative AI translation software, GAI, can be used to create high-quality translations that are private and can be integrated with an API.
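To make the API integration idea concrete, here is a minimal sketch of how a client application might assemble a translation request. The endpoint URL, field names, auth header, and `confidential` flag below are illustrative assumptions for the sake of example, not Guildhawk's documented GAI API; a real integration would follow the vendor's own API reference.

```python
import json

# Placeholder endpoint -- NOT a real GAI URL, purely illustrative.
GAI_ENDPOINT = "https://api.example.com/v1/translate"

def build_translation_request(script_text, source_lang, target_lang, api_key):
    """Assemble the URL, headers, and JSON body for a hypothetical
    avatar-script translation call. Nothing is sent over the network."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    payload = {
        "text": script_text,
        "source_language": source_lang,
        "target_language": target_lang,
        # Illustrative flag: request that the script stay private.
        "confidential": True,
    }
    return GAI_ENDPOINT, headers, json.dumps(payload)

# Example: prepare (but do not send) a request to translate a script.
url, headers, body = build_translation_request(
    "Welcome to our safety briefing.", "en", "fr", "YOUR_API_KEY"
)
print(json.loads(body)["target_language"])  # fr
```

Keeping request construction separate from transport like this also makes it easy to audit exactly what data leaves the organisation, which matters when scripts are confidential.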

By prioritising safety and ethics in the development of digital humans, we ensure that these virtual beings are not only accurate but also trustworthy and beneficial to society.

Towards Safe and Ethical Digital Humans

International initiatives are underway to address concerns about AI and establish ethical frameworks for AI development. Organisations like the European Commission, the OECD, and the Partnership on AI are leading the charge.

However, there are good practices available now that will protect and future-proof digital humans.

Here are some key principles for safe and ethical digital humans:

  • Human Oversight: Humans should maintain control and oversight over digital humans. This ensures that digital entities remain tools for good and don't develop unintended consequences.
  • Privacy and Data Protection: User data should be collected, stored, and used ethically and securely. Robust data protection measures and user consent are essential to ensure trust and privacy.
  • Transparency: Users deserve to know the nature and limitations of their interactions with digital entities. They should be informed about how decisions are made and what data is collected.
  • Accountability: Developers and deployers must be responsible for the actions and impact of digital humans. Clear lines of accountability are essential to ensure proper oversight and address potential harms.
  • Fairness and Non-discrimination: Digital humans should be designed and trained to avoid bias and discrimination based on factors like race, gender, or socioeconomic status. Rigorous testing and monitoring are crucial to prevent perpetuating existing societal inequalities.

Implementing these good practices now helps organisations harness the power of multilingual digital humans in an ethical, responsible, and productive way. Learn more about how Guildhawk helps global organisations use avatars ethically to improve safety and learning.

See how Sandvik Canada makes training multilingual with Guildhawk AI.