The Cybercrimes Act
Using AI to create fake images that bully, shame or defame a person is a criminal offence
Artificial intelligence (“AI”) has become woven into everyday life, offering tools that can create images, generate videos, and mimic human likeness with astonishing accuracy. But with this technological advancement comes a worrying trend: the misuse of AI to bully, shame, or defame others. Increasingly, the faces of real people, often children or teenagers, are inserted into fabricated AI-generated videos or images designed to humiliate them or portray them in a damaging light. This misuse implicates several constitutional rights, including human dignity, privacy and safety.
This article explores how the Cybercrimes Act 19 of 2020 applies to AI-driven bullying and whether AI developers could, in theory or in future law, be held legally responsible.
Bullying, AI and Human Rights Violations
Bullying is no longer confined to classrooms or playgrounds. With AI deepfake technology, students can now manipulate images or generate convincing videos that place classmates in humiliating scenarios. Deepfakes refer to convincing fake images, videos, or audio recordings created using AI. They can depict a person appearing to say or do something they never actually did, often by manipulating facial expressions or swapping one person's face or voice with another's.
Section 10 of the Constitution provides that everyone has inherent dignity and the right to have their dignity respected and protected. AI-generated bullying often strikes at the very core of this right, spreading false and damaging portrayals that can follow victims for years. Deepfakes created without consent infringe on both informational and personal privacy. Even where the original image was shared publicly, using it to fabricate content or falsely accuse someone of wrongdoing clearly violates the victim's privacy.
The reality is that victims often struggle to enforce their rights: AI-generated content spreads rapidly and is difficult to trace back to its original creator. Identifying the perpetrator becomes complex, and removing the content from multiple platforms can be a daunting challenge. The challenge for South African law is to apply existing legal frameworks to a new and evolving technological problem.
How does the Cybercrimes Act apply?
The Cybercrimes Act criminalises several forms of harmful online conduct, many of which can apply to the misuse of AI. Section 14 of the Cybercrimes Act makes it an offence to disclose a data message, by means of an electronic communications service, with the intention to incite the causing of damage to property belonging to, or violence against, a person or a group of persons. The purpose of this section is to hold individuals accountable for digitally coordinating or promoting real-world harm, such as acts of violence or vandalism.
Section 15 of the Cybercrimes Act targets direct digital threats. It criminalises the use of a data message to threaten any person with damage to property or violence. It differs from section 14 in that it focuses on a direct threat made by the sender to the victim with the aim of causing fear or intimidation: in effect, direct cyberbullying. AI-generated content that falsely depicts someone committing illegal acts or behaving immorally could, depending on the circumstances, fall within this provision.
Section 16 of the Cybercrimes Act deals with the serious offence commonly known as “revenge porn”: the non-consensual sharing of intimate images. It criminalises the unlawful and intentional disclosure of a data message containing an intimate image of another person without that person's consent, even where the image is fabricated. Sexual deepfake images created using AI would therefore qualify as a cybercrime, exposing both the creator and the distributor to criminal liability.
Contravention of any of the above sections is a criminal offence, punishable by a fine, imprisonment for a period not exceeding three years, or both. Importantly, the Cybercrimes Act allows a prosecution to proceed even when the perpetrator cannot easily be identified, provided there is sufficient digital evidence. The Act grants specific powers to police officials and investigators to compel Internet Service Providers and platforms to provide information that may assist in identifying the wrongdoer.
Who can be held liable?
The individual perpetrator: Primary liability falls on the person who created, manipulated, or shared the harmful AI-generated content. In a school context, this would be the learners responsible for the content. Parents may also bear civil liability if they fail to supervise minors engaging in harmful conduct.
The distributor or sharer: Anyone who forwards or reposts the content may also face liability under the Cybercrimes Act if their conduct contributes to the harm.
Can AI companies be held liable?
This is a complex question with arguments for and against holding companies and developers liable for cyberbullying and harmful AI-generated content.
On the one hand, it can be argued that, like a knife or a smartphone, AI tools can be used for good or bad purposes. AI systems generate vast amounts of content, making total monitoring unrealistic, and developers generally cannot control every user's actions. It would therefore not be feasible to hold them liable for every misuse of their AI tools or platforms. And as the old proverb goes, necessity is the mother of invention: overregulating AI tools may hinder the technological innovation and progress needed to keep pace with an ever-changing digital age.
On the other hand, AI companies are aware that deepfake technology can be used to create harmful content. If they fail to implement adequate safeguards, it could be argued that they are negligent.
As AI becomes more powerful, courts may begin to recognise a duty on developers to prevent foreseeable harm and to remove harmful content, in the same way that social media platforms are already required to do. Currently, South African law does not impose direct liability on AI companies and developers simply because their tools are misused. The question, however, is becoming increasingly relevant globally.
While the Cybercrimes Act provides a solid foundation for criminal liability, it does not yet address the deeper systemic question of the responsibilities of AI companies whose technologies enable such abuse. For now, legal responsibility generally lies with the individual wrongdoer. But as AI continues to transform how people interact, there is a compelling argument that our regulatory frameworks will need to evolve.
Did you know… Deepfakes created using AI without consent infringe on informational and personal privacy.