One in three South Africans have never heard of AI – what this means for policy

Thirty-seven per cent of the survey respondents had never heard of AI, while 36 per cent indicated they’d heard of it but knew very little about it and the role it might already be playing in their lives.
Leah Davina Junck, University of Cape Town and Rachel Adams, University of Cape Town
Artificial intelligence, or AI, uses computers to perform tasks that would normally require human intelligence. Today AI is being put to use in many aspects of everyday life, like virtual banking assistants, health chatbots, self-driving cars, and even the recommendations you see on social media.
A new survey of over 3,000 South Africans from all walks of life asked how people feel about AI. It reveals that most South Africans can’t relate to AI in meaningful ways – despite the global hype about its pros and cons. We asked two of its authors to tell us more.
What did you find?
The research set out to capture how South Africans understand, experience and imagine AI. It aimed to provide representative insights into levels of awareness, perceptions of impact, and degrees of trust in the institutions developing and deploying AI, and to help create an empirical basis for more responsive and inclusive AI governance in the country.
We found that for most South Africans (73 per cent), the term “AI” barely registers. AI increasingly plays a role in public life – often behind the scenes in areas like healthcare, credit scoring and social media moderation. But 37 per cent of the survey respondents had never heard of AI, while 36 per cent indicated they’d heard of it but knew very little about it and the role it might already be playing in their lives.
The survey also gives us a sense of why awareness remains so low. Most information comes through social media. Only 4 per cent learn about AI through formal education, and a meagre 2 per cent through their workplaces or professional training.
What also stands out is uncertainty. While nearly half of respondents (47 per cent) felt AI’s social impact was largely positive, 40 per cent had no clear leaning either way. So while AI is becoming more influential, it does not seem to be visible or real enough in everyday life for many to form solid opinions.
Economic threat is a central concern: people worry about being replaced or devalued by machines, or targeted by scams.
But trust in both government and big tech is measured and pragmatic. Respondents hope big tech will help provide connectivity and jobs, and they see government as most trustworthy when it comes to using AI in areas like health and education.
Yet these are the very areas where unease surfaced. Respondents called for lines to be drawn around unsupervised, AI-driven care tasks, and felt that learning based on human experience should be preserved. Social media, while a key source of AI-related information, is also a site of worry, especially around data privacy and children’s exposure to harmful content. People felt there should be guardrails and human oversight.
Looking ahead at the next 10 years, respondents said they hoped AI would help create a better future, especially in health and job creation.
How was the survey conducted?
Since 2003, the Human Sciences Research Council has been capturing how South Africans experience social change through the annual National Social Attitudes Survey. This time, the think tank Global Centre on AI Governance contributed an AI-specific component through its project The African Observatory on Responsible AI, funded by the AI4D programme of Canada’s International Development Research Centre and the UK’s Foreign, Commonwealth and Development Office.
Trained fieldworkers surveyed a diverse cross-section of South Africans across all nine provinces, in both rural and urban areas. They interviewed more than 3,000 people over the age of 16, from a range of socio-economic backgrounds, each in their preferred official language.
The survey included both structured and open-ended questions, asking how people learnt about these technologies, how they felt about their impact, and the degree of trust they placed in different institutions using them.
The findings offer rare insight into the social views shaping how AI may be taken up or contested, and into how public opinion might start to inform decisions about how the technology is shaped and used.
What can we learn from these findings?
The survey shows how difficult it is to get to grips with a technology like AI in a country where there is a stark digital divide. Access to information is uneven, trust in institutions is limited, and there isn’t a shared language to understand or question AI use. For many, AI remains largely opaque and abstract.
This matters because a lack of basic knowledge prevents meaningful public debate about AI.
Uncertainty and lack of information open the door for hype, misinformation, and even exploitation. There’s a danger that fears about AI replacing human skills and jobs will overshadow more optimistic views of its possible benefits.
Still, there’s a cautious hope that AI can improve livelihoods and access to information. Concerns about technology are less about it taking over and more about how to use it or even just access it.
This is a crucial moment because public opinion about AI is still developing. Policy-makers and tech leaders across sectors have an opportunity to define AI’s use and value from a people-centred perspective.
What needs to be done about this?
To bridge the knowledge gaps and address uneven access to information, AI literacy needs to be built on a common understanding. For example, in India, the Indian Institute of Technology has launched a free online training course on all aspects of AI for teachers, so that they can pass knowledge on to their students.
AI literacy efforts should be built on shared language and rooted in daily concerns and aspirations, allowing people to relate AI to their personal experiences.
Companies must invest in and build AI in collaboration with local communities. Civil society organisations and researchers have a vital role to play in raising awareness, tracking harms and ringing alarm bells when accountability in AI use is sidestepped.
Public projects can help educate and inform South Africans about AI. For example, the University of the Western Cape partnered with a theatre company and a high school to create The Final Spring, a play about a robot. Storytelling can help translate complex ideas about technology into accessible, culturally resonant forms of AI literacy.

The Conversation
Leah Davina Junck, Honorary Research Fellow of The Ethics Lab, University of Cape Town and Rachel Adams, Honorary Research Fellow of The Ethics Lab, University of Cape Town
This article is republished from The Conversation under a Creative Commons license. Read the original article.