The Illusion of Privacy – Why Social Media Is No Longer a Safe Space for Business or Society

As of 2024, over 61% of the global population (more than 4.95 billion people) use social media. Across the African continent, the number of social media users has grown to 384 million, a rapid increase from just 90 million a decade ago (Statista, 2024). In this age of hyperconnectivity, social media has become a foundational pillar of how we live, work, shop, learn, and even protest. Businesses use it for marketing, governments use it for outreach (or surveillance), and billions of individuals use it to maintain relationships and shape identity. It is pervasive, it is powerful, but it is also dangerously under-regulated, structurally exploitative, and fundamentally insecure.

Social media was never a safe space. It was designed to be captivating, not protective; to extract data, not preserve privacy. What began as a digital public square has quietly metastasised into a surveillance ecosystem that thrives on behavioural profiling, algorithmic manipulation, and opaque data harvesting.
And yet, too many businesses, particularly startups, still treat social media as a low-risk, high-reward marketing tool. This is not simply outdated thinking, but strategic negligence.
The question we should ask ourselves is no longer “Is social media safe?”, but rather “How much longer can we pretend that it ever was?”

The Surveillance Business Model: You Are the Product

Social media’s economic engine runs on surveillance capitalism. From Facebook to TikTok, Clubhouse to X (formerly Twitter), these platforms are not free; they are funded by your data.
According to Meta’s 2023 financial reports, 97.4% of its revenue came from targeted advertising. An investigation led by Consumer Reports found that Meta collects data from over 40,000 third-party websites and apps, including from users who are not even logged in. Meanwhile, TikTok is projected to earn over $18.6 billion globally from ad revenue in 2024. These companies do not monetise content; they monetise users.
Every scroll, click, like, comment, and paused video becomes a data point, harvested by platforms not to improve your experience but to increase your value as a product. Platforms use this data to build detailed behavioural profiles that predict everything from your shopping habits to your political leanings. They then sell access to those profiles: sometimes to marketers, sometimes to political actors, and increasingly to data brokers with little oversight. This is not incidental; it is the core business model, built on data mining, psychographic profiling, and behavioural prediction.

Nowadays, targeted marketing has become indistinguishable from mass surveillance. Social media platforms are incentivised to know you better than you know yourself. Data brokers don’t just observe behaviour, they exploit it, selling intimate knowledge that extends beyond your preferences to your vulnerabilities and movements, to the highest bidder. For startups and tech founders hoping to build ethical, user-centric platforms, particularly the new arrivals in AI and machine learning, this dominant model is a grim one that is almost impossible not to emulate.

The privacy paradox is stark: social media is sold to us as a community, but built as a marketplace for our data.

Consent is a Myth: The Clubhouse Case Study

Ask most users if they’ve given permission for their data to be collected and shared on social media platforms, and the answer will likely be yes. Dig deeper, and it will soon become clear that what passes for “consent” on social media is often coercion in disguise.

The rapid rise and fall of Clubhouse highlighted just how quickly privacy can be traded for exclusivity and hype. The audio-only social app gained traction in 2021, but behind its “invite-only” mystique lay a darker truth: the app aggressively harvested users’ contact lists to build shadow profiles of non-users, and displayed your inviter’s identity permanently on your profile without any clear opt-out or transparency. Users could not join without handing over their entire contact list, thereby granting the app access to phone numbers of people who had never consented. The shadow profiles built from this data came complete with network maps and metadata. By 2024, Clubhouse had reportedly created ghost profiles of over 6 million non-users using scraped data and contacts (EFF, 2024).

This is not innovation; put simply, it’s intrusion. Furthermore, it raises serious questions under frameworks like the EU’s General Data Protection Regulation (GDPR), which would deem this a violation of fundamental privacy rights. Even so, such practices remain rampant across both legacy and emerging platforms – the German Federal Commissioner for Data Protection and Freedom of Information found in 2023 that over 27% of social media platforms also collected and stored information on individuals who had never signed up.

The broader issue? Social platforms often rely on dark patterns: design choices that manipulate users into sharing more data than they realise. “Default-on” location tracking, hidden settings, and confusing permissions all contribute to a system where meaningful consent is impossible.

When apps collect data about individuals who have never consented, we move from privacy risk to privacy abuse.

Geolocation: Your Movements Are Metadata

More worrying still, social media users are rarely aware that it isn’t just what you say online that is tracked; it’s also where you are when you say it.

A 2023 Norton report found that 45% of users were unaware that their location data was being tracked by default on social media apps. Even more troubling, a 2023 report by Mozilla found that 71% of top Android apps shared location data with advertisers or data brokers. While some of this is used for legitimate services like ride-sharing or weather updates, much of it is for behavioural profiling and ad targeting.

For African users, where privacy literacy is lower and regulatory enforcement weaker, this creates a dangerous asymmetry. From ride-hailing apps to fintech platforms, the constant capture of geolocation data can expose users to physical risk, stalking, and even burglary.

This is more than a privacy issue; it crosses into physical safety. Whether you’re sharing a vacation photograph in real time or passively leaving location permissions enabled, social media platforms and their partners can continuously trace your movements. In regions with political instability, weak law enforcement, or civil unrest, that data can be weaponised against individuals, communities, or entire ethnic groups.

Weaponised Influence and the Death of Democratic Dialogue

Social media platforms now sit at the intersection of technology and geopolitics, with their impact on elections, protest movements, and public discourse being well documented. Beyond individual concerns, social media plays an outsized role in shaping public opinion, and not always for the better. Below are just a few examples of how platforms successfully facilitate manipulation at scale:

  • In the United States, the 2016 and 2020 elections were shaped by bot-led misinformation campaigns and Cambridge Analytica-style profiling
  • During Brexit, targeted ads seeded confusion and polarisation, often using false or misleading claims
  • In Africa and South Asia, foreign disinformation campaigns and algorithmic bias have exacerbated political divides, sowed distrust in institutions, and even incited violence

Closer to home, the Nigerian government’s 2021 Twitter ban lasted 222 days, ostensibly over misinformation. In reality, it was a reaction to the platform removing a tweet by the President, and to Twitter’s growing role as a tool for civil protest (#EndSARS). That single act cost the economy an estimated $1.5 billion in losses (NetBlocks). Similarly, Sri Lanka’s 2023 Online Safety Bill (dubbed the “draconian Social Media Regulation Bill”) gave authorities sweeping powers to block content and monitor citizens, drawing international condemnation. Both events signal a dangerous trend: governments using digital platforms as both scapegoats and surveillance tools, turning the same tools used for civic activism into instruments of repression.

When platforms fail to self-regulate, governments quickly step in, often with authoritarian tools cloaked in the language of “safety”. Instead of addressing the root causes of online harm (opaque algorithms, lack of transparency, and poor data governance), governments opt to suppress dissent by silencing platforms, not only undermining civilian trust but critically stifling innovation.

The Misinformation Machine

Social media platforms are engineered to reward engagement, and nothing drives engagement like outrage. According to MIT research, false news spreads six times faster on Twitter (now X) than true stories; platforms reward virality, not veracity. A Carnegie Endowment report found that at least 12 African elections between 2016 and 2023 were significantly influenced by foreign disinformation campaigns, often using bot farms and sponsored content on social media.

In 2024, misinformation is no longer an outlier or by-product of social media. It’s an algorithmic norm and part of the business model. Deepfakes, AI-generated influencers, and coordinated bot farms make it harder than ever for users to separate fact from fiction. And when brands or public figures become entangled in misinformation – willingly or not – the fallout is real: reputational harm, loss of trust, and in extreme cases, regulatory penalties.

For businesses that rely on social media for customer acquisition, this is no longer just a comms issue; it is an operational risk.

The Corporate Blind Spot: Businesses are Still Underestimating the Risk

Despite the well-documented risks, many startups, founders, and established businesses still treat social media as a low-stakes marketing channel – a costly mistake. When your brand operates on platforms that actively undermine user privacy, you inherit part of that liability, legally, ethically, and reputationally.

While Europe has been cracking down with heavy fines, African regulators are also waking up. With South Africa’s POPIA and the Data Protection Acts of Kenya, Ghana, and Nigeria, the continent is fast catching up to global standards. Failing to treat user privacy as a strategic priority today could result in financial penalties, operational disruption, loss of licence or, worse, loss of consumer trust tomorrow.

As social media compliance becomes the new business imperative, ignorance will no longer be a viable defence. Businesses need to view privacy as a strategic asset, not an operational burden. Customers increasingly demand transparency and security while investors scrutinise data and compliance practices as part of ESG mandates; all with regulators watching closely.

Where Do We Go From Here? A New Privacy-First Vision for African Innovation

This is not a call for digital isolation or censorship. Social media remains a critical tool for business growth, democratic participation, and social change. But the current model is broken – and African innovators have the chance to build something better.

It’s time for a radical shift in how we think about social media, especially as founders and technology leaders.
Here are some actionable steps to take for business owners:

  • Design for privacy, not surveillance: Bake data protection into the architecture of your platforms. Adopt privacy-by-design and by-default principles, embedding privacy features from the beginning of product development
  • Challenge the extractive model: Create value with users, not from them
  • Champion transparency: Inform users clearly about how their data is collected, processed, and used. Make consent meaningful, not just a tick-box
  • Invest in digital literacy: Empower users to make informed choices about their data. Educate users by equipping them with digital literacy tools that help them navigate privacy risks; particularly young users most susceptible to manipulation
  • De-centre Big Tech: Encourage the development of African-owned, ethically-governed platforms that reflect the values of the communities they serve. Own your distribution channels where possible

…And for policy makers:

  • Strengthen enforcement capacity: Fund and train national regulators to audit, investigate, and prosecute
  • Promote open, rights-based governance: Penalise authoritarian overreach masked as regulation
  • Collaborate with the private sector: Co-design standards that work for both innovation and user rights
  • Create Pan-African Data Standards: Harmonise privacy rules across borders to protect users and encourage compliance
  • Push for Algorithmic Transparency: Require social platforms to disclose how content is promoted, suppressed, or monetised

What’s Next and How Can You Stay Protected?

Social media is not inherently bad; however, its current trajectory of rewarding extraction over empathy, virality over truth, and reach over responsibility makes it unsustainable. The systems underpinning these platforms are extractive by design, and will only change once the masses demand it through regulation, innovation, and cultural resistance. As we enter an era defined by AI-generated content, decentralised digital identities, and pervasive surveillance, African innovators have a rare opportunity to rewrite the rules.

While structural change is needed, individual users can also take steps to reduce their online exposure:

  • Review your privacy settings regularly on all apps and devices
  • Disable unnecessary permissions, especially location access and microphone use
  • Avoid oversharing, especially personal details that can aid impersonation
  • Use strong, unique passwords for each platform and enable multi-factor authentication
  • Be wary of links, even from friends. Always verify unusual requests
  • Log out of social media on public or shared devices
  • Educate your peers, colleagues, and network: privacy is collective, so your friends’ data hygiene affects yours
  • Be mindful. Once something is online, it’s effectively public and stays on cloud servers in perpetuity, no matter how private the setting feels
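For readers who want to act on the “strong, unique passwords” point above, here is a minimal, purely illustrative Python sketch using the standard library’s `secrets` module (the site names are hypothetical; in practice, a password manager does this for you):

```python
import secrets
import string


def generate_password(length: int = 16) -> str:
    """Return a random password drawn from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))


# One unique password per platform, never reused across sites
vault = {site: generate_password() for site in ("example-social", "example-mail")}
```

Unlike the `random` module, `secrets` draws on the operating system’s cryptographically secure randomness source, which makes it appropriate for credentials.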

The next chapter of Africa’s digital future must be built on trust. It’s up to the current generation of business owners, youth, and policy makers to make online privacy not just a right, but a competitive advantage.

To read the original article, visit: https://aphaia.co.uk/social-media-platforms-and-inherent-privacy-concerns/
