
When the System Becomes the Attacker: Automated Processes, Digital Identity, and the Humans Left Behind

In February 2026, my phones were stolen. That was the first violation.

What followed was something far stranger: one of my major online accounts was compromised, and a months-long ordeal began in which an automated system repeatedly blocked my attempts to reclaim my own digital identity.

It was a second, quieter, and in some ways more insidious violation, carried out not by criminals, but by the very infrastructure that should have been designed to protect me.

The experience forced me to think more deeply about digital identity, automation, and the human consequences of poorly designed recovery systems.

My long-standing name on the account was replaced with an entirely different identity, and my profile photograph was changed from that of a Black man to that of a white woman.

Let that sink in.

I am writing this not merely as someone who experienced it, but as a data protection advocate, a technology policy advisor, and someone who spends a significant portion of his professional life arguing for responsible, human-centred digital systems across Africa.

I’m deeply involved in this, both personally and professionally.


What Happened

The thieves who took my phones also accessed a major social media account associated with them. They made deliberate changes to impersonate a different person: they changed my name to a completely different identity and replaced my profile photograph, that of a Black man, with that of a white woman.

The visible evidence of compromise was not subtle. The account had been repainted as someone else entirely.

I eventually regained access through my registered email address and immediately attempted to correct these changes. Instead, I was prevented from doing so, and my account was suspended, with the platform citing integrity concerns.

The ordeal did not end there. A second and more exhausting phase began: trying to convince automated systems that I was still myself.

From that point forward, I entered what I can only describe as a complicated and bizarre loop. Appeals were submitted. Confirmations were received. Then notifications arrived saying the appeal was incomplete. The account lock was maintained.

Identity documents were submitted and rejected because they did not match “the account information.” The account information, of course, was the attacker’s version of my identity, not my own. Can you imagine?

Over the following months, I encountered a cycle of contradictory messages, failed verification attempts, continued account restrictions, and identity checks that appeared unable to distinguish between my genuine identity and the one inserted by the attackers.

Each notification, each new layer of procedural barrier between me and an account I had held for years, felt unbelievable.

The experience revealed something fundamental about the limitations of automated trust and recovery systems.


The Core Failure: When Automation Adopts the Attacker’s Frame

Here is what I believe happened, and why it matters beyond my individual case.

Automated identity verification systems are built on a simple premise: compare the user’s claimed identity against the identity on record. Flag mismatches. Reject discrepancies.

This logic only works when the record itself is accurate.

But what happens when the record itself has been tampered with?

In my case, the system appears to have treated the attacker’s version of my account, a completely different name and a photograph that looked nothing like me, as the authoritative reference.

My legal name, as shown on my Nigerian National Identification Number (NIN), did not match “the account.”

Of course it did not. The account no longer reflected my identity.

The system could not reason about several critical factors:

  • That “Jide” is a widely recognized abbreviation of “Babajide”, a standard Yoruba naming convention
  • That a profile photograph changing from a Black man to a white woman is clear, visible evidence of compromise
  • That the sequence of changes (name, photo, and access from unfamiliar devices) aligns with account hijacking, not legitimate activity

Because it could not interpret these signals, the system defaulted to the record.

And the record was the attacker’s.

In effect, the automation validated the criminal’s actions and penalized the victim for attempting to undo them.
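
To make the failure mode concrete, here is a minimal sketch of record-based verification. The code is mine, purely illustrative (no platform publishes its actual logic, and all names, including the attacker’s, are hypothetical):

```python
# Minimal sketch of naive record-based identity verification.
# All names and logic here are hypothetical and for illustration only.

def verify_recovery(submitted_id_name: str, account_record_name: str) -> str:
    """Naive check: trust whatever the account record currently says."""
    if submitted_id_name.lower() == account_record_name.lower():
        return "VERIFIED"
    return "REJECTED: ID does not match the account information"

# Even before the compromise, an exact-match rule fails on nicknames.
print(verify_recovery("Babajide Awe", "Jide Awe"))       # REJECTED

# After the compromise, the attacker's edit *becomes* the reference.
attacker_record = "Jane Smith"  # hypothetical name inserted by the attacker
print(verify_recovery("Babajide Awe", attacker_record))  # REJECTED: victim fails
print(verify_recovery("Jane Smith", attacker_record))    # VERIFIED: attacker passes
```

Once the record is corrupted, this logic does not merely fail. It inverts: the rightful owner can never pass, and only the attacker’s identity can.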


This Is a Structural Problem, Not an Isolated Incident

It is understandable that digital platforms rely heavily on automation. Billions of users cannot be managed through manual review alone.

The risk, however, is that systems lose context.

This pattern is predictable. It is repeatable. And it is a known failure mode, particularly in contexts where:

1. Names Do Not Conform to Western Conventions

Many systems assume English-language naming structures: a given name, a family name, and a limited set of nicknames.

For hundreds of millions of people across Africa, South Asia, Southeast Asia, and the Arab world, this assumption does not hold. Platforms claim to be global, yet they fail to support the naming diversity of the very populations they serve.

“Babajide” becoming “Jide” is not an anomaly. It is standard Yoruba practice. It is no different from “William” becoming “Will” or “Bill.”

A system that cannot handle this is not neutral. It has a cultural blind spot.
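
Even a modest improvement is possible. Here is an illustrative sketch of name comparison that knows about documented nickname conventions; the tiny variant table is my own invention, and a real system would need curated, linguistically informed data covering many naming traditions:

```python
# Illustrative sketch: name matching that allows documented nickname
# conventions instead of demanding an exact string match.

# Hypothetical, deliberately tiny variant table for illustration only.
KNOWN_VARIANTS = {
    "babajide": {"jide"},        # Yoruba: Babajide -> Jide
    "oluwaseun": {"seun"},       # Yoruba: Oluwaseun -> Seun
    "william": {"will", "bill"}, # English: William -> Will / Bill
}

def names_compatible(id_name: str, account_name: str) -> bool:
    """True if the names match exactly or via a known variant convention."""
    a, b = id_name.lower().strip(), account_name.lower().strip()
    if a == b:
        return True
    return b in KNOWN_VARIANTS.get(a, set()) or a in KNOWN_VARIANTS.get(b, set())

print(names_compatible("Babajide", "Jide"))  # True: standard Yoruba abbreviation
print(names_compatible("William", "Bill"))   # True
print(names_compatible("Babajide", "Jane"))  # False: a genuine mismatch
```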

2. The User Base Is Diverse

A profile photo changing from a Black man to a white woman is not ambiguous.

A human reviewer would question it immediately.

Similarly, changing a name that had been Yoruba Nigerian since the account’s inception to a purely Western one should raise immediate red flags.

Automated systems cannot pause for every case. But they must be designed to escalate when signals are this strong.
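
In sketch form, the design principle looks like this: score the compromise signals and route high-scoring cases to a human instead of auto-rejecting them. The signal names, weights, and threshold below are invented for illustration, not drawn from any real platform:

```python
# Illustrative sketch of escalation logic: strong compromise signals
# should route a case to human review, not to automated rejection.
# Signal names, weights, and the threshold are hypothetical.

COMPROMISE_SIGNALS = {
    "name_changed_across_cultures": 3,  # e.g. Yoruba name -> Western name
    "profile_photo_demographics_changed": 3,
    "login_from_unfamiliar_device": 2,
    "recovery_id_matches_pre_attack_record": 4,
}

HUMAN_REVIEW_THRESHOLD = 5

def route_case(observed_signals: list[str]) -> str:
    """Escalate to a person when combined evidence of hijacking is strong."""
    score = sum(COMPROMISE_SIGNALS.get(s, 0) for s in observed_signals)
    if score >= HUMAN_REVIEW_THRESHOLD:
        return "ESCALATE_TO_HUMAN"  # pause automation; a person weighs context
    return "AUTOMATED_FLOW"

# The pattern described in this article scores far past any sane threshold:
case = ["name_changed_across_cultures",
        "profile_photo_demographics_changed",
        "login_from_unfamiliar_device"]
print(route_case(case))  # ESCALATE_TO_HUMAN
```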

3. The Appeals Process Is Also Automated

This was the most damaging aspect of my experience.

The appeals system relied on the same corrupted data as the original decision. There was also no meaningful way for me to provide context beyond rigid automated inputs.

At no point did a human appear to evaluate the full picture.

The system asked only one question:

Does the submitted ID match the account name?

It did not.

The result was deeply paradoxical:

  • Unauthorized changes gained procedural legitimacy
  • The legitimate account owner became trapped in repeated verification loops

In identity theft cases, recovery processes can amplify harm. That is a serious design failure.


What This Means for Data Protection and Digital Rights

I work in data protection advocacy. I have helped build capacity for data protection frameworks in Nigeria, including developing the nation’s first Standard Certification Program for Data Protection Officers.

I am fluent in the language of data subject rights: access, rectification, and the right to accurate representation.

What I experienced was a system that denied rectification of corrupted data.

It treated the attacker’s changes as authoritative and my legitimate identity as suspicious.

This is not just a customer service issue. It is a data protection failure.

It raises a critical question:

What obligations do platforms have to design systems that can handle account compromise without reinforcing it?

And more urgently:

How do we ensure automation does not inadvertently legitimize criminal actions?


The Human Cost That Metrics Miss

This part is harder to quantify. But it is a reminder that while technology matters, it is ultimately, and always, about people.

With every automated notification I received, whether contradictory, shifting, or impersonal, the implicit message was:

The system does not see you.

After the initial violation, the prolonged inability to restore my identity became a second layer of harm. It became a “double wahala,” as we say here.

Each failed attempt reinforced the sense of displacement from my own digital presence. I had become an outsider to my own digital identity.

This is not just a technical hurdle.

It is a deep human experience.

Platform metrics like resolution rates and processing times do not capture this.

What do those metrics tell you about the mental stress of losing a digital identity you’ve nurtured for years?

There is a real human cost in being forced to spend months proving your existence to a platform that no longer recognizes you.

This is not merely a technical barrier. No metric captures the mental stress or the indignity of having to prove, again and again, that you are yourself.


What Needs to Change

Automation is necessary. But automation without:

  • human escalation routes
  • cultural understanding and competence
  • accountability mechanisms

is not neutral. It is risky.

We need:

  • Mandatory human review triggers for high-signal compromise cases
  • Culturally aware identity systems that reflect global naming realities
  • Genuine appeals processes involving human judgment
  • Transparency and accountability when systems fail

Resilience Must Include Recovery

Digital resilience is not just about preventing attacks.

It must include:

  • fair recovery mechanisms
  • human-centred escalation
  • contextual reasoning
  • restoration of dignity

A system that detects attacks but cannot restore victims is not fully resilient, and it cannot be fully trusted.


Why I Am Writing This

I am still pursuing resolution. I expect to succeed.

But this is not just personal.

It is systemic.

Across Africa and the Global South, many people are navigating similar failures, where legitimate identities are judged against corrupted records.

As digital systems expand, identity recovery becomes a governance issue. Effectively punishing the victim, keeping them locked out of their own digital life in the name of security, is never acceptable or justifiable.

We must ask:

  • When should human review be mandatory?
  • How should systems detect identity manipulation?
  • What safeguards exist when automation fails?

Most importantly:

How do we ensure people are not reduced to whatever a database says they are?


Closing

These are no longer abstract questions.

They shape access, participation, and trust in the digital society.

People deserve systems that see them, that serve them.

Systems that can correct themselves.

Systems that restore, not deepen harm.


If this resonates with you, whether as a user, policymaker, or practitioner, I would welcome the conversation. Share this post and let us make this a louder discussion.

 


Author: Jide Awe

Science, Technology and Innovation policy advisor.

Nigeria’s Inaugural Tech Mentor of the Year

Find him on LinkedIn: Jide Awe

Find him on Threads: @iamjidaw

Find him on Twitter: @jidaw
