It’s critical to understand how Big Tech is mining, storing and profiting from your data, says UBC Okanagan Professor Dr. Wendy H. Wong, because human experiences are becoming data.
Wong is a Principal’s Research Chair and Political Science Professor interested in human rights, global governance, international relations and, especially, data rights.
“We face losses on both individual and societal levels,” she says. “Our identity and humanity are at risk. When companies gather data and build profiles about us, they steer us toward particular outcomes. This challenges our autonomy and how we’re perceived and treated by others. This isn’t just about respect but dignity and equality.”
Dr. Wong attempts to unravel many of these sticky issues in her new book, We the Data: Human Rights in the Digital Age.
Can you shed light on the role of Big Tech companies in creating data?
Data about us is co-created. Every piece of data is a collaboration between the individual’s actions or thoughts and the entity collecting it. This mutual creation complicates our ability to claim complete control. While we might want rights to “our data,” we have to recognize that this data, in its current form, didn’t even exist before the 2000s, when companies began actively collecting it. Big Tech shapes our world through data collection. The pathway to empowerment is through data literacy and becoming a data stakeholder.
How does becoming a stakeholder help us?
Being a data stakeholder means understanding how data affects us and our communities. It’s realizing that we have power over AI and data technologies. We don’t have to let machines dictate our lives. By understanding data better, we can make informed decisions and have more control.
How does data affect human rights?
From an individual perspective, data collection can strip us of our identity and humanity. Algorithms categorize all of us based on our past actions and the actions of those “like us.” They nudge us, assuming our pasts predict our futures. Often, they don’t. It’s a loss of autonomy and, in a sense, dignity. We’ve seen the effects of this in social media: people can adopt toxic behaviours when exposed to toxic content, leading to real-world harm that keeps some from acting and encourages treating others as lesser, in simple black-and-white terms.
How do you respond to the idea that AI’s emergence is unavoidable?
I’m not saying we should stop AI’s development. It’s important to know that conceding control to machines is a choice, not a foregone conclusion. Our current challenge is that we’re attributing too much power to these machines. Consider cars: they’re faster and stronger than humans, but we never envisioned them taking over. We control them. With AI and algorithms, there’s this disconnect. These machines are human-made tools, not entities that magically appeared. It’s vital we remember that and take back control.
Why do you write that data, rather than algorithms, is the larger problem?
An AI is only as good as the information it’s fed. Machine learning took off once companies began hoarding data. Let’s shift our focus from algorithms and home in on data. If we handle data correctly, we can avoid harmful outcomes. Instead of tweaking algorithms, let’s evaluate and manage the data they’re using. That way, we can preserve our human identity and uphold our values.
Isn’t the genie out of the bottle? How can we expect corporations to stop something so profitable?
I genuinely question whether these major technology companies, with all their smart, highly trained personnel, couldn’t find a new business model if data collection became more tightly regulated or more costly, both financially and socio-politically. Would companies adapt if accessing loads of data suddenly came with more friction or cost? I believe so. Some suggest paying individuals for their data. I’m not a fan; it feels wrong, like selling parts of yourself. Instead, why not have users pay for services? If apps became pricier, maybe users would prioritize and only use what truly benefits them. That could naturally sift out unnecessary, data-hungry apps.
How does government play a role?
It can create policies that safeguard individuals and groups while putting checks on technology companies. What gets to me is seeing efforts to regulate AI, like Parliament’s, that consult only those who profit financially from AI development. By doing that, we’re getting their wish list of regulations rather than what might be best for society. If we don’t want machines to dominate, we have to actively make that choice. And if AI companies are the only voices shaping the policies, it’s unlikely those regulations will truly reflect society’s broader wishes.
Is the onus on individuals to protect their data?
Handing someone a lengthy terms and conditions document, filled with complex jargon, is like passing the buck. If everyone just agrees, why would companies change? Even if you decide to switch off all your apps, you’re one person. Imagine trying to navigate modern urban life, especially a professional one, without online tools. Our capitalist system pushes this idea of individual control, even when the broader system isn’t really in an individual’s hands. I might sound critical of capitalism, and that’s a broader issue, but when technology giants claim they can’t adjust because of profits, I find it hard to believe, given their historic wealth.
Dr. Wendy H. Wong is a Professor and Principal’s Research Chair in Political Science. In her latest book, We the Data: Human Rights in the Digital Age, she explores how technology companies play a pivotal role in governing our lives by leveraging the vast amounts of personal data generated in our everyday online interactions.