Since the 2016 US election, there has been a growing public concern
and debate about the effect and the role that platforms like
Facebook and YouTube are having on the integrity of
our democracy and on the character of our society.
It used to be if you wanted to reach a large audience,
you needed to buy an ad with a broadcaster, or a publication,
that claimed to reach the type of group you wanted to target.
So, if you wanted to reach upper-middle-class people in Manhattan,
you would buy ads in The New York Times.
That was the granularity with which you could target people.
What Facebook offers is an entirely different order of magnitude of specificity.
You can select from hundreds of attributes of people and locations
that you might want to target, and go directly to them.
This is a particular problem during elections,
which are the one time in a democratic society when we have very strict rules on
who can say what, and who can purchase access to which audience.
I think this is such a clarifying moment for how we view
these platform companies because they have created an architecture
that is in breach of those rules.
On Facebook, you don’t know who is buying access to Canadian eyeballs.
They don’t disclose that — we don’t know who is spending what money,
from where in the world, and to which audience.
The problem is, because governments have had a hands-off approach
to the technology sector, we are leaving responsibility for these
governance challenges to the companies themselves.
There are two primary ways that governments have attempted to control
the problem of misinformation that seems to plague our society right now.
One is by controlling what is acceptable speech in their society.
Governments have always regulated speech to a certain degree, but some countries,
such as Germany and France, are taking it upon themselves to
penalize platform companies that do not abide by those very strict rules
of what is acceptable speech.
This runs into a whole host of problems, because governments increasingly
have to act as the judge and adjudicator of
what is and is not acceptable speech.
The other main way that governments are dealing with this problem is
actually by enabling the rights of their citizens to control their own data.
So instead of limiting the rights of their citizens to speak,
they are enabling the rights of their citizens to have ownership and control over
the data that they produce.
And I think the most sophisticated example of this, at the moment,
is the new European General Data Protection Regulation.
What it essentially says is: “As an individual citizen living in the EU,
I have a right to know if data is being collected about me;
if I opt out of that data being collected,
the company that was collecting it can’t deny me services;
and the company has to provide a clear indication of how that
data is being used.”
And this, I think, fundamentally changes the power relationship between
the citizens who provide the data and the companies that take and use it.
Given the influence of digital technology and the internet
on our economy and our society,
it’s impossible for me to imagine our government not having a national data strategy.
But having one is going to mean taking very seriously the way in which
data about Canadians is used,
the way in which our realities are increasingly shaped by digital technology,
and the way in which our economy is structured around
what I think is a very pernicious and monopolistic digital architecture.
