I wanted to start this talk with a little conversation with you, Rachel.
Okay.
I have a simple question. Do you trust me?
Yes.
Why?
Because I have experience with you and you're
my friend and I think I know you enough to trust what you have to say and what you do.
[LAUGHTER] You think you know me enough to have trust in what I say and what I do.
This is the core of what I want to talk about,
this idea that we can trust one another.
Now that I've asked you that awkward question,
I can switch over to the slides.
When I was preparing for this talk,
which actually started back in October last year,
I was pondering this idea of whether someone
can call themselves a professional or a web professional,
or if they're just web practitioners.
I went down this rabbit hole of trying to understand what exactly
constitutes a professional as opposed to a practitioner.
Can we as web people,
the people who work on the web,
the people who build the web actually call ourselves professionals?
I came across this definition,
which is from the trusted source,
Wikipedia [LAUGHTER] citation needed,
that says for a type of work to be called a profession,
a bunch of requirements need to be met:
the occupation becomes a full-time occupation,
the establishment of a training school,
the establishment of a university school,
the establishment of a local association,
the establishment of a national association of
professional ethics, and the establishment of state licensing laws.
What you see here is the definition of what a professional
is because once you start talking about professionals as in lawyers, doctors,
engineers, all that stuff,
what you're looking at is a person who not only went
through the proper training to do the job they are doing,
but a person who is held accountable for their actions and held accountable
to their own internal ethics and to their professional ethics.
I want to talk to you about trust.
Trust is, according to Rachel Botsman,
who is the Trust Fellow at Oxford University, the currency of our interactions.
The reason why I trust Rachel,
and the reason why Rachel trusts me is because we've exchanged trust currency over time.
We've established this bartering system where I do things
for Rachel and she sees that I'm doing them the right
way, and then she does things for me and I see that she's doing them the right way,
and over time, we build up this bank of trust towards each other.
The interesting thing is, when you're looking at family or
friends or close acquaintances or maybe even coworkers,
it's relatively easy to build up
this trust currency because you interact with those people all the time.
But the people we trust the most in the world are actually people we don't interact with,
If you want an example of that, just consider:
if you have a car or a vehicle of any kind,
you are trusting the people who built it to have done so properly.
If you are in a vehicle traveling on a road,
you're trusting the people who built the road to have done it properly.
If you are going through a tunnel or going over a bridge,
you're trusting the engineers who designed and did
all the math for these things to have done the right thing.
More importantly, if something goes wrong,
say the tunnel collapses,
the bridge collapses, or anything else,
you know that the people that didn't do their job properly will be held
accountable because of the ethics that they adhere to.
This all became really important to my wife and I almost exactly four years ago now.
Our son was born six weeks premature,
and instead of the birth we had planned,
that perfect illusion of how things should be,
His first few hours involved me running through the entire hospital with
him in an incubator and a
barrage of tests and IVs and all sorts
of medical interventions just to make sure that he could stay alive.
My memory of that time is really foggy
because it was extremely intense and it went on for weeks.
But there's one particular memory that stands out to me and that is,
a couple of hours after he was born,
the head of the NICU,
the neonatal intensive care unit,
came to my wife and me, sat down, and said, "Everything is fine.
We will do everything we can to give your son the capability to live a full life."
Then he looked at us, or she rather, it was a woman.
She looked at us to see what we would respond with, and my wife said,
"Do what you think is right,
we trust you completely."
This is important because,
this is how we act in the world.
We lend our trust to people we don't know because we know
that all their behavior is packaged in ethics.
Today I want to talk to you about practical ethics for the modern web designer.
The reason is, when we look at how we interact in the world,
trust is becoming more and more important, and trust is also eroding more and more.
Now, the most high-stakes trust interactions you can
ever have as a person are your interactions with medical professionals.
If you go to the hospital violently ill,
or if you take a family member there,
you're essentially walking into a building full of people and saying,
"Hello, I give you my life,
do with it what you think is best. I trust you completely."
One of my friends had a severe heart murmur at one point,
and she asked me to go over to the hospital with her.
The doctor actually told her the best way to solve this is to
turn your heart off and on again like a computer.
Just imagine the trust necessary towards this person who you've never met before to say,
"Sure, you can turn off my body. It's okay.
I'll allow you to turn my body off and then turn it back on again,
and I trust that this will be okay."
Today because of what's happening in the world,
these trust interactions are becoming more and more important because we are
now surrounded by this pandemic that is very hard to understand,
even for the people who work with it on a day-to-day basis.
We're seeing that the trust in society
towards the people who are handling the situation is eroding.
Not because they are doing a bad job,
not because they deserve to be questioned,
but because information is so fraught with politicization that we are no
longer able to know whether the information
provided to us is accurate and whether we can trust it.
A lot of that relates to what we do as web practitioners.
Why do we trust perfect strangers?
There's a very simple answer to this when it comes to
professionals: we trust them because of ethics.
The reason is, in a professional setting,
ethics provide the necessary accountability for us to say,
"I know you will do the right thing, and I also know that if you don't,
there will be consequences for you."
It's this ethics that leads to accountability, which leads to
trust, and you can see this in our interactions with the medical profession.
Because anytime we talk about doctors,
we always talk about the Hippocratic Oath:
"I will abstain from all intentional wrongdoing
and harm." This is called the "do no harm" clause.
It's interesting because the current Hippocratic Oath doesn't actually include
the term harm at all; because harm is such an undefinable thing, they have omitted it.
But the idea in society is that doctors and
other medical professionals adhere to this principle of doing no harm.
That becomes really interesting when you then talk to doctors about your own work.
When we were in the NICU,
one of the nurses, I think, overheard me talking to
a colleague on my computer while I was sitting in the hallway.
She came and asked me, "You know a lot about the internet.
How do social media companies make money?"
I looked at her and I was like,
"Imagine at the other end of your Internet connection,
there's a small blue monster that says,
'Come here, data cow,
my algorithms will find all your biases and stoke them to make you
addicted to my service, and I will turn that addiction into money.'"
She was not very impressed by my monster impersonation,
but she did understand what I was trying to say,
which is what happens on the Internet is
not covered by the same type of ethics that everything else is.
When I'm sitting there in an interaction with her,
I can tell her, "I trust you completely, because I know you have the educational background
and the ethics background to know
what is right or wrong and to always do the right thing."
If for some reason something goes wrong,
there's a system in place to handle that situation.
But when you come to me with your private data,
with your health data,
with your banking data,
with your private information,
you have no guarantee that the people who built
the tools to handle that information are going to do the right thing,
and there's also no accountability built into the system at all.
The trust that we've been coasting on as web practitioners for the past,
what, 30 years has started to erode badly.
The reason is the people we're building tools for have started
realizing that we can't actually be trusted,
or rather there is no reason for them to trust
us because there is no system in place to protect them.
What happens when there is no system in place is they go to
their elected officials in government and say,
"Hey, you need to deal with this."
Those elected officials go to the lobbyists
for the big corporations who want to control the Internet and say,
"How do we fix this?"
The corporations say, "Give us all the power and we will fix it for you."
We have work to do as a community to make sure that we retain control of our work,
and that we hold ourselves to an ethical standard so that we
can be accountable for our own actions and people can trust us.
This talk is about ethics,
and I want to give you an overview of
what ethics is before we dive into the technicalities.
Here's the Moral Philosophy 101, Pandemic Edition.
What do we talk about when we talk about ethics?
There are two terms that we use interchangeably all the time,
morals and ethics, and they go together,
but they're actually quite different things.
Morals are the personal internal judgments
about goodness and rightness that we have ourselves.
Whereas ethics are the commonly agreed upon tools and
frameworks we use to judge the goodness and rightness of acts as a society.
Morals are personal, internal, individual.
Ethics are external, community-based, and societal.
Ethics is often misunderstood as a list of rules and regulations,
and dos and don'ts, or ways of mitigating risk or avoiding legal issues.
You can use ethics to formulate all these things,
but that is not what ethics is at all.
Ethics are not universal,
they're framed and defined by the communities they apply to,
and influenced by the morality within that community.
We see this right now in all these discussions about masking.
Different people have different attitudes towards masking,
some see it as a moral responsibility towards the community,
others see it as a moral responsibility towards self-preservation,
and some see masking as a taking away of
individual rights and an imposition upon the people.
As a result of this,
we see different types of ethics establish themselves in different societies.
In some regions, masking is recommended and
the vast majority wear masks to protect other people.
The ethics in those regions say,
"We have an individual responsibility towards one another,
and each of us must choose how we want to act on that responsibility."
In other regions, masking is mandatory.
The ethics of those regions say,
society has an absolute obligation to protect everyone,
and individuals must act on that responsibility.
In some regions, masking is voluntary and there are no official recommendations.
The ethics in those regions put personal freedom first,
and see the imposition of masking onto the populace as unethical.
This is why ethics is complicated,
and this is why ethics gets really confusing when we talk about it.
Ethics is not some higher-order thing
where we all know exactly what is right or wrong.
It's actually a communal practice to reflect the morals and values of the members.
That also means we can establish that communal practice within our community.
This is what moral philosophy tries to do.
Moral philosophy is the science of ethics,
and this is the quandary it's been trying to solve for millennia.
How do we as individual moral actors establish
and agree upon ethical frameworks for our conduct in society,
and how do we judge the goodness and rightness of acts in a consistent way?
How do we know what is right and what is wrong?
The answer that moral philosophers have come up with,
actually it's the same answer today as it was about 3,
4, 5,000 years ago.
Ethics is a practice,
it is not a thing that you do once and are done with,
it's actually a way of living.
Ethics is also about being and doing.
How do we be and do things in an ethical way as the web community?
I think the answer to that lies in rediscovering what
the web was all about and then root our practice in those principles.
At its core, the web was always about empowering
people by making the publishing, retrieval, sharing,
and linking of information as
accessible and low barrier as possible to anyone with an Internet connection.
Web design and development is really granting, enabling,
and enhancing people's capabilities to share their thoughts,
and ideas, and creations with the world.
Web design is really capability-centered design.
When I say capability-centered,
the reason is that human-centered design, for all its greatness, has
a tendency to see humans as instruments used by the design to accomplish a goal.
So the designer measures the success of a design based on whether
the design can manipulate the human into completing a task of some kind.
If that happens, if the human is able to
do that through heuristics and through all these other things,
then it's a good design.
The problem is it doesn't actually consider what happens to the user: is it useful,
practical, and good for that person to do the thing,
or does it just benefit the person who designed it?
Capability-centered design is human-centered design plus ethics.
The reason this is important is because design is political.
Now, I know every time I say design is political,
people freak out and they're like,
"Well, stop putting politics into everything."
Let me just be clear on this,
design as a political thing has been discussed for hundreds of years.
It is not a new thing,
and it's a well-established truth within the design community.
You can see it in writings about design all the time. Here's David A.
Banks from 2018, "Engineers need to think
of their work as both a humble contribution to the ongoing social order,
but also as an imposition,
as a normative statement with politics and consequences."
Here's Mike Monteiro from 2015,
"If a thing is designed to kill you,
it is by definition, a bad design.
You are responsible for what you put into the world and how it affects the world."
Here's Mario Bunge from 1975,
the article called Towards a Technoethics,
"A technologist must be held not only technically,
but also morally responsible for whatever he designs or executes.
Not only should his artifacts be optimally efficient,
but far from being harmful,
they should be beneficial,
and not only in the short term,
but also in the long-term."
Here, in 1964, the First Things First manifesto was published.
It says, "Designers who devote their efforts primarily to advertising, marketing,
and brand development are supporting and implicitly endorsing a mental environment so
saturated with commercial messages that it is changing
the very way citizen-consumers speak,
think, feel, respond, and interact.
To some extent, we're all helping draft
a reductive and immeasurably harmful code of public discourse."
Take this statement and apply it to social media discourse today,
and you'll see that almost sixty years ago,
designers like us warned about the very things that designers like us build.
As designers, we make decisions for other people. That's what we do.
That's what design is,
constantly making decisions for other people and deciding what they can and cannot do.
Because of that, we need to make sure we do it right.
Take a deep breath, take everything you know,
and everything you've ever heard about design and technology ethics,
put them in a box, close the box, and throw it away.
I want to start with a completely clean slate here
to introduce you to capability-centered design.
This is a term I came up with,
so you can take it for what it is.
Capability-centered design takes the idea of ethics and splits it into two parts.
Being: making ethical reflection part of your daily personal and professional practice,
learning how to think about things through the lens of ethics, and making
ethical practice part of who you are as
a person and part of the DNA and fabric of your company,
organization, or community.
Doing is about applying
ethical principles to decision-making at every level of your practice.
Actively rooting every decision you make in
ethical principles to make better solutions that work for everyone.
In moral philosophy, we have this term, eudaimonia: human flourishing.
Human flourishing is generally the North star of ethics.
If we succeed at doing things that are ethical,
we ensure that human beings will flourish in their environment.
I think that this should be the North star for our work as well.
I've come up with a definition of ethical web design.
It is a draft definition; you can
have opinions on it and I would love to hear them.
Ethical web design is work furthering human flourishing
through ethical practice and methodology centered on the rights,
capabilities, and agency of the human end-user.
That's what we'll work towards from here on out.
Being first. The great thing is,
ethics is not a new science.
We did not just come up with it recently.
Ethics has been around for thousands of years and over time,
there are three main theories that have established themselves as
good guiding branches for how we think about ethics in the Western tradition.
They are utilitarianism, duty ethics, and virtue ethics.
Now, in moral philosophy,
these are all mutually exclusive and they compete against each other.
But in practical philosophy,
we use these in conjunction with one another
to cancel out each other's errors and build a holistic approach.
How do we use this in design and development?
The first theory is one that you've come across all the time because it is
the de facto ethical framework we use in technology and design today.
It is utilitarianism, which is part of consequentialism.
Utilitarianism judges the goodness and rightness of
an act based on the utility of its outcome.
So a good act is one that produces a beneficial outcome to the most people.
Utilitarianism, like I said,
is the de facto ethical framework for design and tech.
We find it in our tools and our language:
"primary user group," "design for the majority," "edge cases,"
the "80-20 principle," "graceful degradation."
All these terms are expressions of a utilitarian approach to design.
Good design is design that benefits the majority.
There's an obvious problem with this.
That is, who defines this majority and what about everyone else?
This is the standard critique of utilitarianism.
It's not related to design,
but it applies to design as well.
So knowing this is a problem,
how do we use utilitarianism in design?
The answer is, we focus on the utility of a design, as in,
how much benefit does the design give to the people who use it?
Then we make sure to include those outside
the majority user group so that everyone is included in this utility calculation.
So yes to captions and transcripts on everything,
audio and video, because they help everyone,
not just the people who need them; and no to facial recognition of any kind.
Because no matter how convenient it is to order a burger with
your face, or to do whatever else you want with facial recognition,
this technology is used to infringe on the rights of people all over the world.
Black Lives Matter, and black lives
are disproportionately attacked by facial recognition.
So utilitarianism, to sum up,
to use it in design,
we focus on utility for everyone.
Next up is duty ethics.
Duty ethics, AKA deontology,
judges the goodness and rightness of an act based on whether the actor, or in our case,
designer, acted in accordance with their duty to
the person acted upon and to the larger community.
Duty ethics is defined most famously by the categorical imperative, which says,
"Act only in accordance with that maxim through which you can at the same time,
will that it become a universal law."
In English, that means act in the same way you'd
want every other person to act in the same situation.
Like the golden rule? Not quite.
The key to duty ethics is it's about principles.
When you make a decision,
you set a principle for other people to follow
and you judge the rightness and goodness based on whether or not you can say,
"This principle is one that I want everyone else to apply as well."
Edward Snowden has a really good statement in his book,
Permanent Record that talks about this.
He says, "Because a citizenry's freedoms are interdependent,
to surrender your own privacy is really to surrender everyone's.
Saying that you don't need or want privacy because you have nothing to hide,
is to assume that no one should have or could have anything to hide."
This is important because this tells us when we do things,
we're doing things on behalf of everyone.
Now, the challenge with deontology, or duty ethics,
is this problem of ought and can.
The fact that we ought to do something does not necessarily mean we can.
There are many situations where you as the designer
will not be able to do the thing you ought to do.
You might work for a company that doesn't care about privacy and you have
no power to change the privacy settings of your application.
However, it is your responsibility to raise this issue
as high as you can and do your part to make sure it happens.
There's also an additional issue around values.
Duty ethics is based on values:
what you think is the right thing to do.
Values are really tricky.
That guy who published 3D-printable gun models on the Internet
insisted that he was doing something he wanted everyone to do.
So technically from a purely deontological perspective,
he was doing an ethical thing.
The problem is, his values don't conform with the values of the general society.
So to use duty ethics in design,
we need to focus on our shared principles.
By focus, I mean figure out what our shared principles
are and then establish best practices everyone could follow.
We can do this by asking questions like,
is this a decision I'd want every other designer in the same situation to make?
If everyone did this,
would we be promoting human flourishing?
If you want to see a good example of how this fails,
if you don't do it properly,
look at the electronic scooters
that are littering sidewalks everywhere in the world right now.
If one company does it, it's fine,
but if everyone does it, it becomes a huge problem.
So yes to progressive enhancement,
because the web is built on the cow paths we lay down, and
no to crypto miners as an alternative to
advertising, because crypto miners will literally destroy the Internet completely.
Duty ethics summed up for design is to focus on shared principles.
The last of these three is virtue ethics,
and this is the philosophy that is hardest to wrap your head around initially.
Virtue ethics looks at the character of the person doing the thing:
what happens to the person once they do that thing, and how does that define us?
It says that an action is good and right,
if it's done in a way that is
virtuous as in this is the thing a virtuous person would do,
which is infuriatingly circular,
and we can send a thank-you note to Aristotle for that horrible definition.
But if you look at what virtues actually are,
it makes more sense.
A virtue is a property defining and describing a person.
You can say, "Look, here's a person with courage or magnificence or pride or honor."
These are Aristotle's virtues, and they're not all that useful for us.
But Shannon Vallor, a contemporary ethicist who works at Santa Clara University,
has set up a new set of technomoral virtues for us.
Honesty, self-control, humility, justice, courage, empathy, care,
civility, flexibility, perspective, magnanimity and technomoral wisdom.
You can see how, if you ask yourself,
every time you make a decision,
is this something a person with justice would do?
Or something a courageous person or an empathetic person would do?
Or a person who has care, or is civil, or has perspective, or is magnanimous?
As long as your actions push you towards being the type of person who has these virtues,
you do the right thing.
Of course, the question here is,
how do we define and agree upon our shared virtues as a profession?
We need to have those conversations, and also,
how do we make sure that this virtue approach doesn't
lead us into this cage of conservatism?
By that I mean: Aristotle thought of virtues through this idea that
a tree fully completed is the best version of a tree,
but a tree is a very regionally defined thing. Or a horse:
a perfect horse is a horse that perfectly exemplifies horseness,
[LAUGHTER] which is not very useful for us.
We need to figure out how to use virtues in a way that is useful to us,
and that starts by defining our shared virtues,
actually listing them out and saying these are the things we aspire to,
and then challenging our ideals,
idols and best practices.
All those codes of ethics and
Hippocratic oaths and everything else you're
seeing around design and technology right now,
that is the start of this process of defining our virtues.
These themselves are not ethics,
these are value statements based on virtues that people want to become.
These are the end result of a conversation we have not had yet.
We need to roll it back,
but the work put into these projects is very important.
The other part is,
we need to challenge our ideals,
idols, and best practices.
Why? Because for the most part,
web design and development is defined by people like me: white, heterosexual,
middle-class men with stable and secure jobs and top-of-the-line equipment,
living in areas with fast,
reliable, high bandwidth Internet.
If we aim to make ethical decisions about web design,
our virtues need to reflect the diversity of our community and our audience.
That means not just bringing women and black and indigenous and
people of color and people reflecting the full spectrum of ability, neuro,
gender, and sexual diversity to the table,
but making them first-rate citizens of our teams,
our processes, our work, and everything we do.
Absolutely yes to blocking harmful and hateful content online
because speech acts have real-world consequences,
and absolutely no to surveillance capitalism because
the wanton manipulation of human beings for profit through
design is not what a virtuous designer would do.
Also, buy this book and read it.
We have the three branches:
utilitarianism, duty ethics, and virtue ethics.
They map really nicely to what we do as designers and developers.
Utilitarianism looks at the effects on society,
duty ethics looks at our intentions as designers,
and virtue ethics looks at what we become by doing these things.
However, there's this problem that there's a fourth quadrant that we haven't covered.
That's the user themselves.
Here we slot in a relatively new branch of moral philosophy called capability approach.
I mean relatively new as in it was developed in
the 1970s and is still being developed today.
Capability approach judges the goodness and rightness of an act
based on a theoretical framework that has two normative claims.
First, that the freedom to achieve well-being is of primary moral importance, and second,
that the freedom to achieve well-being is to be
understood in terms of people's capabilities,
that is, their real opportunities to do and be what they have reason to value.
Etch that last sentence into your head.
You want to make sure that people are able to do and be what they have reason to value.
An action is right and good,
if it grants and enables those capabilities in people.
To see this in practice,
we can think of a bicycle.
It might seem like a good idea to give everyone
a bicycle because then everyone has better mobility,
but if we look at the individual person,
you'll see that different people need different types of tools to achieve the same goal,
and we have to design solutions that work for the people in their own context.
Because if we just apply the same solution to everyone,
we are not helping everyone.
Now you noticed in my definition of ethical web design, I said,
"Work furthering human flourishing through ethical practice and methodology,
centered on the rights,
capabilities, and agency of the human end-user."
That's what this is.
This circular approach here, capability approach,
utilitarianism, duty ethics and virtue ethics,
is what I call the four corners approach.
This is a way of looking at every design decision through
four different ethical lenses to question whether or not it's a good decision.
You start by asking about the capabilities:
are we giving the end user the capabilities to do and be what they have reason to value?
If that's the case,
then we jump over to utilitarianism and say,
what is the utility of this decision on everyone who is affected?
Then we go to duty ethics and say,
are we setting the right principles and best practices by doing this?
Finally, we go to virtue ethics and ask:
is this design and development decision
one that someone with the virtues I aspire to hold would make?
That gives us a complete circle that allows you to
question all your decision-making through four ethical lenses.
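As a rough sketch, and only my own illustration rather than anything from the talk's materials, the four corners approach could be written down as a simple checklist, one field per ethical lens:

```typescript
// Hypothetical sketch of the "four corners approach": a design decision is
// questioned through four ethical lenses; all names here are illustrative.
interface DecisionReview {
  grantsValuedCapabilities: boolean;         // capability approach
  benefitsEveryoneAffected: boolean;         // utilitarianism
  setsPrinciplesOthersShouldFollow: boolean; // duty ethics
  matchesOurSharedVirtues: boolean;          // virtue ethics
}

function fourCornersReview(review: DecisionReview): {
  passes: boolean;
  failedLenses: string[];
} {
  // Collect the names of the lenses the decision fails.
  const failedLenses = Object.entries(review)
    .filter(([, ok]) => !ok)
    .map(([lens]) => lens);
  return { passes: failedLenses.length === 0, failedLenses };
}

// Example: captions on all video content passes all four lenses.
const captionsDecision: DecisionReview = {
  grantsValuedCapabilities: true,
  benefitsEveryoneAffected: true,
  setsPrinciplesOthersShouldFollow: true,
  matchesOurSharedVirtues: true,
};
console.log(fourCornersReview(captionsDecision).passes); // true
```

The point of the sketch is only the ordering: a decision that fails any one lens is worth revisiting before it ships.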
That's the being part.
Yes, this stuff is really heavy, and that's why I said
ethics is a practice in the same way that meditation is a practice.
You actually have to practice this for a long time,
internalize it, and start living this way.
Start questioning everything around you in these terms to be able to move forward,
and it's important because we work in a very difficult environment.
Whenever I talk about ethics,
there's always someone who says,
"I can't just up and quit my job because you
say something my company does is unethical and is destroying society.
I have a family, I have responsibilities,
and quitting my job isn't an option."
That's true. The thing is,
we as a community need to get together and figure out how we handle that problem.
Ethan Marcotte has suggested we create a union.
That's actually a good idea,
I'm not sure it's possible,
but it's definitely something worth exploring.
It's also how we build trust.
We need to build trust in ourselves
for the other people who interact with our products.
Like I said, we have work to do.
Now, you see all that and you go,
that's really heavy and theoretical and weird,
I need something more practical.
Remember I said there's being and doing.
The doing part is practical,
I want to introduce you to something I call the core capability framework.
It's really straightforward and it breaks down into how
you do everything in your regular work life.
If you think of capabilities,
you actually have four layers of capabilities in your work.
You have the core capability:
the thing you want the user to be able to do after interacting with your design.
Then you have the material capability:
these are capabilities lent to whatever you're designing by
the material you use.
Then you have the meta capabilities.
These are capabilities you can add to the material if you want to,
and finally, you have the environmental capabilities.
These are capabilities that are lent to the user by whatever environment they are in.
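As a minimal sketch, and assuming a shape of my own invention rather than anything the talk prescribes, the four layers could be captured like this, using a safety-information site as the hypothetical example:

```typescript
// Hypothetical sketch of the core capability framework's four layers.
// The type and the example values are illustrative assumptions.
interface CapabilityStack {
  core: string;            // what the user should be able to do and be
  material: string[];      // capabilities lent by the material you build with
  meta: string[];          // optional capabilities added on top of the material
  environmental: string[]; // capabilities lent by the user's own environment
}

const safetyPortal: CapabilityStack = {
  core: "access and understand campus safety information",
  material: ["linkable", "searchable", "copyable text"],
  meta: ["captions on video", "plain-language summaries"],
  environmental: ["screen readers", "browser translation"],
};

// Each supporting layer is judged by whether it furthers the core capability;
// here we simply count the layers that contribute anything at all.
function layersSupportingCore(stack: CapabilityStack): number {
  return [stack.material, stack.meta, stack.environmental]
    .filter((layer) => layer.length > 0).length;
}

console.log(layersSupportingCore(safetyPortal)); // 3
```

The design choice the sketch encodes is that the core capability is a single fixed goal, while the other three layers are lists you can grow or prune against it.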
To see how this works in practice,
I've come up with a practical example.
Let's say someone puts this on your table, and you all work in higher ed.
This is probably something you have to deal with right now.
University needs to communicate information about
safety measures to students and faculty for the return to classes in the fall.
This information must be accessible and understandable to all students and faculty.
This is really a life and death situation.
As a web designer, you are literally responsible for building a web portal that
will possibly mean the difference between life and death for some of the students.
How do you approach this in an ethical way?
You start by looking at the capability that we intend
to grant or enable in the user through our design, and make sure that
capability helps them to do and be what they have reason to value,
which in this case is staying alive through a pandemic.
This is the purpose of the design and this is the true north of your design.
You can measure your design by whether or not you
successfully transfer those capabilities to
the user and whether that user is able to use
those capabilities to be and do what they have reason to value.
The core capability asks,
what new capability does the user have after interacting with a design?
In our example, it's pretty clear:
the information must be accessible and understandable to all students and faculty.
So we can say the core capability here is to
make the information accessible and understandable.
Then we can look at how we're doing this
and measure all the outcomes against these two questions:
do people understand the information, and are they able to access it?
Next up comes material capabilities.
These are the capabilities furnished,
afforded, or otherwise provided by the material used.
If you think about paper,
paper has material capabilities.
You can fold it, you can transport it,
you can write on it, you can copy it,
you can give it to other people, you can tear it.
All these are capabilities and we ask ourselves,
are these capabilities going to help us further the core capabilities?
If they are, then we use them.
If you look at an HTML document,
you very quickly see the benefit here.
An HTML document gives information the capability of being sent over the Internet.
It gives the information the capability of being read as
plain text, which furthers accessibility.
It gives the information the capability of being parsed by software supporting HTML,
which enables automatic translation and all this stuff.
We can use HTML to enhance the core capabilities.
In the case of our project,
you would maybe do multiple things:
send out paper mailers, e-mail PDFs, build
an accessible website, and publish captioned videos just to cover all bases.
Next step out is meta capabilities.
These are extended capabilities afforded by the material.
They are latent capabilities, and we can add
them to our project or hook them into the project.
In HTML that would be things like hyperlinks.
You don't need to use them, but they're there, and they open up
more possibilities for the content.
Every single HTML tag is
a meta capability because it adds extra information to the data,
but it doesn't have to be there.
Alt text on images and all other accessibility features are
meta capabilities, and you can anchor them directly to the core capability to ask:
does adding this make the information more accessible and understandable?
Figcaptions, form inputs,
all those things are meta capabilities.
The key is that meta capabilities must be appropriate and must not become dependencies.
The fact that you use them does not mean that people must have access to them.
Meta capabilities could be things like correctly marked-up text and links,
video and audio captions, human-made translations, stuff like that.
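As a rough sketch of what a couple of those meta capabilities look like in markup (the file name and wording here are invented for illustration), alt text and a figcaption layer extra meaning on top of content that still works without them:

```html
<!-- Alt text and figcaption are meta capabilities: they make the
     image more accessible and understandable, but the surrounding
     document still works if they are missing or unsupported. -->
<figure>
  <img src="mask-guidelines.png"
       alt="Diagram showing the correct way to wear a face mask">
  <figcaption>How to wear a face mask on campus</figcaption>
</figure>
```

Each addition anchors back to the core capability: it makes the information more accessible and understandable, without becoming something the content depends on.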
The last ring in the circle is environmental capabilities.
These are the capabilities provided by the user's environment,
and they are defined by being out of the designer's control, effectively known unknowns.
All browser features, reader mode and other content-lifting and style-stripping features,
Apple TV, Chromecast, anything the user might
use changes the capabilities of the content you've published.
The key for environmental capabilities is that
we can't know whether a user has these capabilities, whether they can use them,
or even whether they know those capabilities are available to them.
When we develop and design things,
we need to think about things like
progressive enhancement to hook into the capabilities if they're available:
adding lang attributes to HTML documents so that automatic translation
kicks in if it's available,
being aware of dark mode and other features that may be there, PWA functionality.
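As a hedged sketch of hooking into two of those environmental capabilities (the colors and content are just placeholders), this is roughly what that looks like in HTML and CSS:

```html
<!-- lang lets automatic translation kick in if the user's environment
     offers it; prefers-color-scheme honors dark mode if the environment
     provides one. Neither is required for the content to work. -->
<!DOCTYPE html>
<html lang="en">
  <head>
    <style>
      @media (prefers-color-scheme: dark) {
        body { background: #111; color: #eee; }
      }
    </style>
  </head>
  <body>
    <p>Campus safety measures for the fall semester.</p>
  </body>
</html>
```

Neither feature assumes anything about the user's environment; each simply lets the content take advantage of a capability when it happens to be there.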
You can look at things like capabilities added
by other apps including social media sharing,
screenshots and re-posting and so on.
If we look at environmental capabilities as a tool in
our task to get this information to people, we're going to say:
we need to provide easily sharable and Instagrammable images with the info, because then
people are more likely to share that information and
the information gets carried over into other media,
and we should make social media videos
with burned-in captions, because some people are more likely
to share those as well; people tend to
prefer social media videos over a boring website.
That, in a very short
crash course, is the core capability framework,
and I hope you see that this is directly applicable to what you do.
You can use this framework to argue for accessibility and
for standard HTML and for more core practices.
Because we can say that all
of this maps down to the capabilities we're granting and enabling in
the end user, and every decision we make as
designers and developers needs to anchor to those,
and then we can ask about the ethics of those capabilities,
whether they are the capabilities that people actually
need to be and do what they have reason to value.
To sum up, being is the four corners approach,
doing is the core capability framework.
These things work together to allow you to
have what I call a toolkit for tackling ethics smells.
"Ethics smells" is a jokey term we use in moral philosophy
to talk about all those things that feel like there's an ethical problem,
but you're not quite sure what it is.
Once you embed yourself in these ways of thinking,
you'll very quickly discover the ethics smells and you'll be able to
discuss them in a meaningful way that allows you to move towards solutions.
Like I said, ethics,
what we've been talking about here, leads to accountability,
and accountability is what allows people to trust
us as practitioners and eventually professionals.
Because like Rachel Botsman said (it's not Oxford, it's Harvard),
"Trust is the currency of interactions."
I know this because I have a kid, and I can see how
the way I interact with him directly influences his trust in me.
The more carefully I interact with him, and the more I show him that he can trust me,
the more willing he is to do all sorts of weird things, like use me as a climbing structure.
But when I talk about this,
there's always someone who says, "But Morten,
there's always someone careless enough or uninformed enough,
or untrained enough, or enterprising enough or bullied enough,
or desperate enough to do the unethical thing.
Even if I don't do it, someone else will."
This is true. This is what we call the problem of evil.
It's not a problem I,
or anyone else, can solve.
What we can do is make the problem of evil less relevant by being humble,
by being accountable, by being self-aware,
and by being graceful to ourselves and our peers whenever mistakes are made.
Like I said in the beginning,
ethics is a practice.
Ethics is what allows us to hold
ourselves accountable to our desire to shape the world to our vision.
Because with every design decision,
we build the future for our users and for ourselves.
Build the future.
That's the work.
What we can or cannot do,
what we consider possible or impossible is rarely a function of our true capability.
It is more likely a function of our beliefs about who we are.
Let's believe we are compassionate,
caring designers and developers who care about the people we work for and the work we do,
and let's build a better future together through ethical practice. Thank you.
