
The Ethics Void

% citations will be grayed and pushed to the right margin
\let\origcite\cite
% incite = "inline" cite
\def\cite{\hfill\incite}
\newcommand*{\incite}[1]{{%
  \tiny
  \raisebox{1ex}{%
    \color{lightgray}%
    \origcite{#1}%
  }%
}}



We Are Everywhere (Introduction / Opening)

Let's Switch Perspectives   B_fullframe

I'm really excited to be back this year and to switch perspectives. Last year was all about "them"— "us" versus "them". But framing those issues that way doesn't permit the type of perspective that I'm interested in pursuing in this talk. I'm going to reframe these issues. I'm instead going to refer to a collective "we".

The "we" I am referring to is anyone and everyone who has influence over others with technology. That includes people whom many of us here probably wouldn't want to affiliate ourselves with. Because we're all in this together. We all contribute to the future of the world we live in. And we have all contributed to the present in some way, directly or indirectly, through action or inaction. To distance ourselves from what we would consider to be "them", to distance ourselves from what we perceive as bad, would be an attempt to absolve ourselves of responsibility.

Because we are all responsible.

``Us'' vs. ``Them''


``We'' Are All Responsible

Pervasive Technology   B_fullframe

Technology pervades nearly every aspect of every modern user's life. It even touches those who don't use technology themselves, or who might not have the privilege to.

Consequently, "we" collectively control nearly every aspect of modern users' lives. We touch, either directly or indirectly, nearly every person on this planet. Everything is affected by the consequences of our actions.

So, let's speak candidly to users everywhere, and to ourselves.

Technology Is Pervasive

We Control What You See and What You Do   B_frame

We control what you see. We control what you do.

News and information is targeted at you personally. Your devices hold you hostage, commanding you, rather than the other way around. The more that we fade into the background, as something that is so integral in your life that it isn't noticed until it goes wrong, the more ignorant you become of just what you are losing control of.


Bottom   B_columns
Left   B_column


Right   B_column


We Know Where You Are, Have Been, Will Be   B_frame

We know where you are. We know where you have been. We know where you will be.

The apps you install on your devices violate and spy on you. The cars you drive may track you. Cameras everywhere constantly surveil you, inescapably. We can track everywhere you go online. And data brokers aggregate these data and then sell you out to others, as a product.


TrustEV   B_columns
Left   B_column


TransUnion Trustev

Right   B_column


Bottom   B_columns
Bottom Left   B_column


Bottom Right   B_column


We Live Inside Your Home   B_frame

We live inside your home.

Microphones listening. Cameras watching. Your IoT thermostat or TV or bed or toothbrush or whatever it may be leaks precious information about you. And they might be hopelessly insecure, with no way to upgrade them but to replace them entirely.

You are under assault—not just by the makers of your devices, but also by those who can exploit them, sometimes with ease, and often automatically.


Assistants   B_columns
Left   B_column


Right   B_column


We Observe and Influence Your Children   B_frame

We observe your children. We influence their behavior.

Children are some of the most vulnerable among us. What they experience now will shape the rest of their lives. And what we can learn about those experiences now will allow us to exploit them for the rest of their lives.

Top   B_columns
Top Left   B_column


Top Right   B_column


Bottom   B_columns
Bottom Left   B_column



Bottom Right   B_column


Any Of Us Can Do These Things   B_fullframe

Any of us here can get involved in these types of things. You may not be now, and maybe you never will be. But maybe your employer will one day ask you to do something uncomfortable. Or maybe you will find yourself in a situation where someone has done you or a loved one harm, and you consider revenge, knowing full well that it is within your ability to do so.

There have been studies about altruism— about those who would risk their lives to save others. When researchers interviewed these individuals, they noticed something common to many of them: they had thought about such a situation beforehand, perhaps many years before the actual event. They pre-committed. When the situation presented itself, they weren't caught off guard.

But what do we commit to?

Any Of Us Can Get Involved With These Things

But only some of us are prepared for when these situations present themselves

Moral Considerations

Something Feels Wrong   B_fullframe

Something feels wrong with the things I just covered. But that "something" is a bit different depending on who you ask. Here in this room, we are somewhat aligned by our interests, with, I'm sure, some notable exceptions. And that type of echo chamber can make it difficult to recognize others' stances on these issues.

\Huge Something Feels Wrong

Snowden Revelations   B_fullframe

Let's consider the Snowden revelations as an example.

Edward Snowden addressed us at LibrePlanet 2016 as one of the keynote speakers. He received a 50-second standing ovation before he could even begin speaking. I was there. The energy in the room was unlike anything I had experienced.

With this group of people here at LibrePlanet, the consensus is clear: what Snowden did was more than just ethical: he is considered a hero and a whistleblower.

But not everyone thought that way. Then-congressman Mike Pompeo called for him to be tried as a traitor and receive the death penalty.

The thing is: he did break the law. He did reveal State secrets. He can be tried for espionage. So in the eyes of many citizens, that isn't just unethical— it is treason.

Did Edward Snowden Act Ethically?

Received 50-second standing ovation during LP2016 keynote before he started speaking


Contrast: Eric Holder had to promise that the US wouldn't seek the death penalty in a civilian trial

Moral Relativism   B_fullframe

This difference in opinion is the topic of moral relativism.

Descriptive moral relativism simply acknowledges that such differences do in fact exist. This is usually the academic viewpoint.

Meta-ethical moral relativism takes descriptive ethics a bit further and argues that "right" and "wrong", "good" and "bad", don't have any inherent meaning, because they are relative to the traditions and practices of individuals and groups of people. This directly contradicts those who believe in moral universalism— that there is some universal moral conduct that everyone should be able to agree on.



Moral Relativism

No Universal Code of Ethics

Consequentialism   B_fullframe

Consequentialists believe that the consequences of one's actions should be the subject of moral judgment, not the act in itself.

So Snowden and his supporters might treat the consequence of his actions— informing the public of unlawful abuse of power— as the subject of moral judgment. In this case, breaking the law was an acceptable and even necessary path to achieve that result. And so, consequently, it was morally acceptable.

You may hear this phrased as "the end justifies the means".

Now, despite all of these viewpoints, there are certain things that large parts of the world do recognize as unethical.


``The end justifies the means''

Human Rights

United States Declaration of Independence (4 July 1776)   B_frame

One of the most well-known sentences in the English language is the second sentence of the United States Declaration of Independence.

<recite it>

Removing the religious and gender biases, what this appears to be saying is that all people deserve these unalienable rights.

Yet during the 1858 Senate race between Lincoln and Douglas, Douglas argued that this sentence was referring to White men. Lincoln had a different interpretation— that this sentence was referring to the rights of all people. It is his interpretation that lives on today; it is his interpretation that we apply when we think of the Declaration of Independence.

We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty, and the pursuit of Happiness.

—United States Declaration of Independence

(emphasis mine)

Universal Declaration of Human Rights (1948)   B_frame

Original title: Coalition of Right and Wrong

Fast-forward nearly ninety years. World War II was over. The horrors committed by Nazi Germany caused the world to think a lot about the rights of people. A few years later, the United Nations General Assembly finished the Universal Declaration of Human Rights.

The first article states:

<read Article 1>

Article 12 is particularly relevant:

<read Article 12>

Privacy is one topic that is fairly well researched by many communities, and represented in various codes of ethics.

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

(emphasis mine)

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

(emphasis mine)


2018 ACM Code of Ethics and Professional Conduct   B_frame

The Association for Computing Machinery— known as the ACM— created a Code of Ethics and Professional Conduct back in 1992. It is just now being revised, and is still in draft status.\nocite{acm:ethics-draft-3} How many of you here knew that the ACM had a code of ethics?


Even back then, it contained a principle of respecting privacy.

Draft 3 acknowledges: <read portion of §1.6>.

I put the two versions— the original and Draft 3— up for comparison. It reads: <read portion of §1.7 from 1992>.

It's interesting seeing how it has changed. Collection of personal information is no longer unprecedented— it is the norm.

So what does the ACM recommend that we do about it?

<read next>

Okay, this seems fair.

  • Originally created in 1992
  • Now being revised, still a draft\nocite{acm:ethics-draft-3}

Computing and communication technology enables the collection and exchange of personal information on a scale unprecedented in the history of civilization.

§1.7, 1992 Code

(emphasis mine)

Technology enables the collection and exchange of personal information quickly, inexpensively, and often without the knowledge of the people affected.

§1.6, 2018 Draft 3

(emphasis mine)

Computing professionals should establish transparent policies and procedures that allow individuals to give informed consent to automatic data collection, review their personal data, correct inaccuracies, and, where appropriate, remove data.

§1.6, 2018 Draft 3

(emphasis mine)

2018 ACM Code of Ethics and Professional Conduct   B_frame

The Code of Ethics does cover a few other important points which we won't be getting into here. But I do want to highlight a couple sentences from two paragraphs: <read below>.

What does this mean exactly? What are "legitimate ends"? And what "rights" are they referring to? Rights under the law? The EU has more privacy rights under the law than the US does, for example.

It also mentions the "minimum amount of personal information necessary". We can argue what exactly "necessary" is, but let's illustrate the point by entering a world where this type of thing actually does happen, believe it or not. A context where these sentences do make sense.

Computing professionals should only use personal data for legitimate ends and without violating the rights of individuals and groups. […] Only the minimum amount of personal information necessary should be collected in a system.

§1.6, 2018 Draft 3

(emphasis mine)

HIPAA   B_frame

HIPAA! The Health Insurance Portability and Accountability Act of 1996.

The medical field already does this stuff. HIPAA does many things, but what we care about here is its provisions to protect patient health records. It defines "protected health information", or "PHI".

Individuals are permitted under the law to request their own records for inspection, and healthcare providers have thirty days to fulfill that request. The individual can correct information that is wrong.

HIPAA further restricts how PHI can be shared. Outside of certain defined cases, sharing requires explicit written authorization from the patient. And in any case, only the minimum amount of information necessary to provide the service can be shared.

  • <1-> Health Insurance Portability and Accountability Act of 1996
  • <1-> Defines Protected Health Information (PHI)
  • <2-> Can request own records for inspection
  • <2-> Can correct information that is wrong
  • <3-> Requires written consent for sharing PHI outside certain parties
  • <3-> Must disclose minimum amount of PHI necessary to provide service

When Is Data Collection Okay?   B_fullframe

So let's use that highly subjective term ``good''. Is HIPAA ``good''? Overall, it seems like it might be a pretty decent law with respect to patient privacy, for the aforementioned reasons.

So what is ``good''?

Recall that meta-ethical moral relativism holds that nobody is objectively ``right'' or ``wrong'', ``good'' or ``bad''. So we're just going to derive something within the context of this talk.

Let's consider a few more examples.

TransUnion's fraud detection system, which uses all of these data from many different sources. Is that ``good''? Well, for people who want to detect fraud, perhaps it is. And to detect fraud accurately, you need a lot of data. In the words of the ACM, is that ``legitimate''? Those data are used to provide a useful service.

But these data brokers aggregate swaths of data without the user ever being informed of the fact that it is happening. And the user can't inspect the data. Or correct it. Or opt out and delete it. And the sole purpose of data brokers' existence is to repurpose and resell your data; the user will never be able to consent to something when that ``something'' can be anything! Is that ``good''?

Would you say this is more or less ``good'' than HIPAA?

Let's consider another example.

Late last year, security researchers found that BLU Android phones— a popular cheap brand that serves advertisements— called home with contacts, IMSI numbers, text messages, telephone numbers, call history, and more. All of this without any consent. Researchers found this by accident— nobody knew this was happening!

Is this better or worse th— no, you know what? Never mind. This is ``bad''. There's no ``good'' here.

Is HIPAA ``Good''?

What Is ``Good''?

Is This ``Good''?

More Or Less ``Good'' Than HIPAA?



Is This ``Good''?

This Is ``Bad''



Universal Declaration of Human Rights (Privacy)   B_frame

Remember Article 12 of the Universal Declaration of Human Rights? <read first sentence up to "correspondence">

No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

(emphasis mine)

Privacy Is A Human Rights Issue   B_fullframe

Privacy is a human rights issue!

Privacy Is A Human Rights Issue

Introducing Personally Identifiable Information   B_fullframe

From a technical perspective, what is at the core of the privacy problem?

With HIPAA, we saw PHI. If we generalize that a bit further, we get PII— Personally Identifiable Information. This is the term you'll see used frequently in information security.

Personally Identifiable Information (PII)

Personally Identifiable Information (PII)   B_frame rmc

NIST is the National Institute of Standards and Technology in the United States. NIST Special Publication 800-122 defines PII as: <read it>.

This "linked" and "linkable" terminology can be subtle and confusing, and I unfortunately don't have time to provide examples. But in a nutshell, linked data is information that is logically associated with other information about an individual. Linkable data has the possibility for such an association to be made.

[…] any information about an individual maintained by an agency, including (1) any information that can be used to distinguish or trace an individual's identity, such as name, social security number, date and place of birth, mother's maiden name, or biometric records; and (2) any other information that is linked or linkable to an individual, such as medical, educational, financial, and employment information.

NIST SP 800-122\nocite{nist:sp-800-122}

(emphasis mine)

  • Linked—logically associated with other information about the individual
  • Linkable—possibility of such an association
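To make the distinction concrete, here is a toy sketch (every record and name below is made up for illustration): two datasets that each seem harmless on their own become linked the moment they share quasi-identifiers.

```python
# Toy illustration with hypothetical records: "linkable" data becomes
# "linked" once two datasets share quasi-identifiers such as ZIP code
# and date of birth.
medical = [  # de-identified: no names attached
    {"zip": "02139", "dob": "1984-07-01", "diagnosis": "flu"},
]
voter_roll = [  # public record: names attached
    {"zip": "02139", "dob": "1984-07-01", "name": "Alice Example"},
]

def link(anon_rows, named_rows, keys=("zip", "dob")):
    """Join rows whose quasi-identifiers match, re-identifying them."""
    index = {tuple(r[k] for k in keys): r for r in named_rows}
    linked = []
    for anon in anon_rows:
        key = tuple(anon[k] for k in keys)
        if key in index:
            # The "anonymous" row now carries a name.
            linked.append({**anon, "name": index[key]["name"]})
    return linked

print(link(medical, voter_roll))
```

A single matching row is enough to attach a name to a diagnosis, which is exactly why linkable data deserves the same care as linked data.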

Information Security Well Researched   B_fullframe

Unlike other topics related to morality, the nice thing about privacy is that it can be analyzed based on facts, not opinions. That isn't to say that there aren't opinions. But once we have defined what PII is, it is a matter of fact whether some action mishandles PII and thereby violates privacy.

So we can look toward best practices in information security for strong guidance in developing a code of ethics for privacy.

Best Security Practices Can Help to Guide Code of Ethics for Privacy

Organisation for Economic Co-operation and Development (OECD)   B_frame

The Organisation for Economic Co-operation and Development is an intergovernmental economic organization with 35 member countries. In 1980, they adopted Guidelines on the Protection of Privacy and Transborder Flows of Personal Data. Mouthful.

This framework is referenced both in US federal guidance and internationally. It also served as a foundation for the EU's Data Protection Directive. The European Union is known to have strong data protection laws— much stronger than the United States. Many of the notable privacy cases in recent news have come out of the EU, like Facebook's tracking of users across the web. I mentioned that problem in last year's talk.

So let's take a look at some of those guidelines.

  • Established in 1961
  • 35 member countries
  • /Guidelines on the Protection of Privacy and Transborder Flows of Personal Data/\nocite{nist:sp-800-122}, adopted 1980

    • Referenced internationally
    • A foundation for the EU's Data Protection Directive

OECD Guidelines   B_frame

Just note that they use the term "personal data" instead of PII; some consider it a broader term.

<read them, inserting extra explanation as needed>

There are other government guidelines around the world with similar guidance, but they largely restate these principles.

Left   B_column
  • <1-> Collection Limitation
  • <2-> Data Quality
  • <3-> Purpose Specification
  • <4-> Use Limitation
Right   B_column
  • <5-> Security Safeguards
  • <6-> Openness
  • <7-> Individual Participation
  • <8-> Accountability
Notes   B_block

Limit PII collection; obtain lawfully and by fair means, with knowledge or consent of data subject

PII relevant to purposes for which they are used; accurate, complete, up-to-date

Purposes specified before or at collection; only used for stated purposes

PII should not be disclosed or used for unspecified purposes, except with consent or authority of law

PII reasonably protected against unauthorized access, destruction, use, modification, or disclosure

Policy of openness about developments, practices, and policies with respect to PII; establish existence and nature of PII

Right to obtain one's data in a reasonable and intelligible manner; challenge denials; challenge data and have them erased or amended

Data controller should be accountable for complying with measures that give effect to these principles

Framework Code of Ethics: Transparency   B_frame

For the most part, these principles are fairly solid.

Let's start with our framework code of ethics.

First, we need <read it>.

Transparency isn't useful if a user doesn't know that the information exists, or can't understand it. Privacy policies, for example, are notoriously difficult to understand.

Machine learning is a big issue. Users have the right to know not only the data about them that was collected, but also what is being inferred about them.

I use the term "transfer" rather than "distribution" or "dissemination" because I want it to cover another important topic: data compromise. It's important that users know all parties that have their data, including parties that weren't supposed to have it at all.

Transparency in data collection; transfer; use; and methodology, with a clear and fair procedure to inspect and amend those data, both raw and derived

  • <2-> User must be made aware in an apparent and intelligible manner

    • Even for non-PII
  • <2-> Must be transparent with algorithms used for data processing
  • <2-> Compromise of data by an attacker counts as a ``transfer''

Framework Code of Ethics: Consent   B_frame

Once a user is aware of what he or she would be consenting to, we should require <read it>. Collection of PII must always be consented to by the user in some way.

If the user explicitly enters PII, say to get an insurance quote on a website, then that counts as consent, since clearly the user knows that PII is being provided.

If any data— PII or not— is being sent to a third party, the user ought to explicitly consent.

Explicit consent to collection, transfer, and use of both PII and any data not offered by the user

  • PII must always be consented
  • Data explicitly entered by user is consented to first party
  • Any data transferred to third parties must be consented

Solid Principles, So Why Not Follow?   B_fullframe

I'd imagine that pretty much any individual would want their data handled at least in this manner, as a baseline.

Yet, that's not what we see from private businesses. We often see quite the opposite. Why is that?

Why Don't All Businesses Follow These Guidelines?

Surveillance Capitalism   B_fullframe

Because you're lucrative. You are a product to be sold. And collectively, we are worth a lot of money.

You may have heard the term "surveillance capitalism".

Companies try to extract as much information out of you as possible using increasingly invasive means, much of which I covered last year. There is a move toward providing a more "personal" or "relevant" customer experience to hide some of the surveillance, or to make data collection a necessity for some service. Or at least make you think that it is.

There's another consequence. This more "relevant" experience tailors search results, news articles, and all sorts of content to you based on your opinions, beliefs, race, religion, age, gender identification, etc.

Surveillance Capitalism


``More Relevant Customer Experience''

Strong Influence Over Your Opinions and Actions

Universal Declaration of Human Rights: Opinion   B_frame

Let's go back to the Universal Declaration of Human Rights.

Article 19 states: <read it>.

Personalized services compromise this. And it's not just organizations like Facebook and Cambridge Analytica. Ad networks are everywhere on the web. Data are being collected everywhere you go. If you are researching a cold and find advertisements for cold medication on another website, that is no coincidence. You aren't being paranoid.

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

(emphasis mine)


You Can, But Should You?   B_frame

Many businesses think that, just because they're following a law or regulation, they must be acting ethically. But the law is just a baseline. The law may even be completely misguided or unethical to some; remember the mention of moral relativism earlier.

Most of the people in this room probably have strong feelings against the Digital Millennium Copyright Act, for example.

And this raises an interesting problem with the guidance we just talked about. The user should be made aware of the purpose of the data collection. But what many users don't understand is whether or not the data collection is actually necessary. The technical need might be arbitrary! This is where the term "legitimate" in the ACM code of ethics falls short.

  • <1-> ``We're following the law, so we must be ethical''

    • The law is a baseline
    • It may even be completely misguided or unethical to some (moral relativism)
  • <2-> You may be collecting data ``for'' the declared purpose, but do you really need it?

    • Is there actually a technical need?
  • <2-> ``Legitimate'' in ACM Code of Ethics falls short

Those Who Control

You Can, But Should You? Example: GPS   B_fullframe rmc

How many people here think it's possible to use the Global Positioning System anonymously? For example, the GPS receiver in your mobile device or your car.

<wait, react>

There have been so many privacy issues surrounding GPS that people just assume that it's synonymous with surveillance. That's not true. GPS only broadcasts data. The GPS system has no idea who is using it— it is always broadcasting for anyone who wishes to receive it.

So when a program uses GPS to provide location-aware features, it doesn't necessarily have to call home with it. There's no reason why map software can't operate without network access, for example, if you pre-download map data. In fact—some do.

Example: \medsubskip Can You Use GPS Anonymously?

GPS Only Broadcasts

Left   B_column


Right   B_column


Bottom   B_ignoreheading
  • <2> Even some GPS mapping programs can work just fine without network access (e.g. OsmAnd)\nocite{osmand}
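To ground the point that GPS is receive-only: a consumer GPS receiver emits NMEA 0183 sentences locally (typically over a serial port), and decoding a position fix requires no network connection at all. A minimal sketch, using a made-up example sentence (real sentences also carry a checksum, omitted here):

```python
def parse_gga(sentence: str):
    """Extract (lat, lon) in decimal degrees from an NMEA $GPGGA sentence."""
    fields = sentence.split(",")
    if not fields[0].endswith("GGA"):
        raise ValueError("not a GGA position-fix sentence")

    def to_degrees(value: str, hemisphere: str) -> float:
        # NMEA packs coordinates as ddmm.mmmm (degrees, then minutes).
        degrees, minutes = divmod(float(value), 100.0)
        decimal = degrees + minutes / 60.0
        return -decimal if hemisphere in ("S", "W") else decimal

    return to_degrees(fields[2], fields[3]), to_degrees(fields[4], fields[5])

# Made-up fix; no bytes leave the machine to compute this.
lat, lon = parse_gga("$GPGGA,123519,4217.0500,N,07105.2000,W,1,08,0.9,10.0,M,,,")
print(round(lat, 4), round(lon, 4))  # → 42.2842 -71.0867
```

The surveillance only enters the picture when software takes that locally computed fix and phones home with it.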

Software Cannot Be Trusted   B_fullframe

The privacy threat is the software. Security experts caution against turning GPS on because the software on your device usually cannot be trusted!

So when you see headlines like this one: <read it>, the problem isn't GPS, it's the individual program. The people writing this program are to blame.

Software Often Betrays Users


We watch how you drive from home to the movies. We watch where you go afterwards.

—Mitch Lowe, MoviePass CEO

No Transparency Without Source Code   B_fullframe

I was just talking about transparency. The ACM Code of Ethics mentioned it. The OECD guidelines called it "openness".

We can tell the user what we want them to know. But there's only one way for anyone to truly know what a program is doing, and what data it is collecting.

What was that about transparency and consent?

Programs That Keep Secrets Aren't Transparent or Safe   B_frame

And the only way to know is to have access to the source code so that you, or someone else who knows what they're looking at, can inspect it.

But that's not enough to know what a program is doing. Just because you have source code doesn't mean that it actually represents the same software that is running on your system. To verify that, you have to be able to compile the software yourself.

As inconvenient a truth as it may be for some, the only reason to ever keep source code from the user is to keep a secret. That secret may be something malicious like spying on the user, it may be a trade secret, or maybe the developer is just embarrassed by the code— but those are all secrets nonetheless.

  • <1-> True transparency and consent requires ability to inspect source code
  • <1-> Users must be able to compile the code to have confidence that it actually represents the program being run

The only reason to hide source code is to keep secrets from the user!
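One way to picture that verification step, as a sketch only (the file names and bytes below are stand-ins, and real verification requires a reproducible build process with pinned toolchains and flags): build the program yourself, then compare cryptographic digests against the binary you were shipped.

```python
import hashlib
import os
import tempfile

def sha256_of(path: str) -> str:
    """Digest a file; identical digests mean bit-for-bit identical binaries."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-ins for a binary you compiled yourself and the one the vendor ships.
# Under a reproducible build, the two would be bit-for-bit identical.
with tempfile.TemporaryDirectory() as d:
    local = os.path.join(d, "myapp-built-from-source")
    vendor = os.path.join(d, "myapp-vendor")
    build_output = b"\x7fELF...deterministic build output..."
    for path in (local, vendor):
        with open(path, "wb") as f:
            f.write(build_output)
    print(sha256_of(local) == sha256_of(vendor))  # → True
```

When the digests match, the published source is credible evidence of what the program you are actually running does; when they don't, you are back to trusting the vendor's word.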

Keeping Secrets Means Keeping Control   B_fullframe

And keeping secrets is the only way for us to keep control over you.

Remember, you are the product. If you could get wise by inspecting the program, you could fight back. If you had the source code and could compile it yourself, that means you could also modify it. You could remove those antifeatures. You would then be in control. How would we turn you into a commodity if you were in control?

Keeping Secrets ≡ Keeping Control

  • Ability to build from source gives the user the ability to modify the program and reclaim control

Universal Declaration of Human Rights   B_frame

Remember the Universal Declaration of Human Rights from earlier? Article 1 stated that <read again>.

Is it dignifying to have your privacy stolen from you? Is all of this acting in the spirit of brotherhood?

All human beings are born free and equal in dignity and rights. They are endowed with reason and conscience and should act towards one another in a spirit of brotherhood.

(emphasis mine)

  • <2-> Is it dignifying to have your privacy stolen from you?
  • <2-> Has everything covered been in the spirit of brotherhood?

Universal Declaration of Human Rights: Liberty   B_frame

Let's take a look at articles 3 and 4: <read them, 4 up until "servitude">.

The point I made at the beginning of this talk was that we are everywhere.

Everyone has the right to life, liberty and security of person.

(emphasis mine)

No one shall be held in slavery or servitude; slavery and the slave trade shall be prohibited in all their forms.

(emphasis mine)

No Servitude   B_fullframe

If we do not act properly, then by default, we hold the user in servitude to us. We hold the power over the user. We hold power over one-another.

The User Is Held In Servitude

Philosophy of Control   B_frame

There is this philosophy that the user is a subject to be controlled. And I don't think most people really think about it.

When we write software, we ask ourselves certain questions. Like "what should we allow the user to do"? I'm not talking about security. I mean, "what should we as developers allow the user to do with our software".

But instead we should be asking ourselves "What should we empower the user to do"?

Rather than wondering how to turn the user into a commodity like we've seen, we should ask how we should build mutual relationships with them.

Rather than trying to create vendor lock-in to keep users around, ask yourself how to earn the respect of users so that they come back under their own free will! Imagine that.

Rather than worrying about capitalizing on everything, let's learn how to socialize. Act in a spirit of brotherhood.

Don't Ask   B_column

Don't Ask

  • <2-> What should we allow the user to do?
  • <4-> How should we commoditize the user?
  • <5-> How do we lock in the user?
  • <6-> How do we capitalize?
Do Ask   B_column

Do Ask

  • <3-> What should we empower the user to do?
  • <4-> How should we build mutual relationships with the user?
  • <5-> How do we earn the respect of the user?
  • <6-> How do we socialize?
  • <6> How do we act in a spirit of brotherhood?

User Freedom Is Software Freedom   B_fullframe

Because we are everywhere, because the life of the user is so tied to software, we have no choice but to conclude that:

User freedom cannot be had without software freedom. They are tightly coupled.

And since software freedom is tightly coupled with user freedom, and since freedom is a human right, I argue that software freedom, too, is a human rights issue!

User Freedom ≡ Software Freedom

Software Freedom Is A Human Rights Issue

Moral Imperative   B_fullframe

Software freedom defines a type of moral imperative. From the perspective of those who follow the free software philosophy, software that is non-free or proprietary is, simply, unethical.

A moral imperative is a type of categorical imperative in the deontological moral philosophy of Immanuel Kant, who defines the imperative as: <read it>.

Moral Imperative

Categorical Imperative

Act as if the maxims of your action were to become through your will a universal law of nature.

Immanuel Kant\nocite{kant:meta-morals}

What About Moral Relativism?   B_fullframe

Throughout this talk, I've been introducing moral philosophies that aren't always compatible. This is intentional, since "we" don't all share the same philosophies.

There are three types of moral relativism, and we only went over two. If you recall, descriptive relativism states simply that people have disagreements about what is ethical, and meta-ethical relativism states that nobody is objectively right or wrong. The last, "normative" relativism, goes a step further. It holds that <read quote>.

I don't believe that a universal code of ethics can exist. But I also don't believe we should simply tolerate others who do something we consider to be immoral. We should fight for what we think is right. But we won't always agree universally. And that's okay.

And why is it okay? Because that's a human right— freedom of opinion and expression.

I may wish for a universal morality of software freedom, but I recognize that such a wish is logically unattainable.

What About Moral Relativism?

Normative Moral Relativism

We should fight for what we think is right!

But we won't always agree universally.

Holds that, because nobody is right or wrong, we ought to tolerate the behavior of others even when we disagree about the morality of it\nocite{w:moral-relativism}

Everyone has the right to freedom of opinion and expression; this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers.

(emphasis mine)

Framework Code of Ethics: Serve the User   B_frame

But I'm going to try anyway! And it's a simple statement: serve the user, not oneself.

Serve the user, not oneself

A Moral Speedbump

A Moral Foundation: The Four Freedoms   B_frame

It's very possible that some of us in this room disagree with my statement that free software is a moral imperative. And that's because we have two very close and overlapping communities that both create free software, but diverge wildly on the principles.

These are the "four freedoms"— the moral foundation for software freedom. These define the imperative.

Some found that these freedoms have a corollary: that developing software in freedom sometimes produces higher-quality software than proprietary models do. They coined the term ``open source'' for this development model.

The problem is… they dropped the moral foundation from which it originated so that they could advocate the development model to businesses.

That itself is a red flag. Businesses are turned off by issues of morality?

Four Freedoms   B_column
  1. Run program for any purpose
  2. Study and modify to suit your needs
  3. Share with others
  4. Share changes with others
Corollary   B_column


``Open Source''

Development model for creating potentially higher-quality software

Why Is ``Open Source'' Popular?   B_frame

"Open source" is popular. It is widely encouraged in many software communities, and has even made its way into proprietary ones where we never would have expected, like the walls of Microsoft!

But what are the reasons?

Well, foremost, it is a development model that claims to produce software that is of superior quality to proprietary software. You may have heard the phrase from Eric Raymond: ``given enough eyeballs, all bugs are shallow''. Except that's not necessarily true.

Some people like ``open source'' because other people will fix bugs for them.

Some people do it just to fit in with the crowd. Or because it looks good on a résumé, or to attract talented candidates to their business. Some people do it because it feels good to give back.

  • <1-> ``Given enough eyeballs, all bugs are shallow'' (Eric S. Raymond, ``Linus's Law'')

    • A successful development model
    • But it's not always true
  • <2-> Other people can fix bugs for me
  • <3-> Everyone else is doing it!
  • <3-> Looks good on a résumé / recognition
  • <3-> Attract talent to business
  • <3-> Feels good to give back

Open Source Misses the Point   B_fullframe

But we often say that open source misses the point of free software.

When someone finds that there is a proprietary program that works better for them, they'll use that instead. The free software philosophy argues, however, that a free program is always superior, because it respects the user's freedoms.

Open Source Misses the Point


Perpetuating An Ethics Void   B_fullframe

This talk has been about an ethics void. A lack of discussion about morality. And the guiding light for providing that morality is software freedom.

But when we talk about "open source", we're confounding the situation, because we're talking about software freedom without the moral aspects. It's detrimental. It perpetuates the void. Some in the open source community are even hostile toward software freedom.

As two communities that deeply overlap— both creating free software— we want to be able to get along. And we largely do.

But "open source" rebranded the corollary and left the moral foundation behind. Why should we be surprised, then, when we don't talk about ethics in software, when the two most popular models— proprietary and open source— avoid it?

Now, to be clear: open source is not a scapegoat for this talk; don't walk away thinking I said that it is.

Open Source Perpetuates the Void

Conformity Bias / ``Groupthink''   B_frame

Here's a question: Which of these three lines is as long as the first?

This isn't a trick question.

A psychologist found that, when he asked subjects to answer a question like this one, but put them in a group that gave obviously incorrect answers, many people became uncomfortable giving the correct answer, or even purposefully gave an obviously incorrect answer just to fit in with the group.

\hfill ===================================

Which line is as long as the first?

(1) \hfill =================================

(2) \hfill ===================================

(3) \hfill ==============================

Solomon Asch, ``Opinions and Social Pressure''

Follow the Leader   B_fullframe

People follow their community and their leaders. That should come as no surprise.

So when we have people actively working against the free software community, we have a problem. Tom Preston-Werner, one of the three founders of GitHub, wrote an often-cited post entitled ``Open Source (Almost) Everything'', in which he described all the valuable ways to exploit people to do your bidding, and told people not to liberate anything of actual business value.

As long as we have people saying things like that, and as long as we have people encouraging the use of permissive licenses that allow others to violate users' freedoms, and encouraging collaboration on sites like GitHub, which discourages good software practices and is itself proprietary, we are fighting an uphill battle almost from within.

People Follow Their Community and Leaders

Dont open source anything that represents core business value.

Tom Preston-Werner, GitHub Founder

``Open Source (Almost) Everything''\nocite{os-almost-everything}

Misjudging Oneself   B_fullframe

Let's take a step back from open source. Let's look at the lack of moral guidance as a whole.

Some studies have found that 92% of Americans are satisfied with their own moral character. Further, 75–80% think they're more ethical than their peers.

Yet despite this, many people don't think about ethics in software despite moral issues staring them in the face. So what's going on?

92% Americans Satisfied With Own Moral Character\cite{jlse:behavioral-ethics}

75–80% Think They're More Ethical Than Peers

Moral Clarity   B_fullframe

Moral myopia is a term used in behavioral ethics. It describes a distortion of "moral vision" that makes it difficult for ethical issues to come into focus, and it's reinforced by rationalizations. I used the example earlier of "if it's legal, it must be moral".

Maybe you recognize the value in free software, but don't see a problem with keeping the good stuff proprietary because you did a good deed by liberating some of your code. Maybe you think that pervasive online tracking is wrong, yet you use Google Analytics and Facebook "like" buttons on your own website because you don't see that your actions are contributing to the larger problem.

Another concept: Ethical fading is when people focus on other aspects of a decision, like profitability, and don't see the ethical issue. Maybe saying, "we're not spying on you, we're just gathering detailed usage statistics".

Let's further that: Moral disengagement creates an almost alternate reality to rationalize bad decisions. For example, "we didn't violate our consent decree, it was just a bad actor".

TODO: images of examples

Moral Myopia

Difficult for ethical issues to come into focus

Ethical Fading

Distancing self from unethical implications

Moral Disengagement

Creating another reality to rationalize actions

Judged By Inaction   B_fullframe

We need to stop making excuses for ourselves.

Don't be judged by your inaction.

Consequentialism also holds that inaction is judged no differently than an explicit action, because both may result in the same consequence.

IoT security is another example of harmful inaction.

Don't Be Judged By Your Inaction

Framework Code of Ethics: Be Mindful   B_frame

Inaction is sometimes due to a lack of care. With respect to the other principles in this framework code of ethics:


I ask that we keep up with events, learn from them, and adapt. And that businesses actually put money into educating their employees and securing their products and services. Make consideration of ethics part of your development process. And always ask yourself, "am I behaving ethically?"

Be mindful of issues that give rise to consequences in violation of these principles and act in good faith to mitigate those issues

  • Continuous education (self and corporate)
  • Make ethics part of your development process
  • Ask yourself: ``Am I behaving ethically?''

Framework Code of Ethics: Empower Others, Recursively   B_frame

And shouldn't we help others to achieve that very same goal? <read>

Don't just teach others about these topics— encourage them to in turn teach others. If I've talked about issues that are important to you, issues that concern you, then advocate for change!

Impart your knowledge, skills, and experience to empower others, recursively.

  • Teach others how to apply these principles
  • Teach others how to teach others
  • Advocate for what is important to you


Framework Code of Ethics   B_frame

This framework code of ethics, as I've called it, is not intended to be used as-is, and is certainly not comprehensive. Its purpose is to serve as something concrete to take away from this talk. To provoke thought. To start a discussion.

There is no universal code. But maybe enough of us can find something compelling enough to agree on.

  1. Serve the user, not oneself
  2. Transparency in data collection, transfer, use, and methodology, with a clear and fair procedure to inspect and amend those data, both raw and derived
  3. Explicit consent to collection, transfer, and use of both PII and data not offered by the user
  4. Be mindful of issues that give rise to consequences in violation of these principles and act in good faith to mitigate those issues
  5. Impart your knowledge, skills, and experience to empower others, recursively

Pragmatic Ethics   B_fullframe

Times are changing. We see users becoming increasingly uncomfortable. We see lawmakers increasingly attentive.

Pragmatic Ethics is a theory arguing that it is society, not the individual, that achieves morality: that society and its norms evolve as a result of inquiry, and that what is considered moral in one age may not be in the next. We can help to guide that direction.

Pragmatic Ethics

Societal norms and morals evolve as a result of inquiry

We, You   B_fullframe

That collective "we" that I declared at the beginning of this talk? The truth is that I bundled everyone together to give a sense of moral insecurity and urgency. "We" are not all the same. Here at this conference, many of us are free software advocates and activists. As members of the free software community, it is our responsibility to provide moral guidance to others. To connect with other communities.

Other fields have ethics built into their curricula. Health, law, even business. But I rarely hear of developers having been educated in technology ethics. If you are an educator, please, fight to incorporate these ethical issues into your curriculum.

It only takes one voice within a community or organization to start a conversation and change how things are run. Let that voice be you.


Free Software Advocates



Thank You   B_fullframe

Thank you.

Mike Gerwitz



Slides Available Online



More Information: The Surreptitious Assault on Privacy, Security, and Freedom



Licensed under the Creative Commons Attribution ShareAlike 4.0 International License

References   B_appendix