Pursuing Digital Ethics: How Not To Mess Up with Technology

Do you worry that the tech industry has forgotten to pay attention to the ethical consequences of technology decisions? Be reassured: some people are still thinking seriously about doing what’s right.

A recurring theme at the Gartner Symposium this week is extra attention to the human element in computing, despite – or maybe because of – analyst predictions that one in three jobs will be taken by software or robots by 2025. One way or another, CIOs and other IT leaders are being urged to think about the human consequences of their use of technology, whether it’s Steve Wozniak discussing maker culture or the responsibility of digital leadership. As a dyed-in-the-batik hippie, I certainly don’t object to the Digital Humanist approach (and as buzzwords go, I like this one).

One session during the conference delved into this “food for thought” material in depth, and it was among the most enjoyable 45 minutes I spent at the week-long conference. Frank Buytendijk, a Gartner Research Vice President in the area of Information Management, encouraged the session participants (and by extension the tech industry) to learn the art of ethical discussion in business. As he pointed out, the only areas in which we freely discuss ethics are politics and religion – the two topics we are told not to talk about at work.

Yet so many of the decisions we make, on the large scale and small scale, are about doing what is right. What’s good for our companies, for ourselves, for the world, for the future? Buytendijk pointed out, “Every artifact is an instantiation of the morals of its design,” and discussed digital ethics principles that we can use to evaluate business and technology practices.

“Men have become the tools of their tools.” – Henry David Thoreau

Of course, doing the right thing means that someone has to decide what “right” is, and in 15 minutes Buytendijk did an entertaining fast-forward through the history of philosophy, from Utilitarianism (the greatest good for the greatest number of people) to Liberalism (the natural right to life, liberty, and property, as espoused by John Locke) to Communitarianism (emphasizing community self-regulation). The participants were asked to critique each viewpoint for its efficacy in making life better; as Buytendijk pointed out, the exercise demonstrated that no matter what philosophical idea we come up with, someone else surely thought of it first, and it probably has some usefulness in the right context.

This is not meant to dissuade us from thinking about such things, but rather to remind us to respect others’ opinions. When discussing digital ethics, he urged, pay attention to understanding your own reasons for believing as you do, and to recognizing the other point of view. Take it all in, Buytendijk said; solutions are better when they take all perspectives into account.

That sounds very high-falutin’, like late-night college dorm conversations (especially if your roommate was a philosophy major, as mine was). Buytendijk brought it down to earth with two apt examples: autonomous cars and the trolley dilemma (since an autonomous car will need an algorithm to decide which option to take), and a longer discussion of the ethics of data sharing.
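To see how the trolley dilemma turns into engineering, consider a tiny, purely hypothetical sketch in Python; the option names, harm numbers, and scoring rule below are all invented for illustration. Whatever an engineer writes in that scoring function is, as Buytendijk put it, an instantiation of the morals of its design.

    # Hypothetical sketch: an autonomous car must pick among bad options.
    # The "ethics" lives entirely in how harm is scored -- a design decision.

    def utilitarian_score(option):
        """Lower is better: total expected harm across everyone affected."""
        return sum(person["expected_harm"] for person in option["affected"])

    def choose_action(options):
        """A utilitarian rule: pick the option with the least total expected harm."""
        return min(options, key=utilitarian_score)

    # Invented example data: braking straight vs. swerving, different people at risk.
    options = [
        {"name": "brake_straight", "affected": [{"who": "pedestrian", "expected_harm": 0.8}]},
        {"name": "swerve_left", "affected": [{"who": "passenger", "expected_harm": 0.3},
                                             {"who": "cyclist", "expected_harm": 0.2}]},
    ]
    print(choose_action(options)["name"])  # the chosen framework, not the code, made the call

Swap utilitarian_score for a rule that, say, always protects the passenger, and the very same program makes a different moral choice.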

Case in point: the TomTom navigation system. The company was the first to offer bidirectional traffic information in Europe, Buytendijk explained. For a few euros each month, you got traffic information (where accidents were slowing traffic down, plus automatic re-routing) and in return shared your own data with the company: your device would report, say, that you were crawling along at 10 km/h, and TomTom aggregated those reports into the traffic data it broadcast, which was more useful than relying on traffic reports alone. So far, so good: it gave the company subscription revenue, turned customer value into innovation, and (since the shared data grew more useful as the subscriber base grew) let the company’s scale dominate the navigation market.
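For flavor, here’s a rough sketch of how that kind of crowd-sourced feed might work. It is not TomTom’s actual pipeline; the road segments, speeds, and congestion threshold are made up for the example.

    # Rough sketch (not TomTom's real pipeline): aggregate anonymous speed
    # reports per road segment and flag the segments that look congested.
    from collections import defaultdict
    from statistics import mean

    # (road_segment, reported_speed_kmh) -- invented sample data
    reports = [
        ("A10-exit-3", 12), ("A10-exit-3", 9), ("A10-exit-3", 15),
        ("E19-north", 95), ("E19-north", 102),
    ]

    CONGESTION_THRESHOLD_KMH = 30  # assumed cutoff for "this looks like a jam"

    speeds = defaultdict(list)
    for segment, speed in reports:
        speeds[segment].append(speed)

    traffic_feed = {segment: mean(values) for segment, values in speeds.items()}
    jams = [segment for segment, avg in traffic_feed.items() if avg < CONGESTION_THRESHOLD_KMH]
    print(jams)  # broadcast these segments to subscribers for re-routing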

The terms and conditions to which subscribers agreed permitted the company both to collect the data and to sell it. Among the first to purchase the data (which was anonymized; this was not a personal-privacy issue) was the road works department. That let the people who design the highways work with richer geographic data and do better analysis, presumably in the direction of making travel faster and safer. So far, so good.

But another organization that bought the traffic data was the police department. Instead of sorting it to find the slowest average speeds, they looked at the areas where people drove fastest – and set up speed traps in those locations. Whatever you think of speed traps (safer driving? revenue generation?), this use would also set a precedent for private data being used by public services.
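That’s the uncomfortable part: once the aggregated feed exists, the opposite question is just as easy to ask. A minimal sketch, again with invented numbers, makes the point:

    # The same (hypothetical) aggregated feed; only the question changes.
    traffic_feed = {"A10-exit-3": 12.0, "E19-north": 98.5, "N2-bridge": 128.0}

    # Road works question: where is traffic slowest? (improve those roads)
    slowest = min(traffic_feed, key=traffic_feed.get)

    # Police question: where do people drive fastest? (put the speed traps there)
    fastest = max(traffic_feed, key=traffic_feed.get)

    print(slowest, fastest)  # identical data, very different purposes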

This did not end well for the company’s reputation, and it was, at its core, a matter of applied digital ethics.

So, how can you judge whether you’re headed in the right direction? And whether what’s good for the company is good for the world at large? Buytendijk’s working guidelines for digital ethics for innovators and business IT include:

  • Taking responsibility for unintended consequences
  • Using data for its intended purposes
  • Seeking value for all
  • Not mistaking pattern for reality
  • Applying the Golden Rule

In this case, TomTom broke the second rule: the data was used in an unapproved fashion. If I pay the company to share my traffic speed with the aim of improving trip navigation, I should not discover that the data is being used for a wholly different purpose. Selling the data to the road works folks, who are on the same path (so to speak) of making my trips faster and better, was fine. Selling it to the police, who don’t always have the same goal, was not.

This is only one aspect of the need to consider digital ethics in business and technology decisions, especially as we are on the cusp of so many life-changing innovations. “‘Ethics as a service’ is going to affect us over the next 10 years,” Buytendijk said. Either way, it’s up to us to consider the results of our actions.


Esther Schindler

Esther Schindler, Druva's editor, has been writing for the tech press since 1992 and has been an editor at industry publications since the late ’90s. Her name is on the cover of about a dozen books, most recently The Complete Idiot's Guide to Twitter Marketing.

Esther quilts (with enthusiasm if little skill), is a top Amazon reviewer, and is an avid foodie. She works from her home in Scottsdale, Arizona, with one of two cats on her lap.

1 Comment

  1. Judy Shapiro 2 years ago

    Hi – Great post. My background with Bell Labs and security gave me a deep sensitivity to this issue. Here is an Ad Age article I wrote in 2009 raising many of the same issues: “In Web 3.0 We Trust — or Not. Why We Need a Return to the Human Side of Things.”

    In a burst of frustration, I launched a venture to solve these issues; our code name for it was The Trust Web: the web you’ll trust is the web you create for yourself.

    A couple of years on, we launched our alpha network. We were just named a finalist at ad:tech in NY, presenting our approach. Fingers crossed.
