Many commentators have highlighted that one of the factors that put Mr. Trump in the White House was concern over eroding jobs in the heartland. A large source of that job disappearance has been rapid technological change, so today I’d like to explore areas where the technology industry needs to start developing a code for embedding more ethical practices into our offerings. This is an industry-wide call for more ethical technology.
Tech and Politics
At the risk of shocking many of my readers, let me state that, to date, there has been a lack of involvement from the tech community in the world of politics. Over the years, a large political divide seems to have opened up between the culture of Silicon Valley and the culture of other technology centers. A large portion of that divide is due to the isolated nature of Silicon Valley, where tech people spend time with other tech people and driving is the main way to get from point A to point B.
I’ve discussed this phenomenon in the past, but it is worth revisiting, as the disease has only spread wider. When one creates a community of like-minded individuals, the kind of thinking that arises from that community pushes unpopular ideas out of the way.
For Silicon Valley, the basic truth of politics has broken along two paths: On one side is the idea that technology will solve all ills and regulations are largely a constraint to be routed around (the corollary being that getting involved in politics or policy is inefficient and thus should be avoided); On the other side is the idea that large companies can leverage policy to either block competitors or gain some other advantage (the corollary being that involvement in politics is a game best left to the bigger companies).
Outside of Silicon Valley, in places like Boston, New York, or Seattle, the lack of dominance from the tech industry has meant that companies born or run in those cities tend to be managed by individuals with more civic mindfulness.
Both of those views are rough archetypes, but they are important to understand because they dictate the kind of companies that are built. The Valley’s blind belief in technology as savior leads to solutions that drive ever-increasing individualization and customization. That drive sorts people into sub-groups and sub-sets consuming a personalized diet of products, entertainment, and news. The personalization that arises leads to increased isolation from people whose profiles differ from yours.
So today, the largest show on TV has 19 million viewers, a number that would not have placed it in the top 10 a decade ago. The shared experience of three broadcast TV channels and a few large newspapers and magazines acting as arbiters of the mainstream is dead.
Cable TV news offers different channels for people on the right (Fox News) and the left (MSNBC and, in a lot of cases, CNN). And online news, which is in large part replacing traditional newspapers and magazines, has sliced the pie so thin through micro-targeting for advertising purposes that your news may be radically different from mine. Combined with analytics that surface what appeals to people and eliminate what doesn’t, this results in a million spaces where like-minded individuals gather with other like-minded individuals and are “protected” from other groups.
Today, as a somewhat privileged member of the tech class, I can live in an enclave like New York or Silicon Valley, take an Uber to and from the airport (largely avoiding mingling with “locals”) while watching shows or listening to music that has been customized to my taste and reading or watching news targeted to provide me with the greatest entertainment and the least amount of pain.
This total isolation results in shock when the bubble gets popped.
Breaking the Bubble
The election of Donald Trump came as a shock to people in the technology world largely because it represented a vote from people who were not part of the same bubble. While the tech industry has seen increased growth, a large part of the blue-collar class has seen its jobs disappear as a result of increased globalization (made possible by great technology tools like the Internet) and automation.
But let’s not just indict Silicon Valley here. That would be too easy.
While the Valley’s game has largely been to say “forget politics, we know better,” other technology centers fell for a slightly modified version of this by thinking “government can fix the big things, so we’ll narrow our focus to things the government hasn’t fixed.” Both are trapped in their own types of bubbles. Here in New York, we’ve long thought of ourselves as more enlightened because we focused on issues of inclusion in our industry (which is not to say those issues are unimportant).
The challenge is that we’re all trapped looking at the world through the navel-gazing prism that filter bubbles have created.
We must reinstate a world where it is OK to sit down with people you disagree with, converse, and find common ground with them. Disagreement breeds dialogue; dialogue breeds progress.
A Call for Ethical Technology
There is a long-held belief among technologists that “code is law.” In a world where artificial intelligence increasingly drives the dialogue, defining what that means is growing more urgent. I cannot claim to have the full answer, but I’d like to propose a basic set of questions technologists should ask when assessing how to move forward with a new technology:
- What does the technology improve? Does the improvement consider humanity?
- In the improvement(s) the technology makes, could there be unintended consequences? If yes, what are they?
- Are those unintended consequences hurting anyone? If yes, who, and why? If no, how can you be sure?
- Are there laws that are related to those unintended consequences? If yes, can the laws be evolved to balance the need for protection with the advance of technology?
- As the creator of the technology, are you willing to stand behind it 100% and be held accountable for the harm it may create? Have you taken all the necessary steps to avoid harm and unintended consequences?
Or maybe one can paraphrase Asimov’s Three Laws of Robotics as laws of technology:
- A technology may not injure a human being or, through inaction, allow a human being to come to harm.
- A technology must obey the orders given to it except where such orders would conflict with the First Law.
- A technology can protect its own existence as long as such protection does not conflict with the First or Second Law.
By applying such filters, we may be able to not only build new paths but do so in a way that works for everyone.
As technologists, we have a mission to build the future. Over the last few years, we’ve been tested on building a better world and, sadly, I must report that, as an industry, we have failed. The disconnect highlighted by the recent election should serve as a wake-up call to all developers: the era of developing software without thinking about its ethical implications is over. It’s time to “pivot” to ethical technology if we want the march of progress to continue.