We have questions: Can self-driving cars ever be safe? How dangerous is Alexa? Will artificial intelligence (AI) take my job? Do cryptocurrencies empower terrorists? Can a cardiac pacemaker be hacked?

We have concerns: Fake news, fake ads, fake accounts, bots, foreign governments interfering with our elections …

Is technological innovation good or bad?

The courts have consistently held that technology is neither intrinsically good nor bad, but that people must be responsible and held accountable for how it is used. The problem is that technology almost always runs ahead of strategy, tactics, and the law.

It’s illegal to text and drive. Should it be illegal to wear an AR headset when driving? Should a provision for Level 5 driving automation, at which the system never needs intervention, be carved out in law? Title VII of the Civil Rights Act of 1964 prohibits discrimination based on race, color, national origin, sex, and religion. Should the Equal Employment Opportunity Commission (EEOC) have the power to regulate AI systems that train themselves to select or manage employees?

Fear, Uncertainty and Doubt

Once a beacon of optimism, the tech industry has come under pressure as concerns about potential negative impacts of innovation mount. Opportunistic politicians are preying on these concerns by sensationalizing or simply mischaracterizing potential outcomes to encourage support for new government regulations.

My colleagues at PwC and I agree that the time has come to seriously consider a responsible approach to innovation. We believe the circumstances require something new and different: a collective, self-regulatory approach from the key players in the industry.

At CES® 2018 we’ll present a discussion that explores the three basic approaches to the problem of regulating technological innovation:  
