
GOVERNANCE MATTERS

March 2019


  
 Welcome to our latest members of the Global Governance Community:

Corporate member dedicated to governance excellence: 



Identiv is a global provider of physical security and secure identification. Their solutions address the markets for physical and logical access and RFID-enabled applications. They secure access to the connected world: from perimeter to desktop access, and from the world of physical things to the Internet of Everything.


and new director member: 

Cindy Jacobs, Ph.D.




A big-picture international pharmaceutical executive and board adviser with more than 30 years of experience, Dr. Jacobs brings deep contextual intelligence (public and private companies) at the intersection of strategy, risk, and regulation.



LATEST BOARD LANDINGS

We are pleased to share that Tivic Health has added two new directors to its board from our Board Bona Fide Registry. 






Dean Zikria
Dean brings more than 25 years of experience in life science: commercialization/marketing/sales, business development, strategic planning, and finance. He has built expertise across medical devices and technology, digital health, medical and pharmaceutical distribution, and life science start-ups, as well as Fortune 25 pharma and medical device companies. Dean is currently the president of DZAdvisors, LLC, where he counsels medical device companies on commercialization, fundraising, crafting partnerships, building teams, and developing products. He also recently served as CEO of a Silicon Valley MedTech startup and previously held senior management positions at Johnson & Johnson, Pfizer, and McKesson.




     Jorge Titinger

Jorge L. Titinger is a highly analytical and incisive senior executive and board member, with 30+ years of progressive leadership experience directing substantial growth and leading the turnaround of underperforming organizations. He is an accomplished board member and C-level executive with expertise in large, multi-billion-dollar organizations and startups in the semiconductor, computing, and data industries. Mr. Titinger is a resourceful and strategic thinker known for producing results by developing strategies, establishing new processes, building effective leadership teams, driving innovation, and implementing initiatives spanning all business areas.

 

 Upcoming Events


 

  MARK YOUR CALENDARS! 


Give yourself a New Year’s resolution treat and register for

MAXIMIZE YOUR BOARD POTENTIAL 2019!  

June 3-6, 2019 

Harvard Faculty Club




Nine years running, this invigorating program addresses the challenges of global governance.

For more information, go to:
www.boardwise.biz.




 Looking for Qualified Directors?

 Our international registry includes vetted, qualified directors from around the globe. 

You can join our registry as well to be considered by our corporate members who seek ideal directors for their boards.  

Contact us to learn more about how our Board Bona Fide Registry can help you!  







 

Director’s Duties in a World with AI

 

The Ethical and Reputational Risks of Artificial Intelligence

 

Guest Author, Reid Blackman, Ph.D.

 

 

 “AI algorithms may be flawed,” Microsoft’s 2018 annual report states. “If we enable or offer AI solutions that are controversial because of their impact on human rights, privacy, employment, or other social issues, we may experience brand or reputational harm.” [1]

 

Microsoft speaks from experience. In 2016, it released an AI-powered chatbot that, in less than 24 hours, began tweeting misogynist and racist remarks. Amazon had a similar experience when, in October 2018, it was discovered that its AI-powered hiring software discriminated against women.

 

There is no doubt that AI is here to stay and that it will create trillions in revenue. Yet board members charged with sustainable growth and with protecting their company’s brand must ensure AI is deployed in ways that manage the ethical-cum-reputational risks intrinsic to the technology.

 

What is Artificial Intelligence?

 

Let’s start with a distinction between artificial general intelligence (AGI) and artificial narrow intelligence (ANI). A machine with AGI would have an intelligence that largely mimics or mirrors that of human beings. It would be able to perform a number of tasks, learn how to do new ones, and constantly improve its skill set. Of course, just as humans come with varying degrees of intelligence, so too would AGIs. We can imagine some AGIs with the intelligence of a five-year-old, others with the intelligence of a well-educated 40-year-old, and still others with an intelligence that surpasses anything a human has ever achieved.

 

AGI does not exist at present, and it is hotly debated whether and when it might. Since this kind of AI is not what companies deploy today, I’ll discuss only ANI here, referring to it simply as “AI.”

 

AI comes in different types. Here is a familiar one to start. You text your friends, family, and colleagues, and you notice that, as you type, your iPhone or Android suggests how to complete the word you are typing, or even suggests the next word altogether. How does it do this?

 

Whenever you text, Apple or Google collects that information (or, as engineers say, that data). Once it is collected, a computer program looks for patterns in that data. For instance, the program notices that, after someone types the word “good,” there’s a 44% chance they’ll type “morning” next, a 32% chance they’ll type “job” next, and a 15% chance they’ll type “afternoon” next. Every other word in the dictionary has a less than 15% chance of being typed next. The program then does one last thing: it makes suggestions to the texter the next time they type the word “good.” Those suggestions are the program’s outputs. The computer program that does this – the “algorithm,” as it’s often put – is an artificially intelligent program.
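
To make that concrete, here is a toy Python sketch of the counting idea just described. The sample messages are invented, and this is only an illustration of the pattern-finding approach, not Apple’s or Google’s actual system.

    # Toy next-word suggestion: tally which word follows which in a few
    # invented messages, then suggest the most frequent follow-ons.
    from collections import Counter, defaultdict

    messages = [
        "good morning team",
        "good job on the launch",
        "good morning everyone",
    ]

    follows = defaultdict(Counter)
    for message in messages:
        words = message.split()
        for current, nxt in zip(words, words[1:]):
            follows[current][nxt] += 1  # count how often `nxt` follows `current`

    def suggest(word, k=3):
        """Return up to k likely next words with their observed share."""
        counts = follows[word]
        total = sum(counts.values())
        return [(w, c / total) for w, c in counts.most_common(k)]

    print(suggest("good"))  # roughly [('morning', 0.67), ('job', 0.33)]

Real keyboards train on vastly more text and use more sophisticated models, but the core move is the same: count what tends to follow what, then suggest the most likely continuations.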

 

Consider another example with more ethical risk.

 

Let’s get a computer program to “know” when there is a dog in a picture you just took with your phone’s camera. That is, let’s write a pattern-recognizing piece of software that flashes a green light when you upload a new picture of a dog and a red light when you upload a picture that does not contain a dog. How do we build it?

 

First, we get tons of pictures of dogs and upload them into our software. We tell the software – the algorithm – that these are pictures of dogs, and that any new picture like these also contains a dog. The software then looks for patterns across all those dog pictures. Perhaps it notices that all the pictures have dark circular things at a certain distance from each other (eyes), or it looks for long, somewhat thin, pinkish things (tongues), or it finds similarities among dogs that we, as humans, do not notice or consider, e.g., the degrees of the interior angles of the triangle formed by the eyes and the middle of the mouth. In truth, when engineers write these algorithms and tell them to look for patterns, they often don’t know what patterns the software is recognizing or finding, because sometimes it looks at things at such a granular level that we can’t comprehend them.
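
As a rough sketch of that training step, the snippet below fits a simple classifier on a few hand-labeled examples. The two numeric “features” per picture are invented stand-ins for whatever patterns the software actually finds in raw pixels, and scikit-learn is used purely for convenience.

    # Toy training step: label a few example "pictures" (reduced to two
    # invented numeric features each) and fit a classifier that separates
    # dog from not-dog. Real systems learn their own features from pixels.
    from sklearn.linear_model import LogisticRegression

    # features per picture: [eye_spacing, tongue_length]  (hypothetical)
    training_pictures = [
        [4.0, 6.5],  # dog
        [3.8, 7.0],  # dog
        [4.2, 6.0],  # dog
        [2.5, 1.0],  # not a dog (cat)
        [2.7, 0.8],  # not a dog (cat)
    ]
    labels = [1, 1, 1, 0, 0]  # 1 = dog, 0 = not a dog

    model = LogisticRegression().fit(training_pictures, labels)

    # "Upload" a new picture: green light for dog, red light otherwise.
    new_picture = [[3.9, 6.8]]
    print("green light" if model.predict(new_picture)[0] == 1 else "red light")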

 

Are you starting to see the risk yet?

 

We’ve trained our program, so we can now upload a new photo that contains a dog. If our program is well made and recognizes the patterns that indicate dogs, our new picture will get a green light. And if we upload a picture of a cat, our software, having nailed down the “dog pattern,” will trigger a red light.

 

None of these programs are perfect, of course. Sometimes – say, 1% of the time – our software will identify the thing in the picture as a dog when it’s really something else: a wolf, say. That’s not a big deal, though, since we just want our newest picture of a dog to go with our other dog pictures; this isn’t life and death.

 

Until it is.

 

The Ethical Risks of AI

There are many ethical risks associated with AI. Three, drawn from the examples above, help explain how these risks arise and point to strategies for managing them.

 

Invasions of Privacy

AI requires lots of data. In the context of AI, “data” is really just a euphemism for “information about people.” AI programmers need lots of data to train their algorithms, which means lots of information about a lot of people.

 

This raises questions about how we collect that data in the first place, and whether it is done ethically. Did we get informed, meaningful consent from the people whose data it is, or did we collect it in a way that violates their privacy? When they consented, did they know what we would do with their data? How long we would keep it? With whom we would share it? Whether we would sell it?

 

Here is the problem. Companies are incentivized to constantly surveil their actual and potential customers to learn more about them and to feed their AI programs so those programs become ever more accurate (say, for the purposes of target marketing). That can lead to an astounding invasion of privacy, and consumers who discover they are being surveilled may well create a backlash against companies that fail to respect their privacy.

 

Board directors must ensure companies have systematic and robust processes for acquiring the data that feeds their AI algorithms in a way that does not infringe on their customers’ or employees’ privacy. Failure to do this may result in alienating those constituencies, not to mention inviting lawsuits. Microsoft, Facebook, Disney, Google, and other companies have faced lawsuits for just this reason.

 

Bias

Let’s assume all the data fed into the algorithm is responsibly acquired. The next issue is whether that data is biased in a way that could lead to discriminatory behavior. We saw this in the case of Amazon’s hiring algorithm.

 

Amazon receives tens of thousands of applications for employment. Reviewing those resumes takes enormous human effort, so Amazon created software to read them and throw out the ones unlikely to lead to a hire. It turned out, though, that Amazon’s software discriminated against women: when the software recognized it was reading the resume of a woman, it threw that resume out. How did this happen?

 

Like any other piece of AI, Amazon’s HR software first needed to be trained. It had to be given lots of resumes that had already been reviewed by humans, each labeled “the person who wrote this resume was hired” or “the person who wrote this resume was not hired.” The software then took that information and looked for patterns among the people who were and were not hired. The goal was to enter a new resume and get a green light (“hire”) or a red light (“do not hire”) based on how well it matched the resumes of people hired in the past.

 

Amazon used the thousands of resumes it had reviewed over the past decade as its training data. When the software searched for patterns in that data, it noticed something: women were not hired at the same rate as men. Thus, when a resume said, for instance, “Women’s NCAA Basketball,” it got a red light. This is discrimination at scale.
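
A small sketch makes the mechanism visible. The resumes, the hiring labels, and the “women’s activity” flag below are all invented; the point is only that a model trained on historical decisions that disfavored resumes carrying that signal will learn to reproduce the pattern.

    # Invented data: each past resume is [years_experience, womens_activity_flag],
    # and the historical decisions mostly rejected resumes carrying the flag.
    from sklearn.linear_model import LogisticRegression

    past_resumes = [
        [5, 0], [7, 0], [6, 0], [8, 0],   # flag absent: all hired
        [5, 1], [6, 1], [7, 1], [8, 1],   # flag present: mostly rejected
    ]
    past_decisions = [1, 1, 1, 1, 0, 0, 0, 1]  # 1 = hired, 0 = not hired

    model = LogisticRegression().fit(past_resumes, past_decisions)

    # Two equally experienced candidates; only the gender-associated flag differs.
    hire_scores = model.predict_proba([[7, 0], [7, 1]])[:, 1]
    print(hire_scores)  # the flagged resume gets the lower "hire" score

Nothing in the code mentions gender explicitly, yet the learned model systematically scores one group lower. That is exactly why vetting training data for bias matters.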

 

Board directors must ensure their companies have systematic and robust processes that vet the data used to train AI for bias. This holds for more than HR AI algorithms. It matters when you’re talking about credit scores, granting a mortgage, setting insurance premiums, target marketing, and more.

 

Unexplainable AI

Artificially intelligent programs provide a wide variety of outputs. For instance: this person is a high/low risk for a mortgage; this person’s credit score is X; this person should be hired/promoted/fired; this person should (not) be insured at this high/low premium; this person should (not) be admitted to this university; this person should (not) go on a date with this person; this person is (not) legally liable.

 

These big decisions have massive impact. Traditionally they were made by humans; as AI deployment increases, they may be outsourced to AI algorithms. The problem is that we may not understand why the AI algorithm gives the output it does: why does this person get a green light and that one a red light? As in the dog-recognition example, the AI may find patterns we had not considered or could not even understand, like the degrees of the interior angles of the triangle formed by a dog’s two eyes and mouth. The ethical risk consists in being unable to explain why a company treats consumers and employees as it does. “That’s just what the machine told us to do” is not a justification. Once again, the risks are high: brand and reputation damage from unexplainable (and potentially unfair) decisions, and exposure to lawsuits.

 

Again, directors and their companies need systematic and robust processes around what people should do with the output of an AI system. Blindly following the AI is not acceptable; what to do with that information must be clearly articulated and understood. Directors would be prudent to push engineers for “explainable AI,” that is, AI whose outputs can be explained to us mere mortals.
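
To show one modest version of what “explainable” can mean, the sketch below uses a simple, interpretable model whose output can be traced back to the features that pushed it up or down. The feature names and data are hypothetical; real deployments of complex models typically need dedicated explanation tooling on top.

    # With a linear model, each feature's coefficient times its value shows
    # how much that feature pushed this particular decision. All data invented.
    from sklearn.linear_model import LogisticRegression

    feature_names = ["years_experience", "relevant_degree", "employment_gap_years"]
    X = [[5, 1, 0], [2, 0, 3], [8, 1, 1], [1, 0, 4], [6, 1, 0], [3, 0, 2]]
    y = [1, 0, 1, 0, 1, 0]  # past decisions: 1 = green light, 0 = red light

    model = LogisticRegression().fit(X, y)

    candidate = [4, 1, 2]
    print("green light" if model.predict([candidate])[0] == 1 else "red light")

    # Explanation: which features pushed this decision, and by how much.
    for name, coef, value in zip(feature_names, model.coef_[0], candidate):
        print(f"{name}: contribution {coef * value:+.2f}")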

  

Final Remarks

 

Any company employing AI must address the ethical risks that, if not properly managed, threaten to damage the corporate brand and expose the company to litigation. Tackling these risks after a crisis occurs is painful and difficult. Companies, and the directors whose charge it is to protect the long-term sustainability of their brands, need to proactively weave ethical risk management into their broader AI strategies.

 

To start, it is wise to implement processes and practices that protect the company when it purchases, integrates, and deploys AI, with particular attention to the ethical-cum-reputational risks relating to privacy, bias, and explainability. Companies that don’t address these risks may find that the harm they ultimately suffer exceeds the gain from deploying AI solutions in the first place.

 

  Bio

Reid Blackman, Ph.D., is Founder and CEO of Virtue, an ethics consultancy that focuses on corporate governance and emerging technologies like artificial intelligence, biotech, and virtual/augmented reality. He received his B.A. in philosophy from Cornell University, his M.A. in philosophy from Northwestern University, and his Ph.D. in philosophy from The University of Texas at Austin. He has taught at Colgate University and was a Fellow of the Parr Center for Ethics at The University of North Carolina at Chapel Hill. He currently sits on the committee for “Methods to Guide Ethical Research and Design” for the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems and is a member of the European Union Artificial Intelligence Alliance. He can be reached at reid@virtueconsultants.com.

 ______________

[1] https://www.wsj.com/articles/need-for-ai-ethicists-becomes-clearer-as-companies-admit-techs-flaws-11551436200

 

  


     


