Should we do this? Navigating risk and innovation through digital ethics in business

A few years ago, one of our clients in the medical equipment industry wrote to us with an interesting question. They wanted to combine all the collected (and privacy-protected) medical data from their different devices into a single platform. The idea was to develop innovative AI models trained on this data to offer next-generation diagnosis services, all in a legally compliant way. So the question wasn’t whether they were allowed to do this, but whether they should.

Immediately, a host of ethical risk considerations jumped to mind: Where was this medical data collected? For which people would the service run the highest risk of failing or misdiagnosing? Who would be held accountable if the system were to misdiagnose a patient? These are the big questions around value-based principles such as bias and fairness, patient well-being, transparency, accountability, and many others.


Moving towards digital ethics

When it comes to product development in a traditional business environment, answering a question like Should we do this? would involve the strategy and legal departments. If it’s legal and there’s a sound commercial incentive, the answer will usually be a resounding yes. After all, if a company doesn’t pursue innovation, its competition surely will.

But times have changed, and in an environment where digital innovations have moved beyond the law’s capacity to contain risks, merely asking if one is allowed to do something doesn’t suffice. There needs to be a structured way of thinking about innovating responsibly and mitigating ethical risks. This thinking was the inspiration behind the Digital Ethics services at Deloitte.

Digital elephants in the room

For its part, the European Commission is making strides to provide value-based guardrails. But industry often views these guardrails as stifling necessary innovation. And herein lies the difficulty of dealing with ethics in a business environment: ethical risks can be mitigated structurally and systematically, but because few people realise this, the misconception persists that dealing with ethics will stifle innovation. The European Group on Ethics wrote in a recent report, Values for the Future: “Values and ethics are no limit or obstacle to innovation and change; they are the gist of innovation and change. They represent the compass indicating what responsible, inclusive and sustainable ways of future-making are.”

A further complicating factor is that the distinction between digital and regular business ethics isn’t fully understood. Many businesses still assume that if they comply with applicable laws and put some effort into sound business ethics, any potential negative effects and risks of their products are largely mitigated. For many traditional products, this would be true. However, it is not the case for digital products and services, which include everything from computer vision to generative AI. Whether the cause is bias, institutional blind spots, conflicting priorities, stubbornness or ignorance, the gap between the intention behind a digital product and its real-world outcomes is largely ignored.

Take the example of a large retailer that, with the very best of intentions, developed an app to help employees determine their correct uniform size from a picture. A seemingly innocent idea, but things started to go off the rails when the system was optimised for a single goal: finding the best clothing size with the lowest possible error rate. As it turned out, the best way to achieve this was for employees to photograph themselves in their underwear, which meant the company ended up storing thousands of pictures of its employees in underwear. From a purely technical standpoint, optimising for the lowest possible error rate is perfectly sensible. But as soon as the media picked this up, it caused outrage and significant reputational damage for the company involved.

A more serious example is the Dutch Childcare Benefit Scandal, where a similar effect can be observed. When a fraud prevention model is optimised on a goal function that considers only one perspective, in this case preventing fraud, other equally important perspectives such as fairness are easily overlooked. The lesson in both examples is that good intentions and legal compliance are not enough: when an algorithm is developed and embedded in an organisational context, those responsible must understand and mitigate its ethical risks.

On the other side of the coin, there’s another elephant in the room: To respect fundamental rights better, we need digital innovations. In a century that will see major complex challenges with significant consequences for humans worldwide, businesses and governments cannot afford to forgo the benefits that technological innovations may bring. This means that digital innovations are as important for building competitive businesses as they are a moral imperative for respecting fundamental rights.

Where digital ethics meets business realities

Although not yet widespread, a growing number of companies recognise the challenge of finding the right approach to ethics and technological innovation. Does this mean increased adoption of ethics-based governance models for digital products? Most certainly, but there are still major challenges. As with all things in business contexts, someone needs to take ownership, get a budget and standardise a process.

Regarding ownership, we see no commonly agreed lines of responsibility or standards for how to do this. This is unsurprising: taking ownership of a new topic is a risky endeavour for the generally risk-averse corporate crowd.

Many people who would otherwise take ownership are held back, either because they don’t know that ethics can be addressed structurally and systematically or because they don’t know where in the organisation to raise the topic. This leads many to conclude that the problem can’t be solved or is too difficult. The outcome is that businesses ignore the problem and continue with business as usual, an increasingly risky and costly course of action.

For the companies that have crossed the first two hurdles, the challenge is to create an internal process that works consistently. There are many interesting experiments here, but it’s safe to say that there’s no ‘one size fits all’ solution yet. Common approaches that Deloitte has seen with clients include implementing mandatory impact assessments on algorithms, creating review boards, hiring an ethics officer or having the internal audit function determine risks.

A digital ethics culture?

But it’s not all about policies and operations. For digital ethics governance to succeed, awareness and understanding of ethical risks need to reach all layers of the organisation where people interact with data and algorithms. Simply put, the adoption rate is largely determined by the company culture. As Peter Drucker famously said: “Culture eats strategy for breakfast.” This isn’t to say that digital ethics is squarely about culture; it’s very much about governance and operations. But an essential part of all this is to “train the ethics muscle” of everyone in the organisation, so that digital ethics doesn’t become yet another impediment in the form of a check-box exercise.

Taking digital ethics forward

A few years ago, I spoke with Microsoft’s Chief Scientific Officer, Eric Horvitz, to ask how he thinks the right culture is formed. His thoughts echo our own experience: the most direct way for the right culture to emerge is for it to start at the top. There are other ways, but when answering the central question, Should we do this?, most people in an organisation are greatly influenced by the intentions and values of their leadership.

Ultimately, it doesn’t matter who takes ownership or which department pays for it. By failing to create a reliable process for reaching an informed answer to the Should we do this? question, and by failing to address ethical risks, companies are gambling with their digital products and wasting critical opportunities to become more competitive and to respect fundamental rights. And that is the truly unethical part of all this.

Disclaimer: The opinions expressed and arguments employed herein are solely those of the authors and do not necessarily reflect the official views of the OECD or its member countries. The Organisation cannot be held responsible for possible violations of copyright resulting from the posting of any written material on this website/blog.