AI demands a contract of trust, says KPMG

(Image by Gerd Altmann from Pixabay)

The Summer of Artificial Intelligence in the UK is just around the corner. But only if we avoid the shadow of data bias and the chill winds of poorly designed algorithms, short-sighted goals and false beliefs in quick cost savings that I described in my last report.

But what practical steps can policymakers actually take to emphasize the positives and sidestep the negatives? Just as importantly, why should they bother?

One answer is that it’s not just about ethical behavior at the macro level—although that’s crucial in global, socially connected markets.

Leanne Allen is a Financial Services Data Partner at consulting and services giant KPMG UK. Speaking at the Westminster e-Forum this week about the next steps for AI, she explained that all organizations should use AI responsibly to make their businesses smarter and more responsive. In turn, other benefits, both economic and social, follow. She says:

Consumers and investors in society are demanding more from organizations, and this is true across all industries. Whether it’s the benefit of better, frictionless service to consumers, the end of an “industrial experience” in favor of greater personalization, or the hope that industries such as financial services will do more to help address inequality and promote sustainable finance.

Expectations for businesses to innovate and derive real value from data and new technologies continue to grow at a rapid pace. Employing advanced technologies, including machine learning [ML] and AI, gives organizations an edge in responding to these demands.

In this sense, improving what Allen calls the “customer experience journey” is as important as the general desire for ethical behavior because businesses will become more thoughtful and sustainable in the long run, she suggests:

That means making better and faster decisions. This increases accuracy, which means customers are better understood, which leads to improved products and services. It’s all about pricing risk and pricing products more accurately, and enabling step changes in operational efficiency. Of course, this also helps organizations reduce internal costs.

So, in Allen’s view, there is “no debate” about the benefits to organizations and customers using big data analytics and artificial intelligence. But users should avoid getting carried away by all these new possibilities. This, she warns, is where the real danger lies:

All of this potential comes with new and enhanced risks. In fact, we are already starting to see unintended harms if the design and use of advanced technologies are not properly policed and governed.

Unfair bias in decision model outcomes is causing financial loss to consumers and potentially reputational damage to organizations. Unfair pricing practices are causing certain groups in society to get locked out of insurance, which removes access to risk sharing.

Selling unsuitable or low-value products and services to customers is another example, or targeted advertising, dynamic pricing and “purpose creep” of data usage, leading to non-compliance with existing data protection laws.

These are just a few examples of the hazards and challenges facing the industry.

That’s quite a list of downsides. The knock-on effect is a loss of trust between consumers/citizens and those who trawl their data. The impact on people’s credit histories and financial inclusion, for example, could be profound.


Here’s why policymakers should never—knowingly or not—put consumers’ trust at risk for easy wins, says Allen:

Trust is the determining factor in the success or failure of an organization. So, as companies rapidly grow and transform their business to be more data- and insights-driven, they must focus on building and maintaining that trust.

We see many organizations now taking their own initiatives to build governance and control around the use of big data and artificial intelligence. But the pace of progress does vary.

Often, we see financial services taking the lead and these organizations are defining their own ethical principles. They are implementing these measures and taking a risk-based approach, aligned with core principles of fairness, transparency, “explainability” and accountability. Collectively, those actively promote trust.

The “True North” of Corporate Ethics

Yet struggling consumers, amid a deepening recession, may greet the idea of financial services leading the way toward a fairer society with some reservations. Still, the sentiment appears sincere.


For Ian West, head of telecommunications, media and technology and a partner in KPMG’s UK practice, trust is the “golden thread” of the business. He adds:

We need to ensure that businesses are ready to deploy AI responsibly. By developing five guiding pillars for ethical AI, KPMG distills the actions necessary to point organizations to the “true north” of corporate and civic ethics.

Talk about mixing your metaphors! But West (or was it North?) went on:

First, it is key to start preparing employees now. The most immediate challenge businesses face when implementing AI is disruption to the workplace. But organizations can prepare for this by helping workers adapt effectively to the role of machines on the job early in the process.

There are many ways to do this. But it’s worth considering partnering with academic institutions to create programs that meet skills needs. This will help educate, train and manage the new AI workforce, and support workers’ mental health along the way.

Second, we recommend the development of strong oversight and governance. Therefore, there needs to be an enterprise-wide policy on the deployment of AI, especially around data use and privacy standards. This goes back to the challenge of trust. AI stakeholders need to trust the enterprise, so it is critical that organizations first fully understand the data and frameworks that form the foundation of their AI.

Third, autonomous algorithms raise concerns about cybersecurity, which is one reason why governing machine learning systems is an urgent priority. Strong security needs to be built into the creation of algorithms and data governance. Of course, we can have a broader conversation about quantum technologies in the medium term.

Fourth, unfair bias can arise in AI without the right governance or controls to mitigate it. Leaders should strive to understand how complex algorithms work, which can help remove bias that emerges over time.

The attributes used to train the algorithm must be relevant, on-target, and admissible. It’s arguably worth having a team dedicated to this, as well as an independent review of key models. Bias can have adverse social effects.

Fifth, companies need to increase transparency. Transparency is the foundation of all previous steps. Not only be transparent with your employees – which is, of course, very important – but also provide customers with the clarity and information they want and need.

Think of this as a contract of trust.

My point of view

Well said. The important lesson, then, is not to sacrifice user trust for competitive advantage. Embark on a shared journey with your customers. Help them understand how you can make their lives better and your business smarter.


