tarsus.today
AI demands tough ethical questions
To manage the ethics questions raised by AI, companies will need a multidimensional approach, says Mike Rogers.

South African enterprises will need to make complex ethical choices about how they leverage artificial intelligence (AI) over the coming years as lawmakers and regulators struggle to keep pace with the speed at which the technology is maturing and with the rate of adoption among local organisations. 

That’s the word from Tarsus Technology Solutions managing director, Mike Rogers, who says that the wide-ranging social and economic potential of AI means that companies cannot treat it merely as another software tool. They should also examine how it will affect their customers, employees and the wider society in which they operate. 

Says Rogers: “We anticipate that AI will become a foundational technology for most companies within the next five years, one with as much disruptive potential as the Internet and the smartphone. Given its potential impact on employment, consumer rights and the wider economy, companies need to take a proactive stance on the ethical issues AI raises. 

“If business does not take the initiative, we could see the promulgation of heavy-handed yet belated laws and regulations that hamper South African companies’ ability to use AI for competitive advantage. The way that companies use AI and other advanced digital technologies also has major implications for their reputations and their relationships with labour and customers.” 

One of the most pressing concerns is what AI means for labour in a notoriously unequal country where around a third of the workforce and half of the youth are unemployed, says Rogers. “During 2019, a dispute flared up between banks and trade unions, with financial institutions looking to downsize and shut branches as customers migrated to digital channels,” he says. 

AI and inequality 

“The ongoing skirmishes between metered taxis and Uber drivers are another example of how ill-prepared our economy is for advanced, job-displacing technologies. AI could potentially fuel similar conflicts by threatening a range of jobs, from driving trucks to telephonic customer support. Companies need to think about how they can take advantage of AI, balanced against the concerns of the workforce.” 

This will demand a strategic approach, driven from the CEO’s office, says Rogers. Organisations will need to work closely with their employees and trade unions to reskill people whose roles will be threatened by AI. This could be an opportunity to redirect people to higher-paying and more value-added and interesting work, but managing the transition will be fraught and complex. 

The role of AI in making decisions that affect customers, employees and other stakeholders is also likely to come under closer scrutiny, says Rogers. “We should not forget that AI systems are built by humans and that they leverage datasets that reflect the existing biases and prejudices of the world we live in,” he adds. 

Decisions that affect customers 

“We need to ensure that AI systems don’t make decisions that reflect systemic racial or gender discrimination, for example. As companies use AI to make or influence more decisions – ranging from granting loans or approving insurance claims to hiring employees – businesses need to be sure they understand how their data and algorithms work and that they have corrected sources of bias.” 

Rogers says that until the technology matures and society is confident about its fairness and value, companies will need to be circumspect about using AI to make unilateral decisions that traditionally involved a higher level of human touch and judgement. “And when companies do use AI to support decisions that affect people’s lives, they need to be transparent about how the algorithms work and give people the right to appeal,” he adds. 

To manage the ethics questions raised by AI and other intelligent automation technologies, companies will need a multidimensional approach, says Rogers. The IT team, risk & compliance, HR, customer service and senior management will all need to play a part in shaping the organisation’s AI policies and programmes, he adds. 

“AI will potentially touch every aspect of the business, from its IT systems to employees to the customer experience to operations to its corporate values and ethical standards,” says Rogers. “AI ethics should be a boardroom conversation because of the technology’s potential to reshape how the entire business operates.” 
