We still have trust issues with AI. How should we regulate it?
It looks like boom time for artificial intelligence (AI) in Singapore. There are two main economic drivers for AI in Singapore, Mark Findlay, deputy director of the Singapore Management University's Centre for AI and Data Governance, tells The Business Times: the use of automation to supplement labour, and a worldwide push for AI to grow the economy.
Last year, the Republic topped the charts in the maiden Global Cities AI Disruption Index, and the latest Government Artificial Intelligence Readiness Index from Canada's International Development Research Centre. But doubts still linger. An EY poll last month found a regional "AI trust crisis", with three-quarters of Asia-Pacific respondents citing transparency, bias or explainability as barriers to their confidence in AI technology.
Pierre Robinet, an Ogilvy Consulting senior partner who co-founded Singapore think tank Live With AI, also tells BT that concern can arise when "there is a lack of transparency, there is a lack of explainability, and people more and more want trust in the AI reasoning and outcomes". As a wary public recalls the plot of Minority Report - where the police crack down on predicted "pre-crimes" that have not yet taken place - AI advocates note that a balance must be struck between useful, innovative solutions and regulatory safeguards.
With the prospect of faster 5G networks around the corner, Singapore is already chugging ahead with AI. "We're seeing an uptake of AI in the transportation and logistics, banking and financial services and public sector in Singapore," says Asheesh Mehra, co-founder of AI solutions startup AntWorks, citing real-time cargo management and round-the-clock price comparisons as examples.
With clients like Changi Airport, homegrown video analytics and AI startup Xjera Labs already targets segments such as security, transport, and smart buildings. Its chief executive, Ethan Chu, expects more aggressive roll-outs in these areas, as well as medical and financial technology.
James Chappell, who heads AI strategy at multinational software firm Aveva, notes that industrial AI can do four key tasks: recognise patterns, detect problems, suggest optimisation, and predict future events.
While such "predictive maintenance" has become a common tool in smart factories worldwide, the concept can also apply to humans.
For one, "an increasingly sophisticated technology, AI could support preventive policing to bring about a safer community", according to an article in the Singapore Civil Service College's Ethos newsletter last year.
But Raymond Chan, senior data scientist at a Singapore tech company and chapter co-leader of non-profit group DataKind, argues that "human oversight should always be present". While humans may not make every decision, they "should be responsible for the process and be able to monitor and control decisions made by the system", he tells BT.
Reed Smith counsel Charmian Aw, who specialises in data and tech issues, adds that - even with AI-driven predictions - there should still be a human policy-maker in the picture. "Just because AI can help detect contagious disease or assess security risk in a person, ultimately the applicable criteria and thresholds to deny entry - and any appeals process that follows - needs to be determined by a human policy-making agent."
The second edition of Singapore's Model AI Governance Framework, launched at the World Economic Forum in Davos in January, includes a tool that ranks use cases by the probability of harm (high or low) and the severity of that harm, to assess whether - and how much - human oversight is needed over an AI system's actions.
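The framework's assessment can be pictured as a simple two-by-two decision matrix. The sketch below is a minimal illustration only: it assumes a direct mapping from the two harm dimensions to the framework's three oversight models (human-in-the-loop, human-over-the-loop, human-out-of-the-loop), whereas the framework itself leaves the placement of each use case to the organisation's own judgment.

```python
# Illustrative sketch of a probability-severity screen, loosely modelled on
# Singapore's Model AI Governance Framework. The mapping is an assumption
# for illustration, not the framework's official matrix.

def oversight_level(probability: str, severity: str) -> str:
    """Suggest a human-oversight model from harm probability and severity.

    probability, severity: "low" or "high".
    Returns one of the framework's three oversight models.
    """
    if probability == "high" and severity == "high":
        return "human-in-the-loop"       # a human approves each decision
    if probability == "low" and severity == "low":
        return "human-out-of-the-loop"   # AI decides; humans audit afterwards
    return "human-over-the-loop"         # humans monitor and can intervene

# Example: predictive policing carries severe consequences for individuals,
# so it would sit in the quadrant demanding the tightest oversight.
print(oversight_level("high", "high"))  # -> human-in-the-loop
```

The point of such a matrix is that oversight is proportionate: low-stakes uses (say, product recommendations) can run largely unattended, while decisions that seriously affect people keep a human approving each outcome.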
"We will want the AI to be explainable," says Xjera's Dr Chu. "Let's say we need to allocate a lot of police resources to Tiong Bahru. We cannot just trust the AI blindly, saying: 'Oh, just deploy more force'. We need to ask the AI to explain why - is it based on historical data, or what."
Mr Mehra, from AntWorks, also explains that the rules cannot be one-size-fits-all: "The application requirements for AI in healthcare are different from banking requirements... Governments and policy-makers will need to work closely with professional bodies from each industry to better advise the decision-makers with regard to what the technology is needed for, how it will work, and even how it may impact the workforce."