

Trust in AI

Mr Teng Chuan Hiang, Chair of the Strategic Committee on Ethical and Responsible AI (ERAI), Asia Pacific Assistive Robotics Association (APARA)

In any relationship, whether with another human being or with a machine, trust is the bedrock on which any productive outcome rests.

Data. Algorithms. Transparency. Accountability.

Trust is the foundation of all productive and conducive relationships, and society needs it to thrive and prosper.  We now have a new “member” in our society in the form of AI, manifested in chatbots, robotic machinery, sales assistants, automated legal advisors, perhaps counsellors, and certainly music composers.  This new “member” will inevitably start to build relationships on behalf of many businesses, and even on behalf of individuals once personal AI assistants become common.  The reason we choose to transact with any entity is primarily the trust that entity has earned over time through its dealings with people.  This same process of earning trust must be repeated when any business intends to use AI to full effect.

When consumers speak or work with such an AI application, their shared data will be “exploited” to serve them better.  In this regard the trust factor becomes paramount before consumers are willing to engage further.  To illustrate, suppose DuckDuckGo continues to gain the trust of consumers and advertisers by accepting lower revenues than Google.  DuckDuckGo is proving that it can return search results as good as Google’s without being as intrusive in prying into our privacy.  It is conceivable that over time DuckDuckGo will emerge as the trusted platform and, to some extent, counterbalance the current monopoly in search marketing.  Power corrupts, and Google’s astronomical success exposes it to that risk.  Ignoring the trust factor by exploiting our data to maximise profits at all costs will, if left unguarded, lead to eventual demise.  Killing the golden goose of consumer trust is often the reason for the fall of giants.

The AI Trust Framework by ERAI

The diagram above was created in consultation with the Strategic Committee on Ethical and Responsible use of AI (ERAI), which has been established under the direction of the Asia Pacific Assistive Robotics Association (APARA).

Data is the knowledge an AI system acquires from a person in order to serve them better.

Algorithms are the “intelligence” that uses the data to make predictions, extrapolations, deductions and inferences, building relationships towards a predefined intended outcome.

Transparency is making the necessary provision for the AI system to be understandable and auditable by third parties, without exposing trade secrets, whenever an investigation is required.  Contrary to the common notion that a deep neural network is a black box, anything digital is retraceable if proper documentation and audit-trail data are kept.
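The retraceability point can be illustrated with a minimal audit-trail sketch. This is not part of the ERAI framework itself; the function and field names here are illustrative assumptions of what a per-decision audit record might contain.

```python
import hashlib
import json
from datetime import datetime, timezone

def audit_record(model_version, inputs, prediction):
    """Build one audit-trail entry for a single AI decision.

    Hashing the inputs lets a third-party auditor verify exactly what the
    model saw, without the operator having to expose raw user data or the
    model's internals (trade secrets).
    """
    payload = json.dumps(inputs, sort_keys=True).encode("utf-8")
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,           # which model made the call
        "input_hash": hashlib.sha256(payload).hexdigest(),  # tamper-evident
        "prediction": prediction,                 # the decision to account for
    }

# Hypothetical example: record a loan-approval decision.
entry = audit_record("credit-model-1.3", {"income": 52000, "age": 34}, "approve")
```

Appending such records to a write-once log is one simple way to make every automated decision retraceable for a later audit.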

Accountability is the set of standards against which entities can be held responsible for violations.  This calls for the community, together with AI practitioners, to come together and establish a gold standard for everyone to follow.  The SGIsago presented at the World Economic Forum is an example of such a document for organisations to follow.  There should be further ethical standards, including at the coding level, so that AI practitioners can be held accountable for any violation.

This is work in progress, but we are heading in the right direction with an interdisciplinary team of healthcare practitioners, lawyers, entrepreneurs, venture capitalists, AI experts and social-services leaders.  Our mission is to irrigate the community with knowledge and understanding, preparing it for AI to bear fruit in the long run.  Any economy that can exploit this technology will inevitably forge ahead exponentially once AI is deeply integrated into the four constituents of society.

More importantly, the AI implemented should be culture-specific, embedded with moral values its users can resonate with.  Culturally sensitive AI that respects values such as multi-ethnicity will go a long way towards earning the trust of the community it serves.  People are far more willing to share personal data with an entity when they know its algorithms take such values into account and mitigate accordingly.  Such an AI implementation is equivalent to teaching a child with an IQ of 200 and the computational power of a supercomputer to respect humanity and its moral values before using its intelligence.

We are in the process of publishing a white paper, to be presented at the upcoming AIBotics 2020 Conference in Singapore, that will provide more specific guidance on how the trust framework works.

AIBotics 2020 will be held in August 2020 as a two-day conference and exhibition, plus an additional day of workshops and tutorials.  The theme for 2020 is “Augmenting the Human Potential”.  Covering the understanding and implementation of AI in robotics across a wide range of applications, the event aims to bring together the most progressive end-users, first-class speakers, and innovative solution providers.
