Societal aspects of Artificial Intelligence

A societal approach, based on fairness and transparency, needs to be adopted when developing and introducing Artificial Intelligence (AI) applications, according to Kiki de Bruijn, a cognitive neuroscientist working at Diffblue as an Artificial Intelligence Evangelist, who took part in the AI Expo held at London Olympia on 18 and 19 April 2018. De Bruijn declared herself a firm believer in AI’s potential to improve the way people live, but insisted that, in order to achieve that outcome, society must debate all the challenges the technology brings. “No concern should be left unaddressed on the road to progress. The views of the general public are as important as those of scientists and business leaders,” de Bruijn said. She added that AI needs a swift rebranding: from a potentially risky technology to a sophisticated tool for error-prone, mundane tasks that do not require human creativity.

Another “AI winter”?

One of the significant challenges facing AI is the risk of another “AI winter”, the expert pointed out. The term, an analogy to the concept of a nuclear winter, describes periods of reduced interest in AI, brought on by criticism over a lack of tangible progress. The scientific world has witnessed two major AI winters so far, one at the end of the 1970s and another at the end of the 1980s, both followed by periods of renewed investment. According to de Bruijn, another AI winter could lead to many projects being abandoned or deprived of funds. However, the growing importance of AI as a technology tool in corporate business leads scientists to believe that a new AI winter would not be as long or as severe as the previous ones. It is also worth mentioning that many small and medium-sized businesses have started investing in AI and its subsets, making the process of AI adoption harder to reverse or put on hold.

All ethical concerns triggered by AI should be acted upon, de Bruijn noted. She pointed out how the analytical thinking and phenomenal decision-making abilities of Star Trek character Jean-Luc Picard are not threatened by the Enterprise’s AI-powered machines but facilitated by them. That kind of scenario can be replicated in real life, turning AI into a smart helper for completing tedious tasks while allowing people to stay in control. Another important societal issue is the concern about threats to jobs, for instance in call centres, where some sectors are already beginning to feel the tension. De Bruijn suggested that jobs need to shift and evolve to stay relevant to the changing reality. “AI needs to be developed in a fair and transparent way, leaving humans in charge of the ever-evolving environment without a sense of intimidation,” she said.

Risk of bias

The AI evangelist also noted the risks of modelling AI solely on what she called “the Silicon Valley bubble”, excluding all other kinds of people and creating bias. AI that reflects only the mindset of its creators, with all its stereotypes and limitations, is considered among the major setbacks the industry must deal with. With the list of AI-supported processes constantly expanding (medical diagnosis, employment selection, loan-worthiness assessment, workplace performance, etc.), scientists need to make sure AI is tolerant, inclusive and non-judgemental when dealing with vulnerable people, marginalised groups and minorities in general. The same applies to one of AI’s subsets, Natural Language Processing, which still needs improvement when it comes to understanding dialects, for instance. That is a flaw which, if not addressed and corrected, has the potential to cause the social exclusion of large groups of people.

According to de Bruijn, AI’s future will be defined by its ability to become a tool for humanity rather than a threat to it.

