There has been a lot of discussion around the impact of AI and how we should try to understand what that impact could be, but I think it’s time to create control mechanisms that minimise potential catastrophic consequences.
Everyone saw the exchange between Mark Zuckerberg and Elon Musk, and everyone has heard global leaders weigh in on the potential and the risks of the technology. Many prominent voices – such as Stephen Hawking – have focused in particular on how AI could become a threat to mankind.
I’ve been digging into the topic quite a lot, and with Accenture’s recent announcement that it is joining the Partnership on AI (congratulations to the senior leadership team), this felt like the right time to begin an iterative exercise: defining some of the ideas that can help us keep an eye on AI…for now.
Without tying them too closely to specific industries or interests, a few areas stand out as highly likely to be affected.
Jobs and reskilling – Understanding which business areas or job functions both generate the data and apply the outputs of AI would help identify which roles are most susceptible to automation. If an AI could then identify where the skills of those affected are also applicable, or suggest training routes for at-risk employees to the companies driving the automation, it would make the transition more transparent for employees, easier for companies to adopt as they pursue their AI agendas, and simpler to roll out at scale in large companies while reducing legal and HR risks.
Bias – A tough one to operationalise, but imagine a repository of data points that correlate with each known bias, particularly gender, racial and sexual-orientation bias. It should be a crowdsourced system that keeps evolving: a testing algorithm or bot that any individual or company can deploy on their systems, which finds the relevant data points and functions and measures the outputs. I wonder if we’ll ever have to create one to stop us discriminating against AI…
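A minimal sketch of what such a testing bot might measure: the hypothetical probe below compares a model’s positive-outcome rate across groups (a demographic-parity check). The field names, the toy records and the skewed `predict` function are all illustrative assumptions, not a real system.

```python
# Hypothetical "bias probe" sketch: compare a model's positive-outcome
# rate across groups (demographic parity). All names here are assumptions.

def positive_rate(records, group, predict):
    """Share of records in `group` that the model scores positively."""
    subset = [r for r in records if r["group"] == group]
    if not subset:
        return 0.0
    return sum(1 for r in subset if predict(r)) / len(subset)

def parity_gap(records, groups, predict):
    """Largest difference in positive rates between any two groups."""
    rates = [positive_rate(records, g, predict) for g in groups]
    return max(rates) - min(rates)

# Toy data and a deliberately skewed "model" to show the probe firing.
records = (
    [{"group": "A", "score": 0.9}] * 8 + [{"group": "A", "score": 0.2}] * 2 +
    [{"group": "B", "score": 0.9}] * 3 + [{"group": "B", "score": 0.2}] * 7
)
predict = lambda r: r["score"] > 0.5

gap = parity_gap(records, ["A", "B"], predict)
print(f"parity gap: {gap:.2f}")  # 0.80 - 0.30 = 0.50
```

A real deployment would of course need many more metrics than one parity gap, which is exactly why a crowdsourced, evolving repository of bias indicators would matter.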
Decentralised disruptors – The next Google, Amazon or Facebook will be decentralised platforms and communities, which will make early detection of the next generation of disruptors difficult. Network effects could make their impact exponentially faster once technologies like IoT, Blockchain and AI start turbo-charging each other. AI itself can be orchestrated to help ensure fair competition even in these cutting-edge applications, and even where it cannot act directly, some controls can be placed around it.
Catastrophe prevention – Applying AI to pre-empt and avoid any of the above seems like a no-brainer to me – it also looks like a natural crossover with cyber security, helping us identify problems and faults before they become catastrophes. A central repository of harmful incidents triggered or fuelled by AI projects, recording the technologies and organisations responsible, combined with an algorithm to triage new ideas and projects – or at least warn about potential risks and lessons learnt by others – sounds like a good idea.
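As a rough illustration of that triage idea, here is a deliberately naive sketch: incidents are logged as free text, and a keyword-overlap check flags new proposals that resemble past failures. The example organisations, fields and threshold are hypothetical; a real system would use far better matching than shared words.

```python
# Hypothetical sketch of the "central incident repository" with naive
# keyword triage. All organisations and fields are illustrative.

INCIDENTS = [
    {"org": "ExampleCorp", "summary": "chatbot learned abusive language from users"},
    {"org": "ExampleBank", "summary": "credit model denied loans by postcode"},
]

def triage(proposal, incidents, min_overlap=2):
    """Return past incidents sharing at least `min_overlap` words with the proposal."""
    words = set(proposal.lower().split())
    hits = []
    for inc in incidents:
        overlap = words & set(inc["summary"].lower().split())
        if len(overlap) >= min_overlap:
            hits.append((inc["org"], sorted(overlap)))
    return hits

warnings = triage("train a chatbot on unfiltered language from users", INCIDENTS)
for org, shared in warnings:
    print(f"similar to incident at {org}: shared terms {shared}")
```

Even this crude matching shows the value of the repository: the warning exists only because someone recorded the earlier failure.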
THE BIG OPPORTUNITIES
I believe the use of AI will accelerate as IoT and Blockchain become mainstream and start playing an active role across industries. When this happens, new opportunities will emerge, and some will be important to support.
Triple bottom line – I’ve seen the younger generations talk about wanting to have a positive impact on the world, and these technologies have tremendous potential for that. It struck me while evaluating the start-ups applying to ODINE that this is something people will demand. So, to at least try to balance their impact on the world – the environment, the economy and people – world leaders should start thinking about how to include, in their mission statements, a commitment to a positive environmental, social or local-community impact.
Waste economy – Logistical complexity, lack of information and the absence of real-time engagement tools have made it expensive and difficult for companies to track all the waste generated by their supply chains, operations and products or services. A global, location-aware marketplace for reallocating this waste would be of tremendous value: food banks, the UN and local charities could vastly increase the means at their disposal (both goods and people) and, consequently, their impact and response times.
Open data – To give more people the opportunity to embrace the potential of AI – whether to become entrepreneurs or to improve how their organisations apply it – a layer of (anonymised) data from all businesses should be made available to society. We would probably need a system that tracks and guides what that data can be used for – which sounds like a Blockchain initiative to me – in which communities would submit their biggest challenges and provide access to their data to entrepreneurs, universities, associations and others.
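To make the “anonymised layer” idea concrete, here is a minimal, hypothetical sketch of one pre-publication step: replacing direct identifiers with salted hashes so records can still be joined without exposing who they belong to. The field names and salt handling are assumptions, and salted hashing alone is pseudonymisation, not full anonymisation – a real pipeline would need much more (aggregation, suppression, and so on).

```python
import hashlib

# Hypothetical pre-publication step for the open-data layer: replace
# direct identifiers with salted hashes. Illustrative only; salted
# hashing is pseudonymisation, not full anonymisation.

SALT = b"org-secret-salt"  # kept private by the publishing organisation

def pseudonymise(value: str) -> str:
    """Deterministic, non-reversible token for an identifier."""
    return hashlib.sha256(SALT + value.encode()).hexdigest()[:12]

def anonymise(record: dict, identifiers=("name", "email")) -> dict:
    """Copy a record with its identifier fields replaced by tokens."""
    out = dict(record)
    for field in identifiers:
        if field in out:
            out[field] = pseudonymise(out[field])
    return out

raw = {"name": "Jane Doe", "email": "jane@example.com", "spend": 420}
print(anonymise(raw))
```

Because the tokens are deterministic, two published datasets from the same organisation can still be joined on the hashed identifier – which is the property an open-data layer would rely on.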
What about the Regulators?
It’s up to the professionals working with these technologies to build the safety levers, but regulators need to play a more proactive and “digital” role. AI is all about data and augmentation through systems, devices and experiences.
Regulators need to be more actively involved in how the technology is used. They need to be where the transactions occur – in digital data and tooling form – working as a network of validation APIs that developers, data scientists and companies can run their software against. That would create a fully transparent ecosystem and open the opportunity to use AI to identify risks before any transaction is ever made.
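A toy sketch of what “running software against a validation API” could look like: a transaction is checked against a set of regulator-published rules before execution. The rules, fields and amounts below are invented for illustration and do not correspond to any real regulatory interface.

```python
# Hypothetical sketch of a regulator's validation endpoint: a set of
# named rules a transaction must pass before execution. All rules and
# fields here are illustrative assumptions.

RULES = [
    ("amount must be positive", lambda tx: tx["amount"] > 0),
    ("large transfers need a declared purpose",
     lambda tx: tx["amount"] < 10_000 or bool(tx.get("purpose"))),
]

def validate(tx: dict) -> list[str]:
    """Return the names of all rules the transaction violates."""
    return [name for name, ok in RULES if not ok(tx)]

tx = {"amount": 25_000, "purpose": ""}
violations = validate(tx)
print("blocked:", violations)  # ['large transfers need a declared purpose']
```

In the networked version the post envisions, these rule sets would be published and versioned by the regulator itself, so every participant validates against the same, transparent logic.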
The exponential value of Artificial Intelligence lies not in the algorithms per se, but in the opportunity to apply them to generate value, whether they are ours or crowdsourced.
I welcome your thoughts, ideas and comments, as I believe there is a lot of work to be done by many different entities. We need to start building tools the global AI community can use to ensure a responsible and productive use of these technologies.
If used well, AI can help improve our lives, heal our planet and enable discoveries that will let us build a better future for all mankind.
The million (or billion) dollar question remains: who will build these tools?