The use cases of AI are continually evolving. We have seen a recent explosion in large language model (LLM) capabilities, and we have barely scratched the surface of what they can do. But alongside the exploration of how AI can help us in our daily lives, significant investments are being made in new areas, such as AI for military capabilities. The future of responsible AI, however, cannot exist without humans.
The time spent worrying that we may become obsolete is the perfect opportunity for humans to realise how vital we are to the future; to exercise all the things that make us human, such as reflexivity and reflectivity, critical thinking and emotional cognition. We must understand that the harms and biases proliferated by AI are a mirror of fractures that already exist in society.
AI is a tool, and if extra effort is not taken to correct the biases it may have picked up through design, training data or a lack of robustness, it could cause harm at unprecedented speed.
In the Netherlands, for example, 1.4m people were wrongfully flagged as fraudulent by a biased, AI-powered risk-profiling system, which resulted in tens of thousands of people being pushed into poverty and thousands of children being mistakenly taken into foster care.
The question then becomes not whether AI is going to replace us but what role we as humans play in ensuring that AI does not promote harm or have irreversible effects on humanity.
Some sceptics say that AI will eventually become so complex that humans will no longer be able to understand it; others argue that we are already there. This only strengthens the argument that human oversight is necessary for the future of ethical AI.
If the trajectory of AI development is heading towards a future in which our understanding of the technology cannot keep up, then it is imperative that we start prioritising the development of human-led safeguards, standards and assessments.
This is not a novel concept. Across AI domains, from generative AI to military uses of AI, there have been calls for humans to play a role in detecting risks, harms and gaps in robustness. In February 2023, for example, the US State Department released the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The declaration, signed by 63 countries, including the US, China and the UK, puts forth a list of 12 best practices for responsible AI in the military domain, four of which relate directly to the role of humans: human control, human oversight, human judgement and human training.
In July 2023, OpenAI, Anthropic, Microsoft and Google came together to launch The Frontier Model Forum, an industry body overseeing the safe development of the most advanced AI models. ‘Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,’ Brad Smith, the president of Microsoft, said in a statement. ‘This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.’
So how exactly do we operationalise human oversight to serve our future interests?
London-based software platform Holistic AI has been a pioneer in this field since 2020, championing governance frameworks that help organisations and enterprises build AI systems that are explainable, robust, unbiased and not susceptible to privacy breaches. A critical measure of the robustness and safety of an AI system is how transparently it is deployed, which includes making information about how a system works publicly available and telling people when they are interacting with it. None of this is possible without human oversight.
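To make the idea of a human-led bias assessment concrete, here is a minimal sketch of one widely used fairness check, the disparate impact ratio (the so-called four-fifths rule), that a reviewer might run over an automated system's decisions. This is purely illustrative: the data, group names and threshold are hypothetical, and it is not a description of Holistic AI's own methodology.

```python
# Illustrative sketch only: a simple disparate impact check that a human
# reviewer might run over a model's decisions. Data and threshold are
# hypothetical; this is not any vendor's actual audit methodology.

def disparate_impact_ratio(outcomes, protected, reference):
    """Ratio of favourable-outcome rates between a protected group and a reference group.

    `outcomes` is a list of (group, decision) pairs, where decision is 1 for a
    favourable outcome (for example, a benefits claim approved) and 0 otherwise.
    """
    def rate(group):
        decisions = [d for g, d in outcomes if g == group]
        return sum(decisions) / len(decisions) if decisions else 0.0

    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else 0.0


if __name__ == "__main__":
    # Hypothetical decisions from an automated risk-profiling system:
    # group_a is approved 80% of the time, group_b only 50% of the time.
    decisions = [("group_a", 1)] * 80 + [("group_a", 0)] * 20 \
              + [("group_b", 1)] * 50 + [("group_b", 0)] * 50

    ratio = disparate_impact_ratio(decisions, protected="group_b", reference="group_a")
    print(f"Disparate impact ratio: {ratio:.2f}")

    # A ratio below 0.8 is a common red flag. It is not an automatic verdict;
    # the point is to trigger review by a person, keeping a human in the loop.
    if ratio < 0.8:
        print("Below the 0.8 threshold: flag for human review.")
```

The metric itself is deliberately simple; the value lies in who acts on it. A number like this does not replace judgement, it prompts it, which is precisely the kind of human oversight the declaration and industry bodies above are calling for.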