While some sceptics might say that AI will eventually become so complex that humans will not even be able to understand it, others argue that we are already there. This adds to the argument that human oversight is necessary for the future of ethical AI.
Suppose the trajectory of AI development is heading towards a future in which our understanding of the technology lags behind it. In that case, it is imperative that we start prioritising the development of human-led safeguards, standards and assessments.
This is not a novel concept. Across AI domains, from generative AI to military uses of AI, there have been calls for the role of humans in detecting risks, harms and gaps in robustness. In February 2023, for example, the US State Department released the Political Declaration on Responsible Military Use of Artificial Intelligence and Autonomy. The declaration, signed by 63 countries, including the US, China and the UK, puts forth a list of 12 best practices for responsible AI in the military domain, four of which relate directly to the role of humans in responsible AI: human control, human oversight, human judgement and human training.
In July 2023, OpenAI, Anthropic, Microsoft and Google came together to launch The Frontier Model Forum, an industry body overseeing the safe development of the most advanced AI models. ‘Companies creating AI technology have a responsibility to ensure that it is safe, secure and remains under human control,’ Brad Smith, the president of Microsoft, said in a statement. ‘This initiative is a vital step to bring the tech sector together in advancing AI responsibly and tackling the challenges so that it benefits all of humanity.’
So how exactly do we operationalise human oversight to serve our future interests?
London-based software platform Holistic AI has been a pioneer in this field since 2020, championing governance frameworks that enable organisations and enterprises to deploy AI systems that are explainable, robust, unbiased and not susceptible to privacy breaches. A critical measure of an AI system's robustness and safety is how transparently it is deployed, which includes ensuring that people interacting with an AI system have access to publicly available information about how it works. None of this is possible without human oversight.