The year 2022 marked a significant milestone in Artificial Intelligence (AI). ChatGPT unleashed an AI hurricane on the world, accelerating the AI arms race between technology giants such as Microsoft and Google. Alongside the enormous potential AI can bring, people, governments, and companies are increasingly concerned [1] about the potential (mis)use of AI, which may harm persons’ civil liberties, rights, and physical or psychological safety. Such (mis)use can lead to group discrimination or undermine democratic participation and educational access. The Dutch General Intelligence and Security Service (AIVD) expresses similar concerns about the risks associated with AI technology. In a recent report, the AIVD calls for extra attention to be paid to the development of AI technology, warning that its rapid advancement could pose a severe threat. [2] Italian regulators even went as far as to ban ChatGPT for all users within Italy, forcing OpenAI to block Italian users. [3]

Microsoft has launched initiatives to promote and standardize responsible AI amidst the growing concerns over AI ethics. [4] But can we rely on Big Tech guidance in an area with such a significant commercial interest? Since the introduction of the EU AI Act, and more specifically Article 9, which mandates risk management for high-risk[1] AI systems, more attention has been paid to conducting risk assessments. Brecker et al. stated, “Policymakers need to provide guidelines as every day without clearer regulations means uncertainty for market participants and potentially ethically and legally undesirable AI solutions for society.” They referred to earlier concerns about “un-black boxing” Big Tech. [5] Un-black boxing is needed to enable fact-based and transparent decision-making for managers. [6]