The AI Strategist - 09.06.23
1. Global Warning
In the news - UK to host first global summit on Artificial Intelligence - the UK is leading the charge on AI Safety, with the Prime Minister announcing ‘the first major global summit on AI safety.’ Post-Brexit, the UK claims to be in a more nimble position when it comes to AI regulation.
AI Assurance was explained in the 2021 National AI Strategy as ‘a number of governance mechanisms for third parties to develop trust in the compliance and risk of a system or organisation.’ Essentially, it guides how AI systems can be made safe, fair and trusted. In March this year, the UK introduced a policy paper on a ‘pro-innovation approach to regulation.’
This week the Centre for Data Ethics and Innovation published 14 case studies on ‘AI Assurance techniques’ to help guide organisations.
Why this matters - it is encouraging to see politicians moving from rhetoric to action. Back in 2019, I attended the Responsible AI forum and recall Demis Hassabis imagining a utopian future of a co-ordinated global AI regulator employing thousands of data scientists. It is therefore notable that the UK is stressing the importance of ‘working to develop an international framework.’
Another positive is the potential for this to alleviate the jurisdictional headache for any global business keeping an eye on emerging AI Safety regulation across the EU, US, UK and beyond. I have felt this pain first-hand in writing the Safety & Ethics strategy for CHAPTR.
Hopefully this is the start of politicians and regulators shaping and controlling the AI Safety & Ethics narrative. If you are a little sceptical and worried about this, you should be - Gary Marcus provides a good summary as to why. There is so much talk about transparency at a time when we don’t know how LLMs work or what data they are trained on.
It is time for politicians to show resolve and seek balanced views in developing regulation. It is also important to learn the algorithmic lessons of the past and think carefully about big tech incentives.
2. Follow your own path
In the news - Google has launched a new set of Generative AI training content - at no cost!
Why this matters - a lot of the recent media coverage of Generative AI plays on fear. Fear often stems from a lack of understanding. It feels very logical that employees (regardless of job function or seniority) will want to educate themselves in this area. It feels somewhat ‘web3’ for Google to attempt to democratise these skills.
Due to the excitement, prevalence and very obvious potential of Generative AI, we will need to see a major re-skilling across the economy. The skills area I am most passionate about is what we referred to as ‘AI Translators’ in my previous company: in simple terms, non-technical people who work in various ways to identify, scope and deliver AI projects. Communicating to organisations exactly what this role is can be a challenge. It is therefore fantastic to see Google formalising and structuring this content as educational ‘journeys’.
Generative AI learning path (10 days) - completing this course during ‘work hours’ would represent 4% of a working year (10 days out of roughly 250 working days). In simple terms, modules 1-4 are for a non-technical audience and 5-10 are for data scientists, machine learning engineers and other more technical roles.
Having worked in AI since 2019, I have kept an eye on skills development for the non-technical. One course I took (self-funded) cost £2,200 and was worth every penny. It wasn’t AI-focused, but it helped me think through the necessary innovation and business model disruption in relation to AI/Generative AI. I now see more and more AI courses - at very different price points.
3. AI will save the world
In the news - in a positive end to the newsletter, Marc Andreessen pens a typically philosophical piece. In a welcome respite from the current onslaught of doom-mongering, we are not only reminded that such fears are nothing new, but also encouraged that AI could actually make everything we care about better.
The article tackles five AI risks (it will kill us, ruin society, take our jobs, exacerbate inequality and enable people to do bad things) head-on, and suggests that the biggest risk is China, which not only has a vastly different vision to the West but also global AI aspirations. That, incidentally, reminded me of this great book.
Andreessen’s proposed solution is to accelerate Generative AI development as much as possible - a bit of a head-scratcher when much of the community wants a pause. This begins to help us understand, both politically and economically, why the UK wants to be seen as pro-innovation. The genie is out of the bottle: very well-funded, VC-backed startups are already building at pace. Countries at the forefront of regulation (with teeth) will attract further investment and talent.
Why this matters - Andreessen makes the argument that startups ‘should be allowed to build AI as fast and aggressively as they can.’ I can tell you first-hand that building a Generative AI storytelling app for children safely and ethically is FAR from straightforward. Many Generative AI apps should be built and launched with caution and experimentation, and for many use cases I question whether this is happening. Every founder faces a choice: what is ethically the right thing to do, how fast do I need to build and launch given anticipated competition, and how much money do I have available to invest in AI safety and ethics?
I entirely agree with the point that the current media coverage is guiding us in the wrong direction. Generative AI promises unimaginable economic potential. Much more needs to be done to explain the input data of foundation models. When it comes to explaining how these models work, I think it is more nuanced: I expect that 80% of the effort should go into experimentation and proof that the safety guardrails around foundation model apps actually work, with the remaining 20% spent helping the average consumer understand how the underlying model works. The sketch below illustrates what I mean by testable guardrails.
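To make that concrete, here is a minimal sketch of a guardrail you can actually test. Everything here is my own illustration rather than anything from the article or from a real product: BLOCKLIST, call_model and guarded_generate are hypothetical stand-ins, and a production app would use a proper moderation model rather than keyword matching.

```python
# A minimal, testable guardrail wrapper. BLOCKLIST, call_model and
# guarded_generate are hypothetical stand-ins (my illustration, not a
# real product's safeguards); a production app would use a proper
# moderation model rather than keyword matching.

BLOCKLIST = {"violence", "self-harm"}

def call_model(prompt: str) -> str:
    """Placeholder for a foundation model API call."""
    return f"<story about {prompt}>"

def guarded_generate(prompt: str) -> str:
    # Input check: refuse prompts that touch blocked topics.
    if any(term in prompt.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that topic."
    output = call_model(prompt)
    # Output check: refuse generations that drift into blocked topics.
    if any(term in output.lower() for term in BLOCKLIST):
        return "Sorry, I can't help with that topic."
    return output

# The "80% effort" lives here: an ever-growing suite of adversarial
# test prompts that must pass before every release.
assert guarded_generate("a dragon who discovers violence") == "Sorry, I can't help with that topic."
assert guarded_generate("a dragon who learns to share").startswith("<story")
```

The wrapper itself is trivial; the real work (and cost) is the test suite of adversarial prompts that proves it holds up.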
It is notable that ‘AI augmentation’ is prominent in the article. I am seeing a lot of commentary on this. Generative models will be used to infuse generalised knowledge into other ‘traditional’, smaller ML models that solve more specific tasks. They can be thought of as components of any software system - the sketch below shows one common pattern.
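As a rough illustration of that augmentation pattern (again my own sketch, not the article's; llm_label is a hypothetical stand-in for a foundation model call), the generative model can label raw data once, offline, and a small traditional classifier is then trained on those labels to serve the specific task cheaply in production:

```python
# "AI augmentation" as distillation: a generative model labels raw text
# once, offline, and a small traditional classifier is trained on those
# labels. llm_label is a hypothetical stand-in for an LLM call.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def llm_label(text: str) -> str:
    """Placeholder for an LLM labelling call (e.g. a zero-shot prompt)."""
    return "complaint" if any(w in text.lower() for w in ("refund", "broken")) else "other"

raw_texts = [
    "My order arrived broken, please help",
    "I would like a refund for this item",
    "What are your opening hours?",
    "Do you ship to France?",
]

# 1. The generative model infuses its generalised knowledge as labels.
labels = [llm_label(t) for t in raw_texts]

# 2. A small, cheap model is distilled from those labels - this is the
#    "component" that actually ships inside the product.
small_model = make_pipeline(TfidfVectorizer(), LogisticRegression())
small_model.fit(raw_texts, labels)

print(small_model.predict(["The item I received is broken"]))
```

The design point: the expensive generative model runs once at training time, while the component that ships is small, fast and auditable.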
Fascinating to see the prediction that ‘Every child will have an AI tutor that is infinitely patient, infinitely compassionate, infinitely knowledgeable, infinitely helpful.’ I came across this mind-blowing article this week. Anyone working in edTech or education should start thinking about the potential of ‘thought and response’ chains - sketched below. It is a very exciting near-term product opportunity.
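For anyone curious what a ‘thought and response’ chain might look like in practice, here is a minimal sketch under my own assumptions (generate and tutor_reply are hypothetical placeholders, not any real tutoring product's API). The model first reasons privately about the student's answer; that diagnosis then feeds a second prompt which produces the patient, student-facing reply:

```python
# A "thought and response" chain for a tutoring app. generate() and
# tutor_reply() are hypothetical placeholders (my sketch, not a real
# product's API); the two-step structure is the point.

def generate(prompt: str) -> str:
    """Placeholder for any foundation model call."""
    return f"<model output for: {prompt[:40]}...>"

def tutor_reply(question: str, student_answer: str) -> str:
    # Step 1 - "thought": privately diagnose the misconception.
    # This reasoning is never shown to the student.
    thought = generate(
        f"Question: {question}\nStudent answer: {student_answer}\n"
        "Diagnose the student's misconception step by step."
    )
    # Step 2 - "response": turn the diagnosis into a patient,
    # encouraging hint that guides without giving the answer away.
    return generate(
        f"Diagnosis: {thought}\n"
        "Write a short, encouraging hint for the student."
    )

print(tutor_reply("What is 3/4 + 1/2?", "4/6"))
```

Separating the private diagnosis from the public reply is what lets the tutor be ‘infinitely patient’: the critique stays in the chain, and only the encouragement reaches the child.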