Sociotechnical & Safety-by-Design AI Experts for Lloyd's Register Foresight
Is the safe adoption of AI possible? If you've been following the AI news over the last week, it's pretty clear that voluntary guardrails, self-regulation and international regulatory alignment can't be relied upon to produce safer outcomes. So what are the alternatives?
For the Lloyd's Register Foundation Foresight Review on the safe adoption of AI, we're investigating alternative routes to safe adoption, including sociotechnical methods of assurance and safety-by-design approaches to AI development. As both are emergent areas, I'm really keen to speak with people working in either of them, particularly if your work is early stage and not easily discoverable.
If that's you, please do get in touch! I'd love to get a better sense of works in progress and the kinds of challenges you're facing getting this work off the ground.
And - while I'm here - I'll (unusually for me) be in Brussels for a couple of days in mid-March and am very keen to get some conversations in the diary.
Lloyd's Register Foundation