There have been some interesting developments around the ethics and governance of artificial intelligence (AI) in recent days. First, we read that Google's DeepMind has set up an Ethics and Society research unit, with the rationale that "AI can be of extraordinary benefit to the world, but only if held to the highest ethical standards. Technology is not value neutral, and technologists must take responsibility for the ethical and social impact of their work. … We are committed to deep research into ethical and social questions, the inclusion of many voices, and ongoing critical reflection." The unit has a number of Fellows ('independent advisors'), including Oxford's Nick Bostrom, to "help provide oversight, critical feedback and guidance for our research strategy and work program".