Philosophical Disquisitions

Things hid and barr'd from common sense


In this episode, John and Sven discuss risk and technology ethics. They focus, in particular, on the perennially popular and widely discussed problems of value alignment (how to get technology to align with our values) and control (making sure technology doesn't do something terrible). They start the conversation with the famous case study of Stanislav Petrov and the prevention of nuclear war.

You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.


Recommendations for further reading

Discount

You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.


In this episode, John and Sven discuss the methods of technology ethics. What exactly is it that technology ethicists do? How can they answer the core questions about the value of technology and our moral response to it? Should they consult their intuitions? Run experiments? Use formal theories? The possible answers to these questions are considered with a specific case study on the ethics of self-driving cars.

You can listen below or download the episode here. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon and a range of other podcasting services.


Recommended Reading

Discount

You can purchase a 20% discounted copy of This is Technology Ethics by using the code TEC20 at the publisher's website.



I am very excited to announce the launch of a new podcast series with my longtime friend and collaborator Sven Nyholm. The podcast is intended to introduce key themes, concepts, arguments and ideas arising from the ethics of technology. It roughly follows the structure of the book This is Technology Ethics by Sven, but in a loose and conversational style. Over the nine episodes, we will cover the nature of technology and ethics, the methods of technology ethics, and the problems of control, responsibility, agency and behaviour change that are central to many contemporary debates about the ethics of technology. We will also cover perennially popular topics such as whether a machine could have moral status, whether a robot could (or should) be a friend, lover or work colleague, and the desirability of merging with machines.

The podcast is intended to be accessible to a wide audience and could provide an ideal companion to an introductory or advanced course in the ethics of technology (with particular focus on AI, robotics and other digital technologies). I will be releasing the podcast on the Philosophical Disquisitions podcast feed, but I have also created an independent podcast feed and website, if you are just interested in it. The first episode can be downloaded here or you can listen below. You can also subscribe on Apple, Spotify, Amazon and a range of other podcasting services. If you go to the website or subscribe via the standalone feed, you can download the first two episodes now.

There is also a promotional tie-in with the book publisher. If you use the code 'TEC20' on the publisher's website (here) you can get 20% off the regular price.

In this episode, I chat to Matthijs Maas about pausing AI development. Matthijs is currently a Senior Research Fellow at the Legal Priorities Project and a Research Affiliate at the Centre for the Study of Existential Risk at the University of Cambridge. In our conversation, we focus on the possibility of slowing down or limiting the development of technology. Many people are sceptical of this possibility, but Matthijs has been doing extensive research on historical case studies of apparently successful technological slowdown. We discuss these case studies in some detail.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links

Recording of Matthijs's talk about this topic: https://www.youtube.com/watch?v=vn4ADfyrJ0Y&t=2s
Slides from this talk: https://drive.google.com/file/d/1J9RW49IgSAnaBHr3-lJG9ZOi8ZsOuEhi/view?usp=share_link
Previous essay / primer, laying out the basics of the argument: https://verfassungsblog.de/paths-untaken/
Incomplete longlist database of candidate case studies: https://airtable.com/shrVHVYqGnmAyEGsz

In this episode of the podcast, I chat to Atoosa Kasirzadeh. Atoosa is an Assistant Professor/Chancellor's Fellow at the University of Edinburgh. She is also the Director of Research at the Centre for Technomoral Futures at Edinburgh. We chat about the alignment problem in AI development, roughly: how do we ensure that AI acts in a way that is consistent with human values? We focus, in particular, on the alignment problem for language models such as ChatGPT, Bard and Claude, and how some old ideas from the philosophy of language could help us to address this problem.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links

Atoosa's webpage
Atoosa's paper (with Iason Gabriel) 'In Conversation with AI: Aligning Language Models with Human Values'

[UPDATED WITH CORRECT EPISODE LINK] In this episode I chat to Miles Brundage. Miles leads the policy research team at OpenAI. Unsurprisingly, we talk a lot about GPT and generative AI. Our conversation covers the risks that arise from their use, their speed of development, how they should be regulated, the harms they may cause and the opportunities they create. We also talk a bit about what it is like working at OpenAI and why Miles made the transition from academia to industry (sort of). Lots of useful insight in this episode from someone at the coalface of AI development.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

In this episode of the podcast, I chat to Jess Morley. Jess is currently a DPhil candidate at the Oxford Internet Institute. Her research focuses on the use of data in healthcare, oftentimes on the impact of big data and AI, but, as she puts it herself, usually on 'less whizzy' things. Sadly, our conversation focuses on the whizzy things, in particular the recent hype about large language models and their potential to disrupt the way in which healthcare is managed and delivered. Jess is sceptical about the immediate potential for disruption but thinks it is worth exploring, carefully, the use of this technology in healthcare.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links

Jess's Website
Jess on Twitter
John Snow's cholera map

In this episode, I chat to Robert Long about AI sentience. Robert is a philosopher who works on issues related to the philosophy of mind, cognitive science and AI ethics. He is currently a philosophy fellow at the Centre for AI Safety in San Francisco. He completed his PhD at New York University. We do a deep dive on the concept of sentience, why it is important, and how we can tell whether an animal or AI is sentient. We also discuss whether it is worth taking the topic of AI sentience seriously.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links

Robert's webpage
Robert's substack

In this episode of the podcast, I talk to Thore Husfeldt about the impact of GPT on education. Thore is a Professor of Computer Science at the IT University of Copenhagen, where he specialises in pretty technical algorithm-related research. He is also affiliated with Lund University in Sweden. Beyond his technical work, Thore is interested in ideas at the intersection of computer science, philosophy and educational theory. In our conversation, Thore outlines four models of what a university education is for, and considers how GPT disrupts these models. We then talk, in particular, about the 'signalling' theory of higher education and how technologies like GPT undercut the value of certain signals, and thereby undercut some forms of assessment. Since I am an educator, I really enjoyed this conversation, but I firmly believe there is much food for thought in it for everyone.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

In this episode of the podcast, I chat to Anton Korinek about the economic impacts of GPT. Anton is a Professor of Economics at the University of Virginia and the Economics Lead at the Centre for AI Governance. He has researched widely on the topic of automation and labour markets. We talk about whether GPT will substitute for or complement human workers; the disruptive impact of GPT on economic organisation; the jobs/roles most immediately at risk; the impact of GPT on wage levels; the skills needed to survive in an AI-enhanced economy, and much more.

You can download the episode here or listen below. You can also subscribe to the podcast on Apple, Spotify, Google, Amazon or whatever your preferred service might be.

Relevant Links

Anton's homepage
Anton's paper outlining 25 uses of LLMs for academic economists
Anton's dialogue with GPT, Claude and the economist David Autor