Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers, and tech-savvy business and IT leaders. The show is hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator, and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science, and more.
Today we’re joined by Riley Goodside, staff prompt engineer at Scale AI. In our conversation with Riley, we explore LLM capabilities and limitations, prompt engineering, and the mental models required to apply advanced prompting techniques. We dive deep into understanding LLM behavior, discussing the mechanism of autoregressive inference, comparing k-shot and zero-shot prompting, and dissecting the impact of RLHF. We also discuss the idea that prompting is a scaffolding structure that leverages the model’s context to achieve the desired behavior and response, rather than an exercise solely in writing ability.
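To make the k-shot/zero-shot distinction concrete, here is a minimal Python sketch of how the two prompt styles might be assembled; the sentiment task and demonstrations are invented for illustration and are not from the episode:

```python
# Contrasting zero-shot and k-shot (few-shot) prompt construction.
# The task and examples here are made up for illustration.

def zero_shot_prompt(review: str) -> str:
    # No demonstrations: the instruction alone must elicit the behavior.
    return ("Classify the sentiment of this review as Positive or Negative.\n"
            f"Review: {review}\nSentiment:")

def k_shot_prompt(review: str, examples: list[tuple[str, str]]) -> str:
    # Each (review, label) pair becomes scaffolding in the context window;
    # an autoregressive model then continues the established pattern.
    demos = "\n\n".join(f"Review: {r}\nSentiment: {s}" for r, s in examples)
    return f"{demos}\n\nReview: {review}\nSentiment:"

print(k_shot_prompt("The plot dragged badly.",
                    [("Loved every minute.", "Positive"),
                     ("A total waste of time.", "Negative")]))
```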
The complete show notes for this episode can be found at twimlai.com/go/652.
Today we’re joined by Sara Hooker, director at Cohere and head of Cohere For AI, Cohere’s research lab. In our conversation with Sara, we explore some of the challenges with multilingual models, like poor data quality and tokenization, and how data augmentation and preference training can address these bottlenecks. We also discuss the motivating factors behind, and the drawbacks of, the Mixture of Experts technique, and the importance of a common language between ML researchers and hardware architects to address the pain points in frameworks and create better cohesion between the two communities. Sara also highlights the impact language models have had on society and the emotional connections they have created, the benefits and current safety concerns of universal models, and the significance of grounded conversations in characterizing and mitigating the risks of AI development. Along the way, we dive deep into Cohere and Cohere For AI, their Aya project, an open science effort to build a state-of-the-art multilingual generative language model, and some of their recent research papers.
The complete show notes for this episode can be found at twimlai.com/go/651.
Today we’re joined by Luke Zettlemoyer, professor at the University of Washington and a research manager at Meta. In our conversation with Luke, we cover multimodal generative AI, the effect of data on models, and the significance of open source and open science. We explore the grounding problem, the need for visual grounding and embodiment in text-based models, the advantages of discrete tokenization in image generation, and his paper Scaling Laws for Generative Mixed-Modal Language Models, which focuses on simultaneously training LLMs on various modalities. Additionally, we cover his papers Self-Alignment with Instruction Backtranslation and LIMA: Less Is More for Alignment.
The complete show notes for this episode can be found at twimlai.com/go/650.
Today we’re joined by Alex Hanna, the Director of Research at the Distributed AI Research Institute (DAIR). In our conversation with Alex, we discuss AI hype and the importance of tackling its issues and impacts on society. Alex highlights how the hype cycle started, concerning use cases, the incentives driving people toward the rapid commercialization of AI tools, and the need for robust evaluation tools and frameworks to assess and mitigate the risks of these technologies. We also talk about DAIR and how it has crafted its research agenda. We discuss current research projects like DAIR Fellow Asmelash Teka Hadgu’s work supporting machine translation and speech recognition tools for the low-resource Amharic and Tigrinya languages of Ethiopia and Eritrea, in partnership with his startup Lesan.AI. We also explore the “Do Data Sets Have Politics” paper, which codes variables across computer vision data sets and analyzes them qualitatively to uncover the politics inherent in those data sets and the challenges of data set creation.
The complete show notes for this episode can be found at twimlai.com/go/649.
Today we’re joined by Nataniel Ruiz, a research scientist at Google. In our conversation with Nataniel, we discuss his recent work on personalization for text-to-image AI models. Specifically, we dig into DreamBooth, an algorithm that enables “subject-driven generation,” that is, the creation of personalized generative models from a small set of user-provided images of a subject. The personalized models can then be used to generate the subject in various contexts via a text prompt. Nataniel gives us a deep dive into the fine-tuning approach used in DreamBooth, the potential reasons behind the algorithm’s effectiveness, the challenges of fine-tuning diffusion models in this way, such as language drift, how the prior preservation loss technique avoids this setback, and the evaluation challenges and metrics used in DreamBooth. We also touch on his other recent papers, including SuTI, StyleDrop, HyperDreamBooth, and lastly, Platypus.
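For readers who want to see the shape of the objective, here is a rough PyTorch-style sketch of the prior preservation idea as the DreamBooth paper describes it; the function and tensor names are ours for illustration, not Google’s code:

```python
# Illustrative sketch of DreamBooth-style prior preservation, not the
# authors' implementation. eps_model is a noise-prediction diffusion model.
import torch

def dreambooth_loss(eps_model,
                    z_subject, c_subject, noise_subject,  # user-provided subject images
                    z_prior, c_prior, noise_prior,        # samples from the frozen base model
                    t, lambda_prior=1.0):
    # Standard denoising loss on the few subject images, conditioned on a
    # prompt containing a rare identifier (e.g. "a [V] dog").
    subject_loss = torch.mean(
        (eps_model(z_subject, t, c_subject) - noise_subject) ** 2)
    # Prior-preservation term: the same loss on images generated by the
    # frozen original model for the bare class prompt ("a dog"), which
    # counteracts language drift and preserves class diversity.
    prior_loss = torch.mean(
        (eps_model(z_prior, t, c_prior) - noise_prior) ** 2)
    return subject_loss + lambda_prior * prior_loss
```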
The complete show notes for this episode can be found at twimlai.com/go/648.
Today we’re joined by Shreya Rajpal, founder and CEO of Guardrails AI. In our conversation with Shreya, we discuss ensuring the safety and reliability of language models for production applications. We explore the risks and challenges associated with these models, including different types of hallucinations and other LLM failure modes. We also talk about the susceptibility of the popular retrieval augmented generation (RAG) technique to closed-domain hallucination, and how this challenge can be addressed. We then cover the need for robust evaluation metrics and tooling for building with large language models. Lastly, we explore Guardrails, an open-source project that provides a catalog of validators that run on top of language models to efficiently enforce correctness and reliability.
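As a hedged illustration of the validator idea, and not Guardrails’ actual API, here is a toy check that flags closed-domain hallucination by requiring each sentence of a RAG answer to overlap the retrieved context; every name in it is hypothetical:

```python
# Toy RAG output validator; names and logic are illustrative only.
from dataclasses import dataclass

@dataclass
class ValidationResult:
    valid: bool
    reason: str

def grounding_validator(answer: str, retrieved_chunks: list[str]) -> ValidationResult:
    """Naive closed-domain hallucination check: flag any sentence whose
    content words never appear in the retrieved context."""
    context = " ".join(retrieved_chunks).lower()
    for sentence in filter(None, (s.strip() for s in answer.split("."))):
        words = [w for w in sentence.lower().split() if len(w) > 4]
        if words and not any(w in context for w in words):
            return ValidationResult(False, f"unsupported sentence: {sentence!r}")
    return ValidationResult(True, "all sentences overlap retrieved context")
```

In a pipeline, a validator like this would run on the model output before it is returned; on failure, the application can re-query the model or fall back to an “I don’t know” response.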
The complete show notes for this episode can be found at twimlai.com/go/647.
Today we’re joined by Roland Memisevic, a Senior Director at Qualcomm AI Research. In our conversation with Roland, we discuss the significance of language in humanlike AI systems and the advantages and limitations of autoregressive models like Transformers in building them. We cover the current and future role of recurrence in LLM reasoning and the significance of improving grounding in AI, including the potential of developing a sense of self in agents. Along the way, we discuss Fitness Ally, a fitness coach built on a visually grounded large language model, which has served as a platform for Roland’s research into neural reasoning, as well as recent research that explores topics like visual grounding for large language models and state-augmented architectures for AI agents.
The complete show notes for this episode can be found at twimlai.com/go/646.
Today we’re joined by James Zou, an assistant professor at Stanford University. In our conversation with James, we explore the changes in ChatGPT’s behavior over the last few months. We discuss the issues that can arise from inconsistencies in generative AI models, how he tested ChatGPT’s performance on various tasks, comparing the March 2023 and June 2023 versions of both GPT-3.5 and GPT-4, and the possible reasons behind the declining performance of these models. James also shares his thoughts on how CRISPR could potentially revolutionize LLMs and AI systems, and how adding monitoring tools can help track behavioral changes in these models. Finally, we discuss James’ recent paper on pathology image analysis using Twitter data, in which he explores the challenges of data collection and of obtaining large medical datasets, and details the model’s architecture, training, and evaluation process.
The complete show notes for this episode can be found at twimlai.com/go/645.
Today we’re joined by Sophia Sanborn, a postdoctoral scholar at the University of California, Santa Barbara. In our conversation with Sophia, we explore the concept of universality between neural representations and deep neural networks, and how principles of efficiency make it possible to find consistent features across networks and tasks. We also discuss her recent paper on Bispectral Neural Networks, which builds on the Fourier transform and its relation to group theory; the use of the bispectrum to achieve invariance in deep neural networks; the way geometric deep learning extends the concept of CNNs to other domains; and the similarities in the fundamental structure of artificial and biological neural networks, where applying similar constraints leads to the convergence of their solutions.
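To make the invariance property concrete, this short NumPy sketch, ours rather than the paper’s, computes the bispectrum of a 1D signal and checks that it is unchanged under circular translation:

```python
import numpy as np

def bispectrum(x):
    # Third-order spectrum: B(w1, w2) = F(w1) * F(w2) * conj(F(w1 + w2)).
    F = np.fft.fft(x)
    n = len(x)
    w1, w2 = np.meshgrid(np.arange(n), np.arange(n))
    return F[w1] * F[w2] * np.conj(F[(w1 + w2) % n])

rng = np.random.default_rng(0)
x = rng.normal(size=64)
shifted = np.roll(x, 17)  # a circularly translated copy

# Translation multiplies each Fourier coefficient by a phase that cancels
# in the triple product, so the bispectra of both signals match exactly.
assert np.allclose(bispectrum(x), bispectrum(shifted))
print("bispectrum is invariant to circular translation")
```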
The complete show notes for this episode can be found at twimlai.com/go/644.
Today we’re joined by Gokul Swamy, a Ph.D. student at the Robotics Institute at Carnegie Mellon University. In the final conversation of our ICML 2023 series, we sat down with Gokul to discuss his accepted papers at the event, leading off with “Inverse Reinforcement Learning without Reinforcement Learning.” In this paper, Gokul explores the challenges and benefits of inverse reinforcement learning and the potential and advantages it holds for various applications. Next, we explore the “Complementing a Policy with a Different Observation Space” paper, which applies causal inference techniques to accurately estimate sampling balance and make decisions based on limited observed features. Finally, we touch on “Learning Shared Safety Constraints from Multi-task Demonstrations,” which centers on learning safety constraints from demonstrations using an inverse reinforcement learning approach.
The complete show notes for this episode can be found at twimlai.com/go/643.
Amanda is a wife. A mother. A blogger. A Christian.
A charming, beautiful, bubbly, young woman who lives life to the fullest.
But Amanda is dying, with a secret she doesn’t want anyone to know.
She starts a blog detailing her cancer journey, and becomes an inspiration, touching and captivating her local community as well as followers all over the world.
Until one day, investigative producer Nancy gets an anonymous tip telling her to look at Amanda’s blog, setting Nancy on an unimaginable road to uncover Amanda’s secret.
Award-winning journalist Charlie Webster explores this unbelievable, bizarre, but all-too-real tale of a woman from San Jose, California, whose secret ripped a family apart and left a community in shock.
Scamanda is the true story of a woman whose own words held the key to her secret.
New episodes every Monday.
Follow Scamanda on Apple Podcasts, Spotify, or wherever you listen.
Amanda’s blog posts are read by actor Kendall Horn.