
Join Lukas Biewald on Gradient Dissent, an AI-focused podcast brought to you by Weights & Biases. Dive into fascinating conversations with industry giants from NVIDIA, Meta, Google, Lyft, OpenAI, and more. Explore the cutting edge of AI and learn the intricacies of bringing models into production.
Deploying Autonomous Mobile Robots with Jean Marc Alkazzi at idealworks
Episode of Gradient Dissent - Weights & Biases
On this episode, we're joined by Jean Marc Alkazzi, Applied AI at idealworks. Jean focuses his attention on applied AI, leveraging autonomous mobile robots (AMRs) to improve efficiency within factories and more.

We discuss:
- Use cases for autonomous mobile robots (AMRs) and how to manage a fleet of them.
- How AMRs interact with humans working in warehouses.
- The challenges of building and deploying autonomous robots.
- Computer vision vs. other types of localization technology for robots.
- The purpose and types of simulation environments for robotic testing.
- The importance of aligning a robotic fleet's workflow with concrete business objectives.
- What the update process looks like for robots.
- The importance of avoiding your own biases when developing and testing AMRs.
- The challenges associated with troubleshooting ML systems.

Resources:
Jean Marc Alkazzi - https://www.linkedin.com/in/jeanmarcjeanazzi/
idealworks | LinkedIn - https://www.linkedin.com/company/idealworks-gmbh/
idealworks | Website - https://idealworks.com/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML
58:05
How EleutherAI Trains and Releases LLMs: Interview with Stella Biderman
Episode of Gradient Dissent - Weights & Biases
On this episode, we're joined by Stella Biderman, Executive Director at EleutherAI and Lead Scientist - Mathematician at Booz Allen Hamilton. EleutherAI is a grassroots collective that enables open-source AI research and focuses on the development and interpretability of large language models (LLMs).

We discuss:
- How EleutherAI got its start and where it's headed.
- The similarities and differences between various LLMs.
- How to decide which model to use for your desired outcome.
- The benefits and challenges of reinforcement learning from human feedback.
- Details around pre-training and fine-tuning LLMs.
- Which types of GPUs are best when training LLMs.
- What separates EleutherAI from other companies training LLMs.
- Details around mechanistic interpretability.
- Why understanding what and how LLMs memorize is important.
- The importance of giving researchers and the public access to LLMs.

Stella Biderman - https://www.linkedin.com/in/stellabiderman/
EleutherAI - https://www.linkedin.com/company/eleutherai/

Resources:
- https://www.eleuther.ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML
57:15
Scaling LLMs and Accelerating Adoption with Aidan Gomez at Cohere
Episode of Gradient Dissent - Weights & Biases
On this episode, we're joined by Aidan Gomez, Co-Founder and CEO at Cohere. Cohere develops and releases a range of innovative AI-powered tools and solutions for a variety of NLP use cases.

We discuss:
- What "attention" means in the context of ML.
- Aidan's role in the "Attention Is All You Need" paper.
- What state-space models (SSMs) are, and how they could be an alternative to transformers.
- What it means for an ML architecture to saturate compute.
- Details around data constraints for when LLMs scale.
- Challenges of measuring LLM performance.
- How Cohere is positioned within the LLM development space.
- Insights around scaling down an LLM into a more domain-specific one.
- Concerns around synthetic content and AI changing public discourse.
- The importance of raising money at healthy milestones for AI development.

Aidan Gomez - https://www.linkedin.com/in/aidangomez/
Cohere - https://www.linkedin.com/company/cohere-ai/

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

Resources:
- https://cohere.ai/
- "Attention Is All You Need"

#OCR #DeepLearning #AI #Modeling #ML
51:30
Neural Network Pruning and Training with Jonathan Frankle at MosaicML
Episode of Gradient Dissent - Weights & Biases
Jonathan Frankle, Chief Scientist at MosaicML and Assistant Professor of Computer Science at Harvard University, joins us on this episode. With comprehensive infrastructure and software tools, MosaicML aims to help businesses train complex machine-learning models using their own proprietary data.

We discuss:
- Details of Jonathan's Ph.D. dissertation, which explores his "Lottery Ticket Hypothesis."
- The role of neural network pruning and how it impacts the performance of ML models.
- Why transformers will be the go-to way to train NLP models for the foreseeable future.
- Why the process of speeding up neural net learning is both scientific and artisanal.
- What MosaicML does, and how it approaches working with clients.
- The challenges for developing AGI.
- Details around ML training policy and ethics.
- Why data brings the magic to customized ML models.
- The many use cases for companies looking to build customized AI models.

Jonathan Frankle - https://www.linkedin.com/in/jfrankle/

Resources:
- https://mosaicml.com/
- The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks

Thanks for listening to the Gradient Dissent podcast, brought to you by Weights & Biases. If you enjoyed this episode, please leave a review to help get the word out about the show. And be sure to subscribe so you never miss another insightful conversation.

#OCR #DeepLearning #AI #Modeling #ML
01:02:00
Shreya Shankar — Operationalizing Machine Learning
Episode of Gradient Dissent - Weights & Biases
About This Episode:
Shreya Shankar is a computer scientist, PhD student in databases at UC Berkeley, and co-author of "Operationalizing Machine Learning: An Interview Study", an ethnographic interview study with 18 machine learning engineers across a variety of industries on their experience deploying and maintaining ML pipelines in production.

Shreya explains the high-level findings of "Operationalizing Machine Learning": variables that indicate a successful deployment (velocity, validation, and versioning), common pain points, and a grouping of the MLOps tool stack into four layers. Shreya and Lukas also discuss examples of data challenges in production, Jupyter Notebooks, and reproducibility.

Show notes (transcript and links): http://wandb.me/gd-shreya

---

💬 Host: Lukas Biewald

---

Subscribe and listen to Gradient Dissent today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
54:38
Jasper AI's Dave Rogenmoser & Saad Ansari on Growing & Maintaining an LLM-Based Company
Episode of Gradient Dissent - Weights & Biases
About this episode:
In this episode of Gradient Dissent, Lukas interviews Dave Rogenmoser (CEO & Co-Founder) and Saad Ansari (Director of AI) of Jasper AI, a generative AI company with a focus on text generation for content like blog posts, articles, and more. The company has seen impressive growth since its launch at the start of 2021.

Lukas talks with Dave and Saad about how Jasper AI was able to sell the capabilities of large language models as a product so successfully, and how they are able to continually improve their product and take advantage of steps forward in the AI industry at large. They also speak on how they keep their business ahead of the competition, where they put their focus in terms of R&D, and how they are able to keep the insights they've learned over the years relevant at all times as their company grows in employee count and company value. Other topics include the potential use of generative AI in domains it hasn't necessarily seen yet, as well as the impact that community and user feedback play on the constant tweaking and tuning processes that machine learning models go through.

Connect with Dave & Saad:
Find Dave on Twitter and LinkedIn. Find Saad on LinkedIn.

---

💬 Host: Lukas Biewald

---

Subscribe and listen to Gradient Dissent today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:09:15
Sarah Catanzaro — Remembering the Lessons of the Last AI Renaissance
Episode of Gradient Dissent - Weights & Biases
Sarah Catanzaro is a General Partner at Amplify Partners and one of the leading investors in AI and ML. Her investments include RunwayML, OctoML, and Gantry.

Sarah and Lukas discuss lessons learned from the "AI renaissance" of the mid-2010s and compare the general perception of ML back then to now. Sarah also provides insights from her perspective as an investor, from selling into tech-forward companies vs. traditional enterprises, to the current state of MLOps/developer tools, to large language models and hype bubbles.

Show notes (transcript and links): http://wandb.me/gd-sarah-catanzaro

---

⏳ Timestamps:
0:00 Intro
1:10 Lessons learned from previous AI hype cycles
11:46 Maintaining technical knowledge as an investor
19:05 Selling into tech-forward companies vs. traditional enterprises
25:09 Building point solutions vs. end-to-end platforms
36:27 LLMs, new tooling, and commoditization
44:39 Failing fast and how startups can compete with large cloud vendors
52:31 The gap between research and industry, and vice versa
1:00:01 Advice for ML practitioners during hype bubbles
1:03:17 Sarah's thoughts on Rust and bottlenecks in deployment
1:11:23 The importance of aligning technology with people
1:15:58 Outro

---

📝 Links
📍 "Operationalizing Machine Learning: An Interview Study" (Shankar et al., 2022), an interview study on deploying and maintaining ML production pipelines: https://arxiv.org/abs/2209.09125

---

Connect with Sarah:
📍 Sarah on Twitter: https://twitter.com/sarahcat21
📍 Sarah's Amplify Partners profile: https://www.amplifypartners.com/investment-team/sarah-catanzaro

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:16:23
Cristóbal Valenzuela — The Next Generation of Content Creation and AI
Episode of Gradient Dissent - Weights & Biases
Cristóbal Valenzuela is co-founder and CEO of Runway ML, a startup that's building the future of AI-powered content creation tools. Runway's research areas include diffusion systems for image generation.

Cris gives a demo of Runway's video editing platform. Then, he shares how his interest in combining technology with creativity led to Runway, and where he thinks the world of computation and content might be headed next. Cris and Lukas also discuss Runway's tech stack and research.

Show notes (transcript and links): http://wandb.me/gd-cristobal-valenzuela

---

⏳ Timestamps:
0:00 Intro
1:06 How Runway uses ML to improve video editing
6:04 A demo of Runway's video editing capabilities
13:36 How Cris entered the machine learning space
18:55 Cris' thoughts on the future of ML for creative use cases
28:46 Runway's tech stack
32:38 Creativity, and keeping humans in the loop
36:15 The potential of audio generation and new mental models
40:01 Outro

---

🎥 Runway's AI Film Festival is accepting submissions through January 23! 🎥
They are looking for art and artists that are at the forefront of AI filmmaking. Submissions should be between 1-10 minutes long, and a core component of the film should include generative content.
📍 https://aiff.runwayml.com/

---

📝 Links
📍 "High-Resolution Image Synthesis with Latent Diffusion Models" (Rombach et al., 2022), the research paper behind Stable Diffusion: https://research.runwayml.com/publications/high-resolution-image-synthesis-with-latent-diffusion-models
📍 Lexman Artificial, a 100% AI-generated podcast: https://twitter.com/lexman_ai

---

Connect with Cris and Runway:
📍 Cris on Twitter: https://twitter.com/c_valenzuelab
📍 Runway on Twitter: https://twitter.com/runwayml
📍 Careers at Runway: https://runwayml.com/careers/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan

---

Subscribe and listen to Gradient Dissent today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
40:26
Jeremy Howard — The Simple but Profound Insight Behind Diffusion
Episode of Gradient Dissent - Weights & Biases
Jeremy Howard is a co-founder of fast.ai, the non-profit research group behind the popular massive open online course "Practical Deep Learning for Coders" and the open source deep learning library "fastai".

Jeremy is also a co-founder of #Masks4All, a global volunteer organization founded in March 2020 that advocated for the public adoption of homemade face masks in order to help slow the spread of COVID-19. His Washington Post article "Simple DIY masks could help flatten the curve" went viral in late March/early April 2020 and is associated with the U.S. CDC's change in guidance a few days later to recommend wearing masks in public.

In this episode, Jeremy explains how diffusion works and how individuals with limited compute budgets can engage meaningfully with large, state-of-the-art models. Then, as our first-ever repeat guest on Gradient Dissent, Jeremy revisits a previous conversation with Lukas on Python vs. Julia for machine learning. Finally, Jeremy shares his perspective on the early days of COVID-19, and what his experience as one of the earliest and most high-profile advocates for widespread mask-wearing was like.

Show notes (transcript and links): http://wandb.me/gd-jeremy-howard-2

---

⏳ Timestamps:
0:00 Intro
1:06 Diffusion and generative models
14:40 Engaging with large models meaningfully
20:30 Jeremy's thoughts on Stable Diffusion and OpenAI
26:38 Prompt engineering and large language models
32:00 Revisiting Julia vs. Python
40:22 Jeremy's science advocacy during early COVID days
1:01:03 Researching how to improve children's education
1:07:43 The importance of executive buy-in
1:11:34 Outro
1:12:02 Bonus: Weights & Biases

---

📝 Links
📍 Jeremy's previous Gradient Dissent episode (8/25/2022): http://wandb.me/gd-jeremy-howard
📍 "Simple DIY masks could help flatten the curve. We should all wear them in public.", Jeremy's viral Washington Post article: https://www.washingtonpost.com/outlook/2020/03/28/masks-all-coronavirus/
📍 "An evidence review of face masks against COVID-19" (Howard et al., 2021), one of the first peer-reviewed papers on the effectiveness of wearing masks: https://www.pnas.org/doi/10.1073/pnas.2014564118
📍 Jeremy's Twitter thread summary of "An evidence review of face masks against COVID-19": https://twitter.com/jeremyphoward/status/1348771993949151232
📍 Read more about Jeremy's mask-wearing advocacy: https://www.smh.com.au/world/north-america/australian-expat-s-push-for-universal-mask-wearing-catches-fire-in-the-us-20200401-p54fu2.html

---

Connect with Jeremy and fast.ai:
📍 Jeremy on Twitter: https://twitter.com/jeremyphoward
📍 fast.ai on Twitter: https://twitter.com/FastDotAI
📍 Jeremy on LinkedIn: https://www.linkedin.com/in/howardjeremy/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan
01:12:57
Jerome Pesenti — Large Language Models, PyTorch, and Meta
Episode of Gradient Dissent - Weights & Biases
Jerome Pesenti is the former VP of AI at Meta, a tech conglomerate that includes Facebook, WhatsApp, and Instagram, and one of the most exciting places where AI research is happening today.

Jerome shares his thoughts on Transformer-based large language models, and why he's excited by the progress but skeptical of the term "AGI". Then, he discusses some of the practical applications of ML at Meta (recommender systems and moderation!) and dives into the story behind Meta's development of PyTorch. Jerome and Lukas also chat about Jerome's time at IBM Watson and in drug discovery.

Show notes (transcript and links): http://wandb.me/gd-jerome-pesenti

---

⏳ Timestamps:
0:00 Intro
0:28 Jerome's thoughts on large language models
12:53 AI applications and challenges at Meta
18:41 The story behind developing PyTorch
26:40 Jerome's experience at IBM Watson
28:53 Drug discovery, AI, and changing the game
36:10 The potential of education and AI
40:10 Meta and AR/VR interfaces
43:43 Why NVIDIA is such a powerhouse
47:08 Jerome's advice to people starting their careers
48:50 Going back to coding, and the challenges of scaling
52:11 Outro

---

Connect with Jerome:
📍 Jerome on Twitter: https://twitter.com/an_open_mind
📍 Jerome on LinkedIn: https://www.linkedin.com/in/jpesenti/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
52:34
D. Sculley — Technical Debt, Trade-offs, and Kaggle
Episode of Gradient Dissent - Weights & Biases
D. Sculley is CEO of Kaggle, the beloved and well-known data science and machine learning community.

D. discusses his influential 2014 paper "Machine Learning: The High Interest Credit Card of Technical Debt" and what the current challenges of deploying models in the real world are now, in 2022. Then, D. and Lukas chat about why Kaggle is like a rain forest, and about Kaggle's historic, current, and potential future roles in the broader machine learning community.

Show notes (transcript and links): http://wandb.me/gd-d-sculley

---

⏳ Timestamps:
0:00 Intro
1:02 Machine learning and technical debt
11:18 MLOps, increased stakes, and realistic expectations
19:12 Evaluating models methodically
25:32 Kaggle's role in the ML world
33:34 Kaggle competitions, datasets, and notebooks
38:49 Why Kaggle is like a rain forest
44:25 Possible future directions for Kaggle
46:50 Healthy competitions and self-growth
48:44 Kaggle's relevance in a compute-heavy future
53:49 AutoML vs. human judgment
56:06 After a model goes into production
1:00:00 Outro

---

Connect with D. and Kaggle:
📍 D. on LinkedIn: https://www.linkedin.com/in/d-sculley-90467310/
📍 Kaggle on Twitter: https://twitter.com/kaggle

---

Links:
📍 "Machine Learning: The High Interest Credit Card of Technical Debt" (Sculley et al., 2014): https://research.google/pubs/pub43146/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan, Anish Shah, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:00:25
Emad Mostaque — Stable Diffusion, Stability AI, and What’s Next
Episode of Gradient Dissent - Weights & Biases
Emad Mostaque is CEO and co-founder of Stability AI, a startup and network of decentralized developer communities building open AI tools. Stability AI is the company behind Stable Diffusion, the well-known, open source, text-to-image generation model.

Emad shares the story and mission behind Stability AI (unlocking humanity's potential with open AI technology), and explains how Stability's role as a community catalyst and compute provider might evolve as the company grows. Then, Emad and Lukas discuss what the future might hold in store: big models vs. "optimal" models, better datasets, and more decentralization.

🎶 Special note: This week's theme music was composed by Weights & Biases' own Justin Tenuto with help from Harmonai's Dance Diffusion.

Show notes (transcript and links): http://wandb.me/gd-emad-mostaque

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Angelica Pan, Lavanya Shukla, Anish Shah

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:10:28
Jehan Wickramasuriya — AI in High-Stress Scenarios
Episode of Gradient Dissent - Weights & Biases
Jehan Wickramasuriya is the Vice President of AI, Platform & Data Services at Motorola Solutions, a global leader in public safety and enterprise security.

In this episode, Jehan discusses how Motorola Solutions uses AI to simplify data streams to help maximize human potential in high-stress situations. He also shares his thoughts on augmenting synthetic data with real data and the challenges posed in partnering with startups.

---

⏳ Timestamps:
00:00 Intro
00:42 How AI fits into the safety/security industry
09:33 Event matching and object detection
14:47 Running models on the right hardware
17:46 Scaling model evaluation
23:58 Monitoring and evaluation challenges
26:30 Identifying and sorting issues
30:27 Bridging vision and language domains
39:25 Challenges and promises of natural language technology
41:35 Production environment
43:15 Using synthetic data
49:59 Working with startups
53:55 Multi-task learning, meta-learning, and experience
56:44 Optimization and testing across multiple platforms
59:36 Outro

---

Connect with Jehan and Motorola Solutions:
📍 Jehan on LinkedIn: https://www.linkedin.com/in/jehanw/
📍 Jehan on Twitter: https://twitter.com/jehan/
📍 Motorola Solutions on Twitter: https://twitter.com/MotoSolutions/
📍 Careers at Motorola Solutions: https://www.motorolasolutions.com/en_us/about/careers.html

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:00:01
Will Falcon — Making Lightning the Apple of ML
Episode of Gradient Dissent - Weights & Biases
Will Falcon is the CEO and co-founder of Lightning AI, a platform that enables users to quickly build and publish ML models.

In this episode, Will explains how Lightning addresses the challenges of a fragmented AI ecosystem and reveals which framework PyTorch Lightning was originally built upon (hint: not PyTorch!). He also shares lessons he took from his experience serving in the military and offers a recommendation to veterans who want to work in tech.

---

⏳ Timestamps:
00:00 Intro
01:00 From SEAL training to FAIR
04:17 Stress-testing Lightning
07:55 Choosing PyTorch over TensorFlow and other frameworks
13:16 Components of the Lightning platform
17:01 Launching Lightning from Facebook
19:09 Similarities between leadership and research
22:08 Lessons from the military
26:56 Scaling PyTorch Lightning to Lightning AI
33:21 Hiring the right people
35:21 The future of Lightning
39:53 Reducing algorithm complexity in self-supervised learning
42:19 A fragmented ML landscape
44:35 Outro

---

Connect with Lightning:
📍 Website: https://lightning.ai
📍 Twitter: https://twitter.com/LightningAI
📍 LinkedIn: https://www.linkedin.com/company/pytorch-lightning/
📍 Careers: https://boards.greenhouse.io/lightningai

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Anish Shah, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
45:21
Aaron Colak — ML and NLP in Experience Management
Episode of Gradient Dissent - Weights & Biases
Aaron Colak is the Leader of Core Machine Learning at Qualtrics, an experience management company that takes large language models and applies them to real-world, B2B use cases.

In this episode, Aaron describes mixing classical linguistic analysis with deep learning models, and how Qualtrics organized its machine learning organization and models to leverage the best of these techniques. He also explains how advances in NLP have invited new opportunities in low-resource languages.

Show notes (transcript and links): http://wandb.me/gd-aaron-colak

---

⏳ Timestamps:
00:00 Intro
00:57 Evolving from surveys to experience management
04:56 Detecting sentiment with ML
10:57 Working with large language models and rule-based systems
14:50 Zero-shot learning, NLP, and low-resource languages
20:11 Letting customers control data
25:13 Deep learning and tabular data
28:40 Hyperscalers and performance monitoring
34:54 Combining deep learning with linguistics
40:03 A sense of accomplishment
42:52 Causality and observational data in healthcare
45:09 Challenges of interdisciplinary collaboration
49:27 Outro

---

Connect with Aaron and Qualtrics:
📍 Aaron on LinkedIn: https://www.linkedin.com/in/aaron-r-colak-3522308/
📍 Qualtrics on Twitter: https://twitter.com/qualtrics/
📍 Careers at Qualtrics: https://www.qualtrics.com/careers/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
50:00
Jordan Fisher — Skipping the Line with Autonomous Checkout
Episode of Gradient Dissent - Weights & Biases
Jordan Fisher is the CEO and co-founder of Standard AI, an autonomous checkout company that's pushing the boundaries of computer vision.

In this episode, Jordan discusses "the Wild West" of the MLOps stack and tells Lukas why Rust beats Python. He also explains why AutoML shouldn't be overlooked and uses a bag of chips to help explain the Manifold Hypothesis.

Show notes (transcript and links): http://wandb.me/gd-jordan-fisher

---

⏳ Timestamps:
00:00 Intro
00:40 The origins of Standard AI
08:30 Getting Standard into stores
18:00 Supervised learning, the advent of synthetic data, and the manifold hypothesis
24:23 What's important in an MLOps stack
27:32 The merits of AutoML
30:00 Deep learning frameworks
33:02 Python versus Rust
39:32 Raw camera data versus video
42:47 The future of autonomous checkout
48:02 Sharing the StandardSim data set
52:30 Picking the right tools
54:30 Overcoming dynamic data set challenges
57:35 Outro

---

Connect with Jordan and Standard AI:
📍 Jordan on LinkedIn: https://www.linkedin.com/in/jordan-fisher-81145025/
📍 Standard AI on Twitter: https://twitter.com/StandardAi
📍 Careers at Standard AI: https://careers.standard.ai/

---

💬 Host: Lukas Biewald
📹 Producers: Riley Fields, Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
57:57
Drago Anguelov — Robustness, Safety, and Scalability at Waymo
Episode of Gradient Dissent - Weights & Biases
Drago Anguelov is a Distinguished Scientist and Head of Research at Waymo, an autonomous driving technology company and subsidiary of Alphabet Inc.

We begin by discussing Drago's work on the original Inception architecture, winner of the 2014 ImageNet challenge and the paper that introduced the inception module. Then, we explore milestones and current trends in autonomous driving, from Waymo's release of the Open Dataset to the trade-offs between modular and end-to-end systems. Drago also shares his thoughts on finding rare examples, and the challenges of creating scalable and robust systems.

Show notes (transcript and links): http://wandb.me/gd-drago-anguelov

---

⏳ Timestamps:
0:00 Intro
0:45 The story behind the Inception architecture
13:51 Trends and milestones in autonomous vehicles
23:52 The challenges of scalability and simulation
30:19 Why LiDAR and mapping are useful
35:31 Waymo Via and autonomous trucking
37:31 Robustness and unsupervised domain adaptation
40:44 Why Waymo released the Waymo Open Dataset
49:02 The domain gap between simulation and the real world
56:40 Finding rare examples
1:04:34 The challenges of production requirements
1:08:36 Outro

---

Connect with Drago and Waymo:
📍 Drago on LinkedIn: https://www.linkedin.com/in/dragomiranguelov/
📍 Waymo on Twitter: https://twitter.com/waymo/
📍 Careers at Waymo: https://waymo.com/careers/

---

Links:
📍 Inception v1: https://arxiv.org/abs/1409.4842
📍 "SPG: Unsupervised Domain Adaptation for 3D Object Detection via Semantic Point Generation", Qiangeng Xu et al. (2021): https://arxiv.org/abs/2108.06709
📍 "GradTail: Learning Long-Tailed Data Using Gradient-based Sample Weighting", Zhao Chen et al. (2022): https://arxiv.org/abs/2201.05938

---

💬 Host: Lukas Biewald
📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:09:00
James Cham — Investing in the Intersection of Business and Technology
Episode of Gradient Dissent - Weights & Biases
James Cham is a co-founder and partner at Bloomberg Beta, an early-stage venture firm that invests in machine learning and the future of work, the intersection between business and technology.

James explains how his approach to investing in AI has developed over the last decade, which signals of success he looks for in the ever-adapting world of venture startups (tip: look for the "gradient of admiration"), and why it's so important to demystify ML for executives and decision-makers. Lukas and James also discuss how new technologies create new business models, and what the ethical considerations would be in a world where machine learning is accepted to be possibly fallible.

Show notes (transcript and links): http://wandb.me/gd-james-cham

---

⏳ Timestamps:
0:00 Intro
0:46 How investment in AI has changed and developed
7:08 Creating the first MI landscape infographics
10:30 The impact of ML on organizations and management
17:40 Demystifying ML for executives
21:40 Why signals of successful startups change over time
27:07 ML and the emergence of new business models
37:58 New technology vs. new consumer goods
39:50 What James considers when investing
44:19 Ethical considerations of accepting that ML models are fallible
50:30 Reflecting on past investment decisions
52:56 Thoughts on consciousness and Theseus' paradox
59:08 Why it's important to increase general ML literacy
1:03:09 Outro
1:03:30 Bonus: How James' faith informs his thoughts on ML

---

Connect with James:
📍 Twitter: https://twitter.com/jamescham
📍 Bloomberg Beta: https://github.com/Bloomberg-Beta/Manual

---

Links:
📍 "Street-Level Algorithms: A Theory at the Gaps Between Policy and Decisions" by Ali Alkhatib and Michael Bernstein (2019): https://doi.org/10.1145/3290605.3300760

---

💬 Host: Lukas Biewald
📹 Producers: Cayla Sharp, Angelica Pan, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:06:11
Boris Dayma — The Story Behind DALL·E mini, the Viral Phenomenon
Episode of Gradient Dissent - Weights & Biases
Check out this report by Boris about DALL-E mini:
https://wandb.ai/dalle-mini/dalle-mini/reports/DALL-E-mini-Generate-images-from-any-text-prompt--VmlldzoyMDE4NDAy
https://wandb.ai/_scott/wandb_example/reports/Collaboration-in-ML-made-easy-with-W-B-Teams--VmlldzoxMjcwMDU5
https://twitter.com/weirddalle

Connect with Boris:
📍 Twitter: https://twitter.com/borisdayma

---

💬 Host: Lukas Biewald
📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
35:58
Tristan Handy — The Work Behind the Data Work
Episode of Gradient Dissent - Weights & Biases
Tristan Handy is CEO and founder of dbt Labs. dbt (data build tool) simplifies the data transformation workflow and helps organizations make better decisions.

Lukas and Tristan dive into the history of the modern data stack and the subsequent challenges that dbt was created to address; communities of identity and product-led growth; and thoughts on why SQL has survived and thrived for so long. Tristan also shares his hopes for the future of BI tools and the data stack.

Show notes (transcript and links): http://wandb.me/gd-tristan-handy

---

⏳ Timestamps:
0:00 Intro
0:40 How dbt makes data transformation easier
4:52 dbt and avoiding bad data habits
14:23 Agreeing on organizational ground truths
19:04 Staying current while running a company
22:15 The origin story of dbt
26:08 Why dbt is conceptually simple but hard to execute
34:47 The dbt community and the bottom-up mindset
41:50 The future of data and operations
47:41 dbt and machine learning
49:17 Why SQL is so ubiquitous
55:20 Bridging the gap between the ML and data worlds
1:00:22 Outro

---

Connect with Tristan:
📍 Twitter: https://twitter.com/jthandy
📍 The Analytics Engineering Roundup: https://roundup.getdbt.com/

---

💬 Host: Lukas Biewald
📹 Producers: Cayla Sharp, Angelica Pan, Sanyam Bhutani, Lavanya Shukla

---

Subscribe and listen to our podcast today!
👉 Apple Podcasts: http://wandb.me/apple-podcasts
👉 Google Podcasts: http://wandb.me/google-podcasts
👉 Spotify: http://wandb.me/spotify
01:00:47
You might also like
Practical AI
Making artificial intelligence practical, productive & accessible to everyone. Practical AI is a show in which technology professionals, business people, students, enthusiasts, and expert guests engage in lively discussions about Artificial Intelligence and related topics (Machine Learning, Deep Learning, Neural Networks, GANs, MLOps, AIOps, LLMs & more). The focus is on productive implementations and real-world scenarios that are accessible to everyone. If you want to keep up with the latest advances in AI, while keeping one foot in the real world, then this is the show for you!
The AI Podcast
Explore how the latest technologies are shaping our world, from groundbreaking discoveries to transformative sustainability efforts. The NVIDIA AI Podcast shines a light on the stories and solutions behind the most innovative changes, helping to inspire and educate listeners. Every week, we'll bring you another tale, another 30-minute interview, as we build a real-time oral history of AI that's already garnered nearly 6.5 million listens and been acclaimed as one of the best AI and machine learning podcasts. Listen in and get inspired. More information: https://ai-podcast.nvidia.com/
This Week in Machine Learning & AI Podcast
Machine learning and artificial intelligence are dramatically changing the way businesses operate and people live. The TWIML AI Podcast brings the top minds and ideas from the world of ML and AI to a broad and influential community of ML/AI researchers, data scientists, engineers and tech-savvy business and IT leaders. Hosted by Sam Charrington, a sought-after industry analyst, speaker, commentator and thought leader. Technologies covered include machine learning, artificial intelligence, deep learning, natural language processing, neural networks, analytics, computer science, data science and more.