The New Stack Makers
Podcast

The New Stack Podcast is all about the developers, software engineers and operations people who build at-scale architectures that change the way we develop and deploy software. For more content from The New Stack, subscribe on YouTube at: https://www.youtube.com/c/TheNewStack

The AI Code Generation Problem Nobody's Talking About
In this episode of The New Stack Makers, Nitric CEO Steve Demchuk discusses how the frustration of building frontend apps within rigid FinTech environments led to the creation of the Nitric framework — a tool designed to eliminate the friction between developers and cloud infrastructure. Unlike traditional Infrastructure as Code (IaC), where developers must manage both app logic and infrastructure definitions separately, Nitric introduces “Infrastructure from Code.” This approach allows developers to focus solely on application logic while the platform infers and automates infrastructure needs using SDKs and CLI tools across multiple languages and cloud providers. Demchuk emphasizes that Nitric doesn't remove platform team control but enforces it consistently. Guardrails defined by platform teams guide infrastructure provisioning, ensuring security and compliance — even as developers use AI tools to rapidly generate code. The result is a streamlined workflow where developers move faster, AI enhances productivity, and platform teams retain oversight. This episode offers engineering leaders insight into a paradigm shift in how cloud infrastructure is managed in the AI era. Learn more from The New Stack about the latest insights about Nitric: Building a Serverless Meme Generator With Nitric and OpenAI; Why Most Companies Are Struggling With Infrastructure as Code. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
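The "Infrastructure from Code" idea described above can be sketched in a few lines of Python. This is an illustrative toy, not the actual Nitric SDK: the app declares the resources it needs, and the framework records them so the platform can provision (and guardrail-check) them automatically.

```python
# Toy sketch of infrastructure-from-code (hypothetical API, not Nitric's):
# application code declares resources; the framework infers what to provision.

provisioning_plan = []  # what the platform would create from the code below

def bucket(name):
    """Declare a storage bucket; the framework records it for provisioning."""
    provisioning_plan.append({"type": "bucket", "name": name})
    return name

def api(name):
    """Declare an HTTP API; also recorded rather than hand-written as IaC."""
    provisioning_plan.append({"type": "api", "name": name})
    return name

# Application code only states what it needs:
uploads = bucket("uploads")
main_api = api("main")

# The platform team's guardrails would validate this inferred plan before
# any cloud resources are actually created.
print(provisioning_plan)
```

The point of the sketch: the developer never writes a separate infrastructure definition; the plan falls out of the application code itself.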
Internet & Technology · 3 days ago · 19:28
The New Bottleneck: AI That Codes Faster Than Humans Can Review
CodeRabbit, led by founder Harjot Gill, is tackling one of software development's biggest bottlenecks: the human code review process. While AI coding tools like GitHub Copilot have sped up code generation, they’ve inadvertently slowed down shipping due to increased complexity in code reviews. Developers now often review AI-generated code they didn’t write, leading to misunderstandings, bugs, and security risks. In an episode of The New Stack Makers, Gill discusses how CodeRabbit leverages advanced reasoning models — OpenAI’s o1, o3 mini, and Anthropic’s Claude series — to automate and enhance code reviews. Unlike rigid, rule-based static analysis tools, CodeRabbit builds rich context at scale by spinning up sandbox environments for pull requests and allowing AI agents to navigate codebases like human reviewers. These agents can run CLI commands, analyze syntax trees, and pull in external context from Jira or vulnerability databases. Gill envisions a hybrid future where AI handles the grunt work of code review, empowering humans to focus on architecture and intent — ultimately reducing bugs, delays, and development costs. Learn more from The New Stack about the latest insights about AI code reviews: CodeRabbit's AI Code Reviews Now Live Free in VS Code, Cursor; AI Coding Agents Level Up from Helpers to Team Players; Augment Code: An AI Coding Tool for 'Real' Development Work. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
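One technique mentioned above, analyzing syntax trees the way a human reviewer would, can be illustrated with Python's standard-library `ast` module. The rule below is a generic example of a review check, not CodeRabbit's code: it flags bare `except:` clauses, a common anti-pattern that swallows all errors.

```python
import ast

# Sample code under review: a bare `except:` hides every failure, including
# KeyboardInterrupt and typos inside the handler.
SOURCE = """
try:
    risky()
except:
    pass
"""

def find_bare_excepts(source):
    """Walk the syntax tree and report bare except handlers with line numbers."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.ExceptHandler) and node.type is None:
            findings.append(f"line {node.lineno}: bare 'except:' swallows all errors")
    return findings

issues = find_bare_excepts(SOURCE)
print(issues)
```

A tree-based check like this understands structure (it knows which node is an exception handler), which is why it is more robust than grepping for the string `except:`.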
Internet & Technology · 5 days ago · 20:17
Google Cloud Next Wrap-Up
At the close of this year’s Google Cloud Next, The New Stack’s Alex Williams, AI editor Frederic Lardinois, and analyst Janakiram MSV discussed the event’s dominant theme: AI agents. The conversation focused heavily on agent frameworks, noting a shift from last year's third-party tools like Langchain, CrewAI, and Microsoft’s Autogen, to first-party offerings from model providers themselves. Google’s newly announced Agent Development Kit (ADK) highlights this trend, following closely on the heels of OpenAI’s agent SDK. MSV emphasized the significance of this shift, calling it a major milestone as Google joins the race alongside Microsoft and OpenAI. Despite the buzz, Lardinois pointed out that many companies are still exploring how AI agents can fit into real-world workflows. They also highlighted how Google now delivers a full-stack AI development experience — from models to deployment platforms like Vertex AI. New enterprise tools like Agent Space and Agent Garden further signal Google’s commitment to making agents a core part of modern software development. Learn more from The New Stack about the latest in AI agents: How AI Agents Will Change the Web for Users and Developers; AI Agents: A Comprehensive Introduction for Developers; AI Agents Are Coming for Your SaaS Stack. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 1 week ago · 18:22
Agentic AI and A2A in 2025: From Prompts to Processes
Agentic AI represents the next phase beyond generative AI, promising systems that not only generate content but also take autonomous actions within business processes. In a conversation recorded at Google Cloud Next, Kevin Laughridge of Deloitte explains that businesses are moving from AI pilots to production-scale deployments. Agentic AI enables decision-making, reasoning, and action across complex enterprise environments, reducing the need for constant human input. A key enabler is Google’s newly announced open Agent2Agent (A2A) protocol, which allows AI agents from different vendors to communicate and collaborate securely across platforms. Over 50 companies, including PayPal, Salesforce, and Atlassian, are already adopting it. However, deploying agentic AI at scale requires more than individual tools — it demands an AI platform with runtime frameworks, UIs, and connectors. These platforms allow enterprises to integrate agents across clouds and systems, paving the way for AI that is collaborative, adaptive, and embedded in core operations. As AI becomes foundational, developers are transitioning from coding to architecting dynamic, learning systems. Learn more from The New Stack about the latest insights about Agent2Agent Protocol: Google’s Agent2Agent Protocol Helps AI Agents Talk to Each Other; A2A, MCP, Kafka and Flink: The New Stack for AI Agents. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
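To make the cross-vendor communication concrete, here is roughly what a task request from one agent to another looks like over a JSON-RPC-style protocol such as A2A. The field names below are simplified for illustration, not the normative A2A schema:

```python
import json

# Illustrative shape of an agent-to-agent task request (simplified fields,
# not the official A2A specification): one agent asks another to do work.
task_request = {
    "jsonrpc": "2.0",
    "method": "tasks/send",
    "params": {
        "id": "task-001",
        "message": {
            "role": "user",
            "parts": [{"type": "text", "text": "Summarize Q3 pipeline by region"}],
        },
    },
}

# On the wire, this is just JSON — which is what lets agents from different
# vendors interoperate without sharing code.
payload = json.dumps(task_request)
print(payload)
```

Because the envelope is plain JSON over HTTP, any vendor's agent can parse it, which is the interoperability point the protocol is after.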
Internet & Technology · 1 week ago · 19:18
Your AI Coding Buddy Is Always Available at 2 a.m.
Aja Hammerly, director of developer relations at Google, sees AI as the always-available coding partner developers have long wished for — especially in those late-night bursts of inspiration. In a conversation with Alex Williams at Google Cloud Next, she described AI-assisted coding as akin to having a virtual pair programmer who can fill in gaps and offer real-time feedback. Hammerly urges developers to start their AI journey with tools that assist in code writing and explanation before moving into more complex AI agents. She distinguishes two types of DevEx AI: using AI to build apps and using it to eliminate developer toil. For Hammerly, this includes letting AI handle frontend work while she focuses on backend logic. The newly launched Firebase Studio exemplifies this dual approach, offering an AI-enhanced IDE with flexible tools like prototyping, code completion, and automation. Her advice? Developers should explore how AI fits into their unique workflow — because development, at its core, is deeply personal and individual. Learn more from The New Stack about the latest AI insights with Google Cloud: Google AI Coding Tool Now Free, With 90x Copilot’s Output; Gemini 2.5 Pro: Google’s Coding Genius Gets an Upgrade; Q&A: How Google Itself Uses Its Gemini Large Language Model. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 2 weeks ago · 20:43
Google AI Infrastructure PM On New TPUs, Liquid Cooling and More
At Google Cloud Next '25, the company introduced Ironwood, its most advanced custom Tensor Processing Unit (TPU) to date. With 9,216 chips per pod delivering 42.5 exaflops of compute power, Ironwood doubles the performance per watt compared to its predecessor. Senior product manager Chelsie Czop explained that designing TPUs involves balancing power, thermal constraints, and interconnectivity. Google's long-term investment in liquid cooling, now in its fourth generation, plays a key role in managing the heat generated by these powerful chips. Czop highlighted the incremental design improvements made visible through changes in the data center setup, such as liquid cooling pipe placements. Customers often ask whether to use TPUs or GPUs, but the answer depends on their specific workloads and infrastructure. Some, like Moloco, have seen a 10x performance boost by moving directly from CPUs to TPUs. However, many still use both TPUs and GPUs. As models evolve faster than hardware, Google relies on collaborations with teams like DeepMind to anticipate future needs. Learn more from The New Stack about the latest AI infrastructure insights from Google Cloud: Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure; A2A, MCP, Kafka and Flink: The New Stack for AI Agents. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
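A quick back-of-the-envelope check on the figures quoted above: 42.5 exaflops spread across a 9,216-chip pod implies a per-chip throughput of roughly 4.6 petaflops.

```python
# Sanity-check the Ironwood pod numbers from the episode summary:
# 42.5 exaflops per pod, 9,216 chips per pod.
pod_exaflops = 42.5
chips_per_pod = 9216

# 1 exaflop = 1,000 petaflops
per_chip_petaflops = pod_exaflops * 1000 / chips_per_pod
print(round(per_chip_petaflops, 2))  # roughly 4.61 petaflops per chip
```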
Internet & Technology · 2 weeks ago · 19:38
Google Cloud Therapist on Bringing AI to Cloud Native Infrastructure
At Google Cloud Next, Bobby Allen, Group Product Manager for Google Kubernetes Engine (GKE), emphasized GKE’s foundational role in supporting AI platforms. While AI dominates current tech conversations, Allen highlighted that cloud-native infrastructure like Kubernetes is what enables AI workloads to function efficiently. GKE powers key Google services like Vertex AI and is trusted by organizations including DeepMind, gaming companies, and healthcare providers for AI model training and inference. Allen explained that GKE offers scalability, elasticity, and support for AI-specific hardware like GPUs and TPUs, making it ideal for modern workloads. He noted that Kubernetes was built with capabilities — like high availability and secure orchestration — that are now essential for AI deployment. Looking forward, GKE aims to evolve into a model router, allowing developers to access the right AI model based on function, not vendor, streamlining the development experience. Allen described GKE as offering maximum control with minimal technical debt, future-proofed by Google’s continued investment in open source and scalable architecture. Learn more from The New Stack about the latest insights with Google Cloud: Google Kubernetes Engine Customized for Faster AI Work; KubeCon Europe: How Google Will Evolve Kubernetes in the AI Era; Apache Ray Finds a Home on the Google Kubernetes Engine. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 3 weeks ago · 24:04
VMware's Kubernetes Evolution: Quashing Complexity
Without a well-integrated Kubernetes platform, developers waste time managing infrastructure instead of focusing on code. VMware addresses this with VCF, a pre-integrated Kubernetes solution that includes components like Harbor, Velero, and Istio, all managed by VMware. While some worry about added complexity from abstraction, Turner dismissed concerns about virtualization overhead, pointing to benchmarks showing 98.3% of bare metal performance for virtualized AI workloads. He emphasized that AI is driving nearly half of Kubernetes deployments, prompting VMware’s partnership with Nvidia to support GPU virtualization. Turner also highlighted VMware's open source leadership, contributing to major projects and ensuring Kubernetes remains cloud-independent and standards-based. VMware aims to simplify Kubernetes and AI workload management while staying committed to the open ecosystem. Learn more from The New Stack about the latest insights with VMware: Has VMware Finally Caught Up With Kubernetes?; VMware’s Golden Path. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 3 weeks ago · 30:40
Prequel: Software Errors Be Gone
Prequel is launching a new developer-focused service aimed at democratizing software error detection — an area typically dominated by large cloud providers. Co-founded by Lyndon Brown and Tony Meehan, both former NSA engineers, Prequel introduces a community-driven observability approach centered on Common Reliability Enumerations (CREs). CREs categorize recurring production issues, helping engineers detect, understand, and communicate problems without reinventing solutions or working in isolation. Their open-source tools, cre and prereq, allow teams to build and share detectors that catch bugs and anti-patterns in real time — without exposing sensitive data, thanks to edge processing using WebAssembly. The urgency behind Prequel’s mission stems from the rapid pace of AI-driven development, increased third-party code usage, and rising infrastructure costs. Traditional observability tools may surface symptoms, but Prequel aims to provide precise problem definitions and actionable insights. While observability giants like Datadog and Splunk dominate the market, Brown and Meehan argue that engineers still feel overwhelmed by data and underpowered in diagnostics — something they believe CREs can finally change. Learn more from The New Stack about the latest Observability insights: Why Consolidating Observability Tools Is a Smart Move; Building an Observability Culture: Getting Everyone Onboard. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
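The "shared detector" idea behind CREs can be sketched simply: a rule that names a known failure mode and a pattern that recognizes it in logs. The rule ID, description, and pattern below are hypothetical illustrations, not entries from Prequel's actual rule set.

```python
import re

# Toy sketch of a community-shared reliability detector in the spirit of a
# CRE. The rule ID and pattern here are hypothetical examples.
RULE = {
    "id": "CRE-EXAMPLE-001",
    "description": "Connection pool exhaustion",
    "pattern": re.compile(r"connection pool exhausted|too many connections"),
}

def detect(log_lines, rule):
    """Return the log lines that match a rule's known failure signature."""
    return [line for line in log_lines if rule["pattern"].search(line)]

logs = [
    "2025-04-01T02:14:07Z INFO request served in 12ms",
    "2025-04-01T02:14:09Z ERROR too many connections, rejecting client",
]
hits = detect(logs, RULE)
print(hits)
```

Because the rule names the problem ("connection pool exhaustion"), a match communicates a diagnosis, not just a symptom, which is the distinction the founders draw against traditional observability tooling.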
Internet & Technology · 3 weeks ago · 05:13
Arm’s Open Source Leader on Meeting the AI Challenge
At Arm, open source is the default approach, with proprietary software requiring justification, says Andrew Wafaa, fellow and senior director of software communities. Speaking at KubeCon + CloudNativeCon Europe, Wafaa emphasized Arm’s decade-long commitment to open source, highlighting its investment in key projects like the Linux kernel, GCC, and LLVM. This investment is strategic, ensuring strong support for Arm’s architecture through vital tools and system software. Wafaa also challenged the hype around GPUs in AI, asserting that CPUs — especially those enhanced with Arm’s Scalable Matrix Extension (SME2) and Scalable Vector Extension (SVE2) — are often more suitable for inference workloads. CPUs offer greater flexibility, and Arm’s innovations aim to reduce dependency on expensive GPU fleets. On the AI framework front, Wafaa pointed to PyTorch as the emerging hub, likening its ecosystem-building potential to Kubernetes. As a PyTorch Foundation board member, he sees PyTorch becoming the central open source platform in AI development, with broad community and industry backing. Learn more from The New Stack about the latest insights about Arm: Edge Wars Heat Up as Arm Aims to Outflank Intel, Qualcomm; Arm: See a Demo About Migrating an x86-Based App to ARM64. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 1 month ago · 18:21
Why Kubernetes Cost Optimization Keeps Failing
In today’s uncertain economy, businesses are tightening costs, including for Kubernetes (K8s) operations, which are notoriously difficult to optimize. Yodar Shafrir, co-founder and CEO of ScaleOps, explained at KubeCon + CloudNativeCon Europe that dynamic, cloud-native applications have constantly shifting loads, making resource allocation complex. Engineers must provision enough resources to handle spikes without overspending, but in large production clusters with thousands of applications, manual optimization often fails. This leads to 70–80% resource waste and performance issues. Developers typically prioritize application performance over operational cost, and AI workloads further strain resources. Existing optimization tools offer static recommendations that quickly become outdated due to the dynamic nature of workloads, risking downtime. Shafrir emphasized that real-time, fully automated solutions like ScaleOps' platform are crucial. By dynamically adjusting container-level resources based on real-time consumption and business metrics, ScaleOps improves application reliability and eliminates waste. Their approach shifts Kubernetes management from static to dynamic resource allocation. Listen to the full episode for more insights and ScaleOps' roadmap. Learn more from The New Stack about the latest in scaling Kubernetes and managing operational costs: ScaleOps Adds Predictive Horizontal Scaling, Smart Placement; ScaleOps Dynamically Right-Sizes Containers at Runtime. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
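The static allocation the episode critiques is the standard Kubernetes requests/limits block, as in this generic container spec fragment. Values like these are a point-in-time guess: sensible when written, stale once traffic shifts, which is exactly the gap dynamic right-sizing targets.

```yaml
# Generic Kubernetes container resource spec (illustrative values):
# static numbers like these must be revisited by hand as load changes.
resources:
  requests:
    cpu: "500m"      # guaranteed CPU; over-requesting here is pure waste
    memory: "512Mi"
  limits:
    cpu: "1"         # throttled above this, even if the node is idle
    memory: "1Gi"    # exceeded -> container is OOM-killed
```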
Internet & Technology · 1 month ago · 17:22
How Heroku Is ‘Re-Platforming’ Its Platform
Heroku has been undergoing a major transformation, re-platforming its entire Platform as a Service (PaaS) offering over the past year and a half. This ambitious effort, dubbed “Fir,” will soon reach general availability. According to Betty Junod, CMO and SVP at Heroku (owned by Salesforce), the overhaul includes a shift to Kubernetes and OCI standards, reinforcing Heroku’s commitment to open source. The platform now features Heroku Cloud Native Buildpacks, which let developers create container images without Dockerfiles. Originally built on Ruby on Rails and predating Docker and AWS, Heroku now supports eight programming languages. The company has also deepened its open source engagement by becoming a platinum member of the Cloud Native Computing Foundation (CNCF), contributing to projects like OpenTelemetry. Additionally, Heroku has open sourced its Twelve-Factor Apps methodology, inviting the community to help modernize it to address evolving needs such as secrets management and workload identity. This signals a broader effort to align Heroku’s future with the cloud native ecosystem. Learn more from The New Stack about Heroku's approach to Platform-as-a-Service: Return to PaaS: Building the Platform of Our Dreams; Heroku Moved Twelve-Factor Apps to Open Source. What’s Next?; How Heroku Is Positioned To Help Ops Engineers in the GenAI Era. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
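With Cloud Native Buildpacks, the Dockerfile is replaced by a small project descriptor plus a builder image that detects the language and assembles an OCI image. The fragment below is a simplified sketch of such a descriptor (field names follow the Buildpacks project-descriptor convention but are abbreviated here); the builder tag is Heroku's published builder image.

```toml
# project.toml (simplified sketch): with buildpacks, this metadata replaces
# a handwritten Dockerfile — detection and image assembly are automated.
[_]
schema-version = "0.2"

[io.buildpacks]
builder = "heroku/builder:24"
```

A developer would then run the `pack` CLI (or the platform's build pipeline) against the app directory; the buildpack inspects the source, picks the right language toolchain, and emits a standard OCI image.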
Internet & Technology · 1 month ago · 18:01
Container Security and AI: A Talk with Chainguard's Founder
In this episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe, Alex Williams speaks with Ville Aikas, Chainguard founder and early Kubernetes contributor. They reflect on the evolution of container security, particularly how early assumptions — like trusting that users would validate container images — proved problematic. Aikas recalls the lack of secure defaults, such as allowing containers to run as root, stemming from the team’s internal Google perspective, which led to unrealistic expectations about external security practices. The Kubernetes community has since made strides with governance policies, secure defaults, and standard practices like avoiding long-lived credentials and supporting federated authentication. Aikas founded Chainguard to address the need for trusted, minimal, and verifiable container images — offering zero-CVE images, transparent toolchains, and full SBOMs. This security-first philosophy now extends to virtual machines and Java dependencies via Chainguard Libraries. The discussion also highlights the rising concerns around AI/ML security in Kubernetes, including complex model dependencies, GPU integrations, and potential attack vectors — prompting Chainguard’s move toward locked-down AI images. Learn more from The New Stack about Container Security and AI: Chainguard Takes Aim At Vulnerable Java Libraries; Clean Container Images: A Supply Chain Security Revolution; Revolutionizing Offensive Security: A New Era With Agentic AI. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
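The minimal-image pattern discussed above is typically expressed as a multi-stage build: compile and install dependencies on a full "-dev" image, then copy only the results into a stripped-down runtime image with no shell or package manager. The sketch below uses Chainguard's public Python image tags; the application details are illustrative.

```dockerfile
# Illustrative multi-stage build on minimal images (app details hypothetical).
# Build stage: the -dev variant includes pip and build tooling.
FROM cgr.dev/chainguard/python:latest-dev AS build
WORKDIR /app
COPY requirements.txt .
RUN pip install --prefix=/install -r requirements.txt

# Runtime stage: the minimal image ships no shell or package manager,
# shrinking the attack surface the episode describes.
FROM cgr.dev/chainguard/python:latest
WORKDIR /app
COPY --from=build /install /usr/local
COPY app.py .
ENTRYPOINT ["python", "app.py"]
```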
Internet & Technology · 1 month ago · 20:51
Kelsey Hightower, AWS's Eswar Bala on Open Source's Evolution
In a candid episode of The New Stack Makers, recorded at KubeCon + CloudNativeCon Europe in London, Kubernetes pioneer Kelsey Hightower and AWS’s Eswar Bala explored the evolving relationship between enterprise cloud providers and open source software. Hightower highlighted open source's origins as a grassroots movement challenging big vendors, and shared how it gave people — especially those without traditional tech credentials — a way into the industry. Recalling his own journey, Hightower emphasized that open source empowered individuals through contribution over credentials. Bala traced the early development of Kubernetes and his own transition from building container orchestration systems to launching AWS’s Elastic Kubernetes Service (EKS), driven by growing customer demand. The discussion touched on how open source is now central to enterprise cloud strategies, with AWS not only contributing but creating projects like Karpenter, Cedar, and Kro. Both speakers agreed that open source's collaborative model — where companies build in public and customers drive innovation — has reshaped the cloud ecosystem, turning former tensions into partnerships built on community-driven progress. Learn more from The New Stack about the relationship between enterprise cloud providers and open source software: The Metamorphosis of Open Source: An Industry in Transition; The Complex Relationship Between Cloud Providers and Open Source; How Open Source Has Turned the Tables on Enterprise Software. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 1 month ago · 37:52
The Kro Project: Giving Kubernetes Users What They Want
In a rare show of collaboration, Google, Amazon, and Microsoft have joined forces on Kro — the Kubernetes Resource Orchestrator — an open source, cloud-agnostic tool designed to simplify custom resource orchestration in Kubernetes. Announced during KubeCon + CloudNativeCon Europe, Kro was born from strong customer demand for a Kubernetes-native solution that works across cloud providers without vendor lock-in. Nic Slattery, product manager at Google, and Jesse Butler, principal product manager at AWS, shared with The New Stack that unlike many enterprise products, Kro didn’t stem from top-down strategy but from consistent customer "pull" experienced by all three companies. It aims to reduce complexity by allowing platform teams to offer simplified interfaces to developers, enabling resource requests without needing deep service-specific knowledge. Kro also represents a unique cross-company collaboration, driven by a shared mission and open source values. Though still in its alpha stage, the project has already attracted 57 contributors in just seven months. The team is now focused on refining core features and preparing for a production-ready release — all while maintaining a narrowly scoped, community-first approach. Learn more from The New Stack about Kro: One Mighty kro; One Giant Leap for Kubernetes Resource Orchestration; Kubernetes Gets a New Resource Orchestrator in the Form of Kro; Orchestrate Cloud Native Workloads With Kro and Kubernetes. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
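The "simplified interface" Kro enables looks roughly like the sketch below: a platform team publishes a small custom API, and Kro expands each instance of it into the underlying Kubernetes resources. Field names and the `WebApp` schema here are illustrative, based on Kro's ResourceGraphDefinition concept, not a complete working definition.

```yaml
# Sketch of a Kro ResourceGraphDefinition (simplified, illustrative fields):
# developers create a tiny "WebApp" object; Kro expands it into real resources.
apiVersion: kro.run/v1alpha1
kind: ResourceGraphDefinition
metadata:
  name: webapp
spec:
  schema:
    apiVersion: v1alpha1
    kind: WebApp          # the simplified API developers actually see
    spec:
      name: string
      replicas: integer | default=2
  resources:
    - id: deployment      # one of the underlying resources Kro manages
      template:
        apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: ${schema.spec.name}
```

A developer then requests a `WebApp` with just a name and replica count, without needing to know the Deployment (or Service, or cloud-specific) details underneath.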
Internet & Technology · 1 month ago · 21:51
OpenSearch: What’s Next for the Search and Analytics Suite?
OpenSearch has evolved significantly since its 2021 launch, recently reaching a major milestone with its move to the Linux Foundation. This shift from company-led to foundation-based governance has accelerated community contributions and enterprise adoption, as discussed by NetApp’s Amanda Katona in a New Stack Makers episode recorded at KubeCon + CloudNativeCon Europe. NetApp, an early adopter of OpenSearch following Elasticsearch’s licensing change, now offers managed services on the platform and contributes actively to its development. Katona emphasized how neutral governance under the Linux Foundation has lowered barriers to enterprise contribution, noting a 56% increase in users since the transition and growing interest from developers. OpenSearch 3.0, featuring a Lucene 10 upgrade, promises faster search capabilities — especially relevant as data volumes surge. NetApp’s ongoing investments include work on machine learning plugins and developer training resources. Katona sees the Linux Foundation’s involvement as key to OpenSearch’s long-term success, offering vendor-neutral governance and reassuring users seeking openness, performance, and scalability in data search and analytics. Learn more from The New Stack about OpenSearch: Report: OpenSearch Bests ElasticSearch at Vector Modeling; AWS Transfers OpenSearch to the Linux Foundation; OpenSearch: How the Project Went From Fork to Foundation. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
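For context on the kind of workload the Lucene 10 upgrade speeds up, here is a basic OpenSearch full-text query body built in Python. The query DSL shape (`query.match`) is standard OpenSearch; the index fields are example values.

```python
import json

# A standard OpenSearch match query (example field names): this is the
# full-text search path that benefits from a faster Lucene under the hood.
query = {
    "query": {
        "match": {
            "title": {
                "query": "vector search",
                "operator": "and",  # require all terms, not any
            }
        }
    },
    "size": 10,  # return at most 10 hits
}

# Sent as the JSON body of a POST to /<index>/_search
body = json.dumps(query)
print(body)
```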
Internet & Technology · 1 month ago · 20:10
Kong’s AI Gateway Aims to Make Building with AI Easier
AI applications are evolving beyond chatbots into more complex and transformative solutions, according to Marco Palladino, CTO and co-founder of Kong. In a recent episode of The New Stack Makers, he discussed the rise of AI agents, which act as "virtual employees" to enhance organizational efficiency. For instance, AI can now function as a product manager for APIs — analyzing documentation, detecting inaccuracies, and making corrections. However, reliance on AI agents brings security risks, such as data leakage and governance challenges. Organizations need observability and safeguards, but developers often resist implementing these requirements manually. As GenAI adoption matures, teams seek ways to accelerate development without rebuilding security measures repeatedly. To address these challenges, Kong introduced AI Gateway, an open-source plugin for its API Gateway. AI Gateway supports multiple AI models across providers like AWS, Microsoft, and Google, offering developers a universal API to integrate AI securely and efficiently. It also features automated retrieval-augmented generation (RAG) pipelines to minimize hallucinations. Palladino emphasized the need for consistent security in AI infrastructure, ensuring developers can focus on innovation while leveraging built-in protections. Learn more from The New Stack about Kong’s AI Gateway: Kong: New ‘AI-Infused’ Features for API Management, Dev Tools; From Zero to a Terraform Provider for Kong in 120 Hours. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
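The "universal API" idea looks roughly like the declarative fragment below: chat traffic is routed through a gateway plugin, so apps talk to one endpoint while the gateway handles the provider, credentials, and policy. Field names follow Kong's ai-proxy plugin but are simplified here and should be checked against the plugin documentation.

```yaml
# Declarative sketch of routing chat requests through an AI gateway plugin
# (simplified ai-proxy-style config; verify fields against Kong's docs).
plugins:
  - name: ai-proxy
    config:
      route_type: llm/v1/chat     # expose a chat-completion style route
      model:
        provider: openai          # gateway owns the provider choice
        name: gpt-4o
      auth:
        header_name: Authorization
        header_value: Bearer $OPENAI_API_KEY   # credential stays in the gateway
```

Because the credential and provider live in gateway config, swapping models or enforcing observability does not require touching application code — the consistency point Palladino makes above.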
Internet & Technology · 1 month ago · 21:05
What’s the Future of Platform Engineering?
Platform engineering was meant to ease the burdens of Devs and Ops by reducing cognitive load and repetitive tasks. However, building internal development platforms (IDPs) has proven challenging. Despite this, Gartner predicts that by 2026, 80% of software engineering organizations will have a platform team. In a recent New Stack Makers episode, Mallory Haigh of Humanitec and Nathen Harvey of Google discussed the current state and future of platform engineering. Haigh emphasized that many organizations rush to build IDPs without understanding why they need them, leading to ineffective implementations. She noted that platform engineering is 10% technical and 90% cultural change, requiring deep introspection and strategic planning. AI-driven automation, particularly agentic AI, is expected to shape platform engineering’s future. Haigh highlighted how AI can enhance platform orchestration and optimize GPU resource management. Harvey compared platform engineering to generative AI — both aim to reduce toil and improve efficiency. As AI adoption grows, platform teams must ensure their infrastructure supports these advancements. Learn more from The New Stack about platform engineering: Platform Engineering on the Brink: Breakthrough or Bust?; Platform Engineers Must Have Strong Opinions; The Missing Piece in Platform Engineering: Recognizing Producers. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 2 months ago · 26:44
AI Agents Are Dumb Robots, Calling LLMs
AI agents are set to transform software development, but software itself isn’t going anywhere — despite the dramatic predictions. On this episode of The New Stack Makers, Mark Hinkle, CEO and Founder of Peripety Labs, discusses how AI agents relate to serverless technologies, infrastructure-as-code (IaC), and configuration management. Hinkle envisions AI agents as “dumb robots” handling tasks like querying APIs and exchanging data, while the real intelligence remains in large language models (LLMs). These agents, likely implemented as serverless functions in Python or JavaScript, will automate software development processes dynamically. LLMs, leveraging vast amounts of open-source code, will enable AI agents to generate bespoke, task-specific tools on the fly — unlike traditional cloud tools from HashiCorp or configuration management tools like Chef and Puppet. As AI-generated tooling becomes more prevalent, managing and optimizing these agents will require strong observability and evaluation practices. According to Hinkle, this shift marks the future of software, where AI agents dynamically create, call, and manage tools for CI/CD, monitoring, and beyond. Check out the full episode for more insights. Learn more from The New Stack about emerging trends in AI agents: Lessons From Kubernetes and the Cloud Should Steer the AI Revolution; AI Agents: Why Workflows Are the LLM Use Case to Watch. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
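The "dumb robot" division of labor Hinkle describes can be sketched as a small loop: the agent shuttles data to and from the LLM, which makes the decisions. The `call_llm` function below is a canned stand-in for a real model API call, so the whole sketch runs self-contained.

```python
# Toy agent loop illustrating "dumb robot calls the LLM": all intelligence
# sits in the (stubbed) model; the agent only executes what it is told.

def call_llm(prompt):
    """Stand-in for an LLM call; a real agent would hit a model API here."""
    if "disk usage" in prompt:
        return {"action": "run", "command": "df -h"}
    return {"action": "done"}

def agent(task):
    """Ask the LLM what to do, execute its choice, report back, repeat."""
    steps = []
    decision = call_llm(task)
    while decision["action"] != "done":
        steps.append(decision["command"])   # the agent just executes
        decision = call_llm("completed: " + decision["command"])
    return steps

print(agent("check disk usage"))
```

Packaged as a serverless function, a loop like this is cheap to run and easy to observe, which is why Hinkle expects agents to take that shape.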
Internet & Technology · 2 months ago · 28:31
Goodbye SaaS, Hello AI Agents
The transition from SaaS to Services as Software with AI agents is underway, necessitating new orchestration methods similar to Kubernetes for containers. AI agents will require resource allocation, workflow management, and scalable infrastructure as they evolve. Two key trends are driving this shift: Data Evolution — from spreadsheets to AI agents, data has progressed through relational databases, big data, predictive analytics, and generative AI; and Computing Evolution — starting from mainframes, the journey has moved through desktops, client servers, web/mobile, SaaS, and now agentic workflows. Janakiram MSV, an analyst, notes on this episode of The New Stack Makers that SaaS depends on data — without it, platforms like Salesforce and SAP lack value. As data becomes more actionable and compute more agentic, a new paradigm emerges: Services as Software. AI agents will automate tasks previously requiring human intervention, like emails and sales follow-ups. However, orchestrating them will be complex, akin to Kubernetes managing containers. Unlike deterministic containers, AI agents depend on dynamic, trained data, posing new enterprise challenges in memory management and infrastructure. Learn more from The New Stack about the evolution to AI agents: How AI Agents Are Starting To Automate the Enterprise; Can You Trust AI To Be Your Data Analyst?; Agentic AI is the New Web App, and Your AI Strategy Must Evolve. Join our community of newsletter subscribers to stay on top of the news and at the top of your game.
Internet & Technology · 2 months ago · 30:02