
Unlearning linear: The agentic opportunity

Nate Berent-Spillson
/
VP, Engineering
/
April 22, 2025

In “Unlearning linear: The new economics of knowledge work,” we discussed the historical legacy of scaling work using human hours, much like we once measured output in horsepower. But now, AI and automation are breaking our long-held assumptions about productivity, and things are changing so fast that it’s challenging our ability to adapt.

Today in Part 2, we’re going to focus on the opportunity AI presents to transform our linear work patterns, and the access required to make AI agents truly useful.  

The evolution of AI

Let’s start by reviewing Generative AI’s evolution. We’ve moved from simple search-based interfaces (like Google) to conversational models that enable iterative request-response interactions (like ChatGPT). Now we’re entering the era of AI agents, which can use tools to gather information (running web queries, performing financial analysis) and then act on it (sending a reply, scheduling a meeting, resolving an issue). They do meaningful work rather than just answering questions or wordsmithing a document.

Agents leveraging newer reasoning models can work backwards from an outcome, generate the prompts needed to achieve it, and even check their own work. Within this space, different subclasses of agents are emerging, from assistants and delegates to co-workers and reviewers. Sophisticated agents are starting to arrive, but practical, everyday use of them isn’t quite here yet.

From tasks to work  

Moving from tasks to work requires delegation, and it’s useful to think about how we delegate work to a human versus an AI agent. When I delegate to a human, it typically involves a conversation where I can gauge their understanding through body language and back-and-forth dialogue. I get feedback from a human that says they’re starting to “get it,” whereas when I delegate to an agent, there’s an emotional intelligence gap. It’s also more difficult to ascertain where agents go wrong. With a human, I can ask clarifying questions and get reasoning behind decisions. With an agent, the problem is not always reproducible, and I don’t have a reliable mechanism to track where they veer off course. Agents currently need more guardrails and require a level of micromanagement to course-correct and prevent them from spinning endlessly down a dead-end path. While Large Language Models (LLMs) can be used as a judge, and foundational models are putting more focus on reasoning, they’re still missing what we would describe as common sense.  

You might think this sounds negative about agents, but this is a normal part of the technology adoption curve. We’re learning to work with these tools even as they rapidly improve. The water line for what can be delegated to an agent keeps rising, reducing both the cost and effort of offloading work. Unlike humans, agents don’t suffer the context-switching penalty of multitasking, because they can continuously split and delegate work to other agents. Humans are terrible at multitasking because mental context switching is jarring to our brains; agents can achieve true, scalable parallelization of work.

Many of us are already delegating tasks like research, summarization, and note-taking to AI, with multiple benefits. During a meeting, you can focus more on the conversation and less on capturing every word. Afterwards, the recording, AI-generated transcript, and summary, combined with your own notes, become a powerful resource to search and analyze. We move from ephemeral conversation to composable knowledge and information, enabled by, and then available to, an agentic model.

Access and trust conundrum

This brings us to an interesting tension. The power of AI comes from recognizing patterns and applying meaningful context. To make that power work, we must give the agents and models access to a lot of information. We’ve advanced from simply following protocol to synthesis, reasoning, decision-making, and ultimately action.

And so, we come to the access and trust conundrum. For real delegation to happen, AI agents need unprecedented access to systems and data, the same way a human would if they were doing the job. This includes access to documents, communication channels, and decision-making frameworks. Companies rightfully guard these things with fine-grained permissions and zero-trust approaches. But just as a human would be ineffective without access to information and the power to act, an agent suffers from the same limitations.

Accessing information is one thing, but agents that can write and execute their own Python scripts to call services and APIs are not only possible, they’re necessary for agents to make themselves useful. Humans use a graphical user interface (GUI) to call a back-end service; the agent can go straight to the service call. If we don’t reach a level of maturity that allows this, we’re artificially constraining what agents can do and limiting their impact.
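As a rough illustration, here’s what “going straight to the service call” might look like if an agent generated and ran its own script. The endpoint, payload, and token handling are hypothetical placeholders, not a reference to any specific system.

```python
# Hypothetical sketch: an agent-generated script that skips the GUI
# and calls the back-end service directly. The URL, fields, and token
# are placeholders for illustration only.
import os
import requests

SERVICE_URL = "https://internal.example.com/api/v1/invoices"  # assumed endpoint

def resolve_open_invoice(invoice_id: str, resolution_note: str) -> dict:
    """Mark an invoice as resolved via the service API, no GUI involved."""
    response = requests.post(
        f"{SERVICE_URL}/{invoice_id}/resolve",
        json={"note": resolution_note},
        headers={"Authorization": f"Bearer {os.environ['AGENT_TOKEN']}"},
        timeout=30,
    )
    response.raise_for_status()  # surface failures instead of guessing
    return response.json()

if __name__ == "__main__":
    print(resolve_open_invoice("INV-1042", "Duplicate charge confirmed and reversed."))
```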

Current access control models need to advance. Agents need unprecedented access to information and the ability to act, but it’s neither reasonable nor advisable to hand agentic service accounts super-user access. This is where the concept of fine-grained, on-demand entitlements comes into play. We need the ability to create micro-entitlements that allow agents to read data and take actions within the context of the task they’re working on; when the task is completed, the micro-entitlement is revoked. This gives us a clean audit trail of access granted, action taken, and access revoked, as sketched below.
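To make the idea concrete, here’s a minimal sketch of that grant-act-revoke lifecycle. The entitlement scopes, agent names, and audit store are invented for illustration; a real implementation would sit on top of your identity and access-management platform.

```python
# Minimal sketch of task-scoped micro-entitlements with an audit trail.
# Names, scopes, and storage are illustrative assumptions, not a real API.
import uuid
from contextlib import contextmanager
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []  # stand-in for an append-only audit store

def _audit(event: str, **details) -> None:
    AUDIT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(), "event": event, **details})

@contextmanager
def micro_entitlement(agent_id: str, task_id: str, scopes: list[str]):
    """Grant a narrowly scoped entitlement for one task, then revoke it."""
    grant_id = str(uuid.uuid4())
    _audit("access_granted", grant_id=grant_id, agent=agent_id, task=task_id, scopes=scopes)
    try:
        yield grant_id  # the agent does its work while the grant is live
    finally:
        _audit("access_revoked", grant_id=grant_id, agent=agent_id, task=task_id)

# Usage: the entitlement exists only for the duration of the task.
with micro_entitlement("billing-agent-7", "task-123", ["invoices:read", "invoices:resolve"]) as grant:
    _audit("action_taken", grant_id=grant, action="invoice_resolved", target="INV-1042")
```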

Striking the balance between access, oversight, autonomy, and trust is our next challenge. One potential solution is local model execution, which helps address security and privacy concerns. Just as we’ve learned to manage human trust in workplaces, we’ll get better at distinguishing reliable AI models and using them safely. As these systems become more capable, norms around access and trust will evolve, powered by, you guessed it, models and agents.

A new techno-governance model

Leveraging agents at scale means moving to a new techno-governance model. Policy enforcement today relies on human interpretation: PDF documents, SOPs, and employee handbooks that people read, apply, and synthesize. When deciding whether we can do something or take a certain action, we weigh the sum of the laws, policies, procedures, and rules and ultimately make a judgement call. This is wired into our behavior; we make thousands of these micro-judgement calls every day in our lives and work. And as humans, we understand subtlety and nuance, not just rigorous adherence to protocol.

Pre-AI, computers were deterministic: garbage in, garbage out, doing exactly what they were programmed to do. With language models, though, we’re finding that models can not only understand a vast amount of policy-based context, they can also make a judgement call. To achieve that in our techno-governance model, we automate compliance by turning policies into executable code (Policy as Code) rather than static documents. In some cases we need a strict rules engine that enforces policies and business rules; in others, we need something with subtlety and finesse. The LLM can act as a pragmatic judge, weighing risks and their likelihood when making a recommendation or reaching a decision.
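As a sketch of how those two modes might sit side by side: hard rules are evaluated as plain code, and anything the rules can’t settle is escalated to an LLM judge. The policy thresholds and the call_llm_judge helper are assumptions for illustration, not any specific product’s API.

```python
# Illustrative Policy-as-Code sketch: deterministic rules first,
# with an (assumed) LLM judge for the cases that need nuance.
from dataclasses import dataclass

@dataclass
class ExpenseRequest:
    amount: float
    category: str
    has_receipt: bool

APPROVAL_LIMIT = 500.00           # assumed policy threshold
RESTRICTED = {"gifts", "travel"}  # categories that always need review

def check_expense(req: ExpenseRequest) -> str:
    # Hard rules: unambiguous policy encoded as code.
    if not req.has_receipt:
        return "deny: receipts are required for all expenses"
    if req.amount <= APPROVAL_LIMIT and req.category not in RESTRICTED:
        return "approve: within limit and unrestricted category"
    # Nuanced cases: hand off to an LLM judge with the relevant policy text.
    return call_llm_judge(req)    # hypothetical helper wrapping a model call

def call_llm_judge(req: ExpenseRequest) -> str:
    # Placeholder: in practice this would prompt a model with the request
    # plus the policy excerpts and return a recommendation with reasoning.
    return f"escalate: {req.category} expense of {req.amount} needs a judgement call"
```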

This sets us up to have not just Policy as Code, but Policy as an Agent: a system where, instead of searching through documents or waiting for human approval, we can ask an agent whether an action falls within policy, and it can provide not only its reasoning but also references back to the appropriate policies. The real benefit is the ability to do this at scale.
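The shape of that interaction might look something like the following: a structured answer that carries the decision, the reasoning, and the policy references. The field names and the underlying query function are hypothetical.

```python
# Hypothetical Policy-as-an-Agent response contract: the decision comes
# back with reasoning and citations, not just a yes/no.
from dataclasses import dataclass, field

@dataclass
class PolicyDecision:
    allowed: bool
    reasoning: str                 # the agent's explanation, in plain language
    policy_refs: list[str] = field(default_factory=list)  # e.g. policy section IDs

def ask_policy_agent(action: str, context: dict) -> PolicyDecision:
    # Placeholder for an agent call that retrieves relevant policy passages,
    # reasons over them, and returns a cited decision.
    return PolicyDecision(
        allowed=False,
        reasoning=f"'{action}' involves customer data leaving the approved region.",
        policy_refs=["DataHandling-4.2", "Privacy-7.1"],
    )

decision = ask_policy_agent("export customer records to a third-party tool", {"region": "EU"})
print(decision.allowed, decision.policy_refs)
```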

This approach can also be applied to entitlements and permissions, where policy agents make real-time decisions within well-defined boundaries. Today, granting access to resources requires multiple layers of approval, tickets, and manual oversight, which doesn’t scale in an agentic world. Instead, an agent that specializes in granting entitlements could assess requests in context, determine whether they align with policy, approve or deny them automatically, and then grant and revoke fine-grained entitlements with speed and scale.
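Combining the two ideas, an entitlement agent could sit in front of the micro-entitlement mechanism sketched earlier: it evaluates a request in context against policy, and only if the request passes does it issue a short-lived, narrowly scoped grant. Everything here, from the request fields to the decision logic, is an assumption for the sake of illustration.

```python
# Illustrative entitlement agent: assess a request against policy in context,
# then grant a scoped, temporary entitlement only if it passes.
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class AccessRequest:
    agent_id: str
    task_id: str
    scopes: list[str]
    justification: str

ALLOWED_SCOPES = {"invoices:read", "invoices:resolve", "crm:read"}  # assumed policy

def assess_and_grant(req: AccessRequest) -> dict:
    """Approve or deny in real time, within well-defined boundaries."""
    disallowed = [s for s in req.scopes if s not in ALLOWED_SCOPES]
    if disallowed:
        return {"approved": False, "reason": f"scopes outside policy: {disallowed}"}
    # Approved: issue a time-boxed grant (revocation handled as in the
    # micro-entitlement sketch above).
    return {
        "approved": True,
        "grant": {"scopes": req.scopes, "task": req.task_id, "ttl": timedelta(minutes=30)},
    }

print(assess_and_grant(AccessRequest("billing-agent-7", "task-123",
                                     ["invoices:read"], "resolve duplicate charge")))
```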

It always starts and ends with trust  

We have been conditioned our entire lives to consider computers deterministic. We’re used to telling a computer exactly what we want it to do and having it do exactly that thing. Unfortunately, AI doesn’t work that way. It’s probabilistic and non-deterministic. Distrust of an AI algorithm is a phenomenon often called algorithm aversion. Because AI is often opaque, we can’t see the “reasoning,” and we’re not comfortable trusting the system. You’ll see this from the skeptics who try to work with AI, looking for it to get something wrong and then holding that up as an example of why AI doesn’t work.

For the state of technology today, that’s fine; it just means we’re going to delegate lower-risk tasks and keep a human in the loop on actions. Getting comfortable with how these agents work is essential because the opportunity is massive. AI agents aren’t just automating tasks; they’re restructuring how work scales. This can have a compounding effect, where we supercharge our current models for knowledge and business work with synthetic co-workers. Agents can free us from linear workflows where human effort is the bottleneck. Instead of work expanding endlessly to fit available time, we can start designing systems where outcomes, not hours, are the unit of scale.

Practically speaking, how do we start? That’s what we’ll explore in Part 3.
