Tristan's Blog

“That some of us should venture to embark on a synthesis of facts and theories, albeit with second-hand and incomplete knowledge of some of them – and at the risk of making fools of ourselves” (Erwin Schrödinger)

AI (2025)

I haven't written much about AI, just a post lamenting degraded open access and a post complaining about LLM verbosity. I want to share a few perspectives and predictions here, as much to summarize where I'm at on the topic, as to give myself a looking-back point in the future.

Experience

I use LLMs at work on a daily basis. The primary interface is through copilot autocomplete in my Emacs editor. This is a love-hate relationship, when the suggestion frequency is dialed too high, the bad autocompletes generate more cognitive disruption than good autocompletes save cognitive effort and typing time. That said, I've never disabled it.

I also use gptel, typically in dedicated chat buffers, and when I remember using in-buffer rewrites. I usually create ephemeral chat buffers, but have started using file-backed buffers for persisting chats for later recall. I find that gptel has a great balance of usability and power; like my favorite tools it stays out of the way, has simple shortcuts for the common use cases, and leverages transient.el for powerful configuration.

I trialed cursor.ai, and can't wait for suggested-next-edit to come to Emacs. I also briefly demo'd claude code, and plan to invest more time with agentic coding.

Predictions

These are by no means "hot takes" - as I've disconnected from the internet's overwhelming stream of consciousness, I've lost interest in such things! Instead, the predictions are meant to organize some of my thinking about how I use LLMs today and what I expect in the near future.

Degraded internet

We've already seen major internet websites like X and Reddit close down their previously generous API access. At first, I suspected this trend to continue, as platforms hoard their precious data in order to sell to the highest bidder for training data.

With the rise of agents, or "tool-using LLMs", or "function-calling LLMs connected to other software systems", or really just "LLMs generating structured output", I predict major platforms will be forced to step up their protections, extending beyond API access to the basic web and app interfaces that humans use. This is because agents can act on behalf of humans, interfacing with dynamic websites about as well as a human can, but importantly not generating the income from human eyeballs on the omnipresent advertisements.

As an example, imagine an agentic booking a vacation for me, intelligently skipping ads and promoted content. This threatens the foundation of the internet advertising model.

What could the future hold? Maybe humans will be forced to have their cameras on, and eyes tracked, as if taking a remote examination on their computer. That's possible, but unlikely. It's much easier for me to imagine advertising to adapt and evolve, maybe injecting ads into LLM training data, or returning prompt injections when accessed by LLMs.

It's a tragedy. I predict for a brief window we'll see just what technology could do if platforms were open and inter-operable, if personal data were accessible, if my computer and software worked for me and not for others. Agents are a crowbar forcing open the door, but in short time stronger walls will be built.

At the end of the day, it's all about the money. The incentives are in place to hoard data, lock down and gate access, and prevent the usefulness of agents, in order to force humans to interface with monopolistic corporations, so that profit can be made.

Voice first

Generating structured output from unstructured input is an incredible feature of LLMs. I predict this will yield a wave of new user interfaces that are voice-first. I think back on all the frustrating, multi-step, annoyingly custom forms that I submit on a daily basis. What if that could be avoided, or at least alleviated, through a friendly chat-like interface? How nice if I could blabber on for a few minutes and have the data nicely structured for me - no formatting dates or phone numbers incorrectly, for one.

At it's simplest, imagine a browser extension that supports voice input to fill in forms throughout the internet. Given use over time, it should be able to learn/remember my data and auto-complete most forms, asking me only the unique information in the form.

This is one example of a broader possibility that I foresee. There's an incredible diversity of data entry UIs on the internet. Every time I work with a designer on a new form, it invariably includes new and unique form inputs. I think this is a waste of time and cognitive effort for users, who have to contend with yet-another-dropdown-that's-not-like-any-dropdown-I've-seen-before.

LLMs can present a simplified and unified interface to diverse form entries, saving a not-insignificant amount of time for users.

Who knows, maybe this will provide back-pressure against such complex and bespoke interfaces? Or websites will provide LLM back-channel interfaces and integrations specifically designed for the dominant LLM and agent workflows.

Just as above, I suspect this pleasant future will not come easily, and there'll be conflict between LLM automation tools and major providers. User interfaces are subject to the same incentives, and so any attempt to bypass them will be fought tooth and nail. Further, UIs are the frontlines of the battle for user attention, subject to insane levels of control and manipulation.

Agentic coding

As a software engineer, this one hits home most closely. Will I have a job in a few years? If I do, what will it look like?

I'll state that I do expect to continue working as a software engineer for the foreseeable future. I suspect my day to day will change - I got a taste of the future of babysitting agents in a recent podcast.

Agents are great at validating whatever you put into them. That is great, if I am keyed into the right idea or perspective, but if I am not, I am liable to dig in deeper into the wrong approach. For software, where engineers love to write new green-field code, I suspect there will be an explosion of new code that's poorly vetted.

As we know, reading code is significantly harder than writing code, especially when not in your style or a known colleague's style. As agents become code authors and not merely code assistants, the skill of reading code will become even more valuable. As agents then review and release code independently, it'll be the task of the overseeing software engineer to monitor and ensure the systems are running correctly - just like humans, agents have a tendency to skip important requirements!

Maybe this means software engineers working alongside or above a "team" of agents. But to suggest this future also implies an explosion in code changes that need to be vetted, and new software systems that need to be overseen - suggesting a need for at least as many, if not more, engineers.

I'd be remiss not to mention the security problems of LLMs. Agents inherently mix command and data channels, meaning there's no possibility of safely passing in untrusted data. The average sentiment appears to be, as usual for security, "who cares?" and that will remain the case until and well after the major security breaches start. When I run an agent with full autonomy (i.e. all permissions enabled, full system access via tooling), I am extending my trust to every possible source of input into the LLM.

For example, if I instruct my agent to use a particular library to solve a problem, it's easy to conceive the agent searching for relevant documentation, ending up on a page with an embedded prompt injection that steals files, code, or production API keys.

In production, the threat is much higher - I heard a story about a company's production LLM given a tool to execute code.

Beyond security, there's the practical questions of running multiple agents - because they're automated, of course I'd run as many as I can. It's a small prediction, but I foresee that reproducible, isolated environments will thrive. Perhaps we'll also see a resurgence of the "develop in the cloud", and big providers which invested in this area will get their payday if they can execute well.

At work, it takes a couple hours of annoyingly manual work to get set up and running with a local development environment. Imagine if I wanted to spin up a dozen cloud environments with agents hacking away - it would be incredibly challenging, and any updates to said environments would be quite slow.

Instead, suppose that an environment can be spun up in a few minutes or even seconds - I'm talking compiled single binary, sqlite in memory database, a totally isolated and reproducible environment. Then agents could be spun up on-demand and put to work, and die (or, retire?) when they're done. That seems like a compelling advantage.

Conclusion

Wrapping up, I am excited for the future of software, both as a user and as a programmer. My over-arching thesis is that the current economic situation will hold back the full potential of LLMs, as monopolistic companies desperately leverage their market domination to hold onto their stock values. As an engineer, I am excited to move up the abstraction layer, and become more productive with agentic tooling.