The more I learn about AI and dig into its components, the more the old fundamentals seem to hold firm. One thing that has become increasingly clear is how you should apply security and access controls when working with AI systems. A common mistake is giving an AI access to data it should never see.
AI interacts with the world through what are called tools, which is simply a posh new term for abstraction layers. A tool might be a snippet of SQL, access to a dataset, a link to an external service, or almost anything else the AI can use to answer your query. The AI is designed to choose the most appropriate tool, but the security boundaries must be built into the tools themselves.
For example, if you have a tool that queries a SQL server, the data the AI agent can access should be restricted at the tool level, not at the agent level. You would not rely on the AI to write a query that says "get all the data about hamsters but ignore the data about rats". If the two datasets have different security requirements, the tool should only have access to the hamster data in the first place.
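To make that concrete, here is a minimal sketch of the idea, using an in-memory SQLite database and entirely hypothetical table and function names. The point is that the `WHERE species = 'hamster'` scope is hard-coded inside the tool, so the model never gets to decide whether the rat data is in bounds:

```python
import sqlite3

# Hypothetical database holding two datasets with different
# security requirements in one table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE animals (name TEXT, species TEXT, weight_g INTEGER);
    INSERT INTO animals VALUES ('Nibbles', 'hamster', 40);
    INSERT INTO animals VALUES ('Rex', 'rat', 300);
""")

def hamster_tool(name_filter: str) -> list:
    """The tool exposed to the AI agent. The species restriction is
    fixed here, in the tool, and the model only supplies a bound
    parameter -- so no phrasing of the request can reach the rat data."""
    cur = conn.execute(
        "SELECT name, weight_g FROM animals "
        "WHERE species = 'hamster' AND name LIKE ?",
        (name_filter,),
    )
    return cur.fetchall()

# Even a "give me everything" filter only ever returns hamster rows.
print(hamster_tool("%"))
```

The same pattern applies whatever the backing store is: give the tool a credential, view, or query scope that already excludes the sensitive data, rather than asking the model to exclude it.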
AI actions are not hard and fast. They are probabilistic rather than deterministic, which means you cannot rely on a large language model to consistently make the correct security decision for you. Traditional security principles still apply, especially least-privilege access and strong security boundaries.
In the era of AI, the technology may be new, but the security fundamentals remain exactly the same.
Note: I have not suddenly become an AI expert; I am just elbows deep in the very serious “end-to-end AI engineering” by Swirl AI. I would recommend it to anyone who wants to get past the glossy rubbish about AI.