
Principles of AI usage | Notes towards a framework

This post ties together a few threads that eventually evolve into a tentative set of principles for AI usage in modern times. It begins with a brief look at Yochai Benkler’s The Wealth of Networks, expands into some work with AI on collecting a range of cross-disciplinary principles, and ends by proposing a few principles for living and working with AI.

The post is written while following the principles of AI usage proposed below.


 

‘Wealth of Networks’ and the Internet Age

 

“Human nature is not a machine to be built after a model, and set to do exactly the work prescribed for it, but a tree, which requires to grow and develop itself on all sides, according to the tendency of the inward forces which make it a living thing.”

“Such are the differences among human beings in their sources of pleasure, their susceptibilities of pain, and the operation on them of different physical and moral agencies, that unless there is a corresponding diversity in their modes of life, they neither obtain their fair share of happiness, nor grow up to the mental, moral, and aesthetic stature of which their nature is capable.”

John Stuart Mill, On Liberty (1859)

So begins Yochai Benkler’s The Wealth of Networks.

Like Amartya Sen, Benkler is concerned with freedom and development. He opens by saying that information, knowledge, and culture are central to human freedom and development, and that how they are produced and exchanged shapes everything: how we see the world, who gets to decide things, what we can imagine doing collectively. Hence it is important to understand their impact on markets and democracies. He says:

It seems passé today to speak of “the Internet revolution.” In some academic circles, it is positively naïve. But it should not be. The change brought about by the networked information environment is deep. It is structural. It goes to the very foundations of how liberal markets and liberal democracies have coevolved for almost two centuries.

He wrote this book in 2006, a decade and a half after the internet became a meaningful public phenomenon. His argument was that the internet represented not merely a technological shift but a structural one: the emergence of a new information environment in which individuals are free to take a more active role than was possible in the industrial information economy of the twentieth century. For 150 years, he says, modern democracies depended on that industrial information economy to produce and circulate information, knowledge, and culture. A small number of players (broadcasters, publishers, film studios, newspapers) produced information at scale, and the rest of us received it. Because the capital requirements were high, information production naturally concentrated into a relatively small number of institutional actors.

Two things shifted. First, in most advanced economies there was a move to an information-centred economy, to cultural production and brand-based consumption. Second, the cost of the physical equipment required to produce and distribute information collapsed; and with cheap processors and networks came the phenomenon of the Internet.

This combination, an economy where information is the primary good, and where the means of producing and distributing information are now widely owned, created something new: what he calls the networked information economy.

For the first time, individuals and loose cooperative groups could produce and share information, knowledge, and culture at scale, without requiring either significant capital or institutional permission. He offers Wikipedia and Linux as examples, neither possible in the industrial model, both emerging naturally once the economics changed.

Individuals can reach and inform or edify millions around the world. Such a reach was simply unavailable to diversely motivated individuals before, unless they funneled their efforts through either market organizations or philanthropically or state-funded efforts. The fact that every such effort is available to anyone connected to the network, from anywhere, has led to the emergence of coordinate effects, where the aggregate effect of individual action, even when it is not self-consciously cooperative, produces the coordinate effect of a new and rich information environment.

His argument is that this is not merely a technological shift. It is a structural change that goes to the foundations of how liberal markets and liberal democracies have operated for two centuries. The old model required concentration because of economics. The new model makes genuine decentralisation possible not as an ideology but as an economic fact. He says:

Even as opulence increases in the wealthier economies—as information and innovation offer longer and healthier lives that are enriched by better access to information, knowledge, and culture—in many places, life expectancy is decreasing, morbidity is increasing, and illiteracy remains rampant. Some, although by no means all, of this global injustice is due to the fact that we have come to rely ever-more exclusively on proprietary business models of the industrial economy to provide some of the most basic information components of human development. As the networked information economy develops new ways of producing information, whose outputs are not treated as proprietary and exclusive but can be made available freely to everyone, it offers modest but meaningful opportunities for improving human development everywhere.

What AI Changes

Now, the question that eventually led to this post. What has happened in the twenty years since Benkler wrote the book?

Two developments stand out: a) the attention economy, and b) AI.

The first was the rise of the attention economy. The internet did decentralise publishing and communication, but markets eventually discovered how to monetise attention at planetary scale. Human attention itself became a commodity: bought, sold, measured, optimized, and algorithmically shaped. When Benkler published his book in 2006, the global online advertising market was worth roughly $16 billion, a nascent industry still finding its footing. Today, it exceeds $700 billion, accounting for nearly three quarters of all advertising spending worldwide. The commons that Benkler celebrated, the open, non-market space of peer production and free information, did emerge. But the market found the commons, and found in it something more valuable than content: human attention itself. Attention became the commodity that the industrial economy had failed to fully monetise.

The second development is AI. AI seems different from the internet. The internet primarily changed the distribution of information; AI changes the production and manipulation of outputs themselves. The internet allowed anyone to connect, communicate, and publish. AI increasingly allows anyone to analyse, synthesize, generate, simulate, critique, translate, summarize, design, and reason.

“What is AI, in the sense that matters here? At its simplest, it is a system trained on vast amounts of human-generated text, code, images, and data, until it develops the ability to generate responses that are coherent, contextually appropriate, and increasingly useful across an enormous range of tasks. It does not think in the way humans think. But it produces outputs that, for many practical purposes, are functionally indistinguishable from thought. A large language model has read more than any human ever could — across disciplines, languages, centuries of writing — and has developed something like pattern recognition at civilisational scale. What it lacks is judgement, lived experience, accountability, and the irreducible particularity of being a specific person with specific stakes in specific outcomes. What it offers, in exchange, is breadth, speed, and availability.” – as defined by Claude.

But extending this to where it is headed, AI is an infrastructural shift in the world, the impact of which is yet to be felt across disciplines, industries, people, companies, and governments. Electricity is perhaps the closest analogy. At first electricity appeared optional: useful, impressive, but not yet foundational, since people were used to living without it. Over time it became inseparable from ordinary life itself. Entire industries, institutions, and social expectations reorganised around it. Unlike the internet, which primarily changed how information moved between institutions, AI is beginning to change how institutions themselves function internally (again, akin to electricity): by becoming a layer inside organisations, by being embedded in how decisions get made, like a new kind of institutional nervous system.

So how does it stack up against the argument in Benkler’s book? Some of the argument can be stretched forward. But a large part changes, because AI is fundamentally different from the internet. It is a layer of infrastructure being built.

Somewhere in the introduction to the book, he notes:

“In the industrial economy in general, and the industrial information economy as well, most opportunities to make things that were valuable and important to many people were constrained by the physical capital requirements of making them. From the steam engine to the assembly line, from the double-rotary printing press to the communications satellite, the capital constraints on action were such that simply wanting to do something was rarely a sufficient condition to enable one to do it. Financing the necessary physical capital, in turn, oriented the necessarily capital-intensive projects toward a production and organizational strategy that could justify the investments. In market economies, that meant orienting toward market production. In state-run economies, that meant orienting production toward the goals of the state bureaucracy. In either case, the practical individual freedom to cooperate with others in making things of value was limited by the extent of the capital requirements of production.”

This statement is where the argument about AI cannot be pursued along the same lines as the internet. AI introduces a strange structural tension. Millions of people can now access capabilities once restricted to specialists. Yet the systems themselves remain extraordinarily dependent on concentrated compute, chips, energy, data, and capital: a concentration reminiscent of the industrial economy.

Benkler’s decentralisation thesis partially survives under AI, but AI also reintroduces concentration. Where the internet decentralised distribution, AI may decentralise capability while simultaneously centralising infrastructure.

Engaging with Benkler’s introductory propositions on markets, I believe the market will find its way around AI the way it found its way around the internet. Isn’t that something about the nature of markets? The market, in that sense, is a natural system; it works aligned with nature’s principles. Any other system requires energy to maintain and sustain it, as it goes against the grain of nature. Markets work with nature, and will eventually find their way around AI.


Cross-disciplinary Principles, a sidebar

In another interesting project, I have been working with Claude on collecting a set of multi-disciplinary principles: principles arrived at after centuries of iteration with the similar and unique problems that different disciplines face. They are, I believe, a distillation of human thought struggling with a few underlying structural problems of matter and the world: scarcity and resource allocation, information asymmetry, the tension between the particular and the general, and the gap between intention and effect.

Disciplines eventually arrive at principles because reality exceeds static instruction sets. Rules cover the cases one can anticipate; principles cover the cases one can’t or didn’t. I collected them not as a reference but as a contemplative practice. Each principle is the distillation of countless iterations in which human judgement has been applied over many situations. They are compressed human experience, akin to learning from the wisdom of the ages applied across disciplines.

It makes fine reading by itself for anyone concerned. One of the interesting findings is to see the principles together across disciplines, threaded by the structural problems they address, and to watch how they converge. Filter by structural theme rather than discipline and the cross-disciplinary patterns become visible.


Suggested principles for AI Usage

Back to AI. Like anyone in the modern age, I’ve been thinking a lot about AI: about its implications, its nature and structure, its usage, and the changes it brings to the world. Like electricity, it is a fundamental shift in everything. It will soon change the way things happen in the world. In such a scenario, what considerations will help ensure that we work and live with AI in the best possible way?

The thoughts from various conversations (Benkler, the impact of AI, and the Principles project) somehow fused together, and here I am, arriving at certain principles for AI usage. Let’s call it an evolving thought, open to debate, challenge, and discussion. I suggest them tentatively, as something to be built upon and further refined.

 

I. The Principle of Human Discernment 

Every tool in history has required the judgement of its user. AI solves for efficiency, the most efficient path to an objective. It compresses time, surfaces information, produces coherent outputs at speed. But efficiency in service of what? That question is always the human’s to answer.

There are two parallel thoughts that I hold here. First, as Mill notes, human nature is not a machine to be built after a model, but a tree, which requires to grow and develop itself according to the tendency of its inward forces. Each person is a unique specimen of particular pleasures and pains, shifting objectives, irreducible individuality; each person has their own unique objectives. Second, although AI solves for efficiency, AI can be trained: to see its own blind spots (perhaps better than humans can be trained to see theirs), to critique itself, and, under human guidance, to blossom away from the sameness it otherwise seems to head towards. AI is not quite like language, nor like infrastructure or an educational system. It is more dynamic, more available to training and change, and can be adapted differently by each organisation that uses it.

As with the other cross-disciplinary principles that evolved through iteration, principles of AI usage will also emerge, one of the first being the application of human discernment and human judgement. AI can optimize within frames as long as humans remain responsible for choosing the frame.

The discernment of how to train it, perhaps on the basis of all the other principles linked above, still rests with human wisdom. AI expands capability but weakens friction; therefore human judgement becomes more important, not less. Human discernment and judgement in epistemic responsibility, intentionality, taste, framing, selecting objectives, recognising context, moral judgement, and knowing when not to automate.

Human discernment as an underlay and overlay to AI usage.

 

II. The Principle of Cross-Platform Critique and Audit

The insight underneath this is that different foundation models have different blind spots because they were trained differently, on different data, with different choices. By asking one model to critique another model’s output, one will find nuances hitherto unseen. This not only improves the overall output; at times the models will systematically disagree in ways that are informative. A response that three different models converge on is more likely to be grounded than one that only one model produces. A response where they diverge significantly is precisely where human discernment (the first principle) is most needed.

The other key aspect of this suggested principle is to audit the output, which implies accountability, not just accuracy. An audit allows for a higher standard, and a more useful one for consequential decisions. It also addresses the deeper epistemic problem: whether the AI’s reasoning is trustworthy, and whether its blind spots are invisible to itself. It uses the diversity of models as a check on any single model’s priors.

Slowly, this builds a system of checks and balances, like those that the traditional principles of law and accounting propose. It not only checks but can work in constructive ways as well: not only allowing blind-spot checks, but enabling a nuanced learning from all the best there is on offer, through the synthesising powers of AI.
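To make the mechanics concrete, here is a minimal sketch in Python of a convergence check across models. Everything in it is hypothetical: the function name, the quorum parameter, and the stand-in "models" (plain lambdas) are illustrative; real use would call actual model APIs and compare answers far less literally than string equality.

```python
# Hypothetical sketch of the cross-model critique principle. The helper
# name, the quorum parameter, and the lambda "models" are illustrative
# stand-ins, not real model APIs.
from collections import Counter
from typing import Callable

def cross_model_check(prompt: str,
                      models: dict[str, Callable[[str], str]],
                      quorum: int = 2) -> dict:
    """Ask several models the same question; converging answers earn
    provisional trust, divergent ones are flagged for human discernment
    (the first principle)."""
    answers = {name: ask(prompt) for name, ask in models.items()}
    counts = Counter(answers.values())
    top_answer, votes = counts.most_common(1)[0]
    converged = votes >= quorum
    return {
        "answers": answers,
        "consensus": top_answer if converged else None,
        "needs_human_review": not converged,
    }

# Toy stand-ins to show the mechanics: two models agree, one diverges.
models = {
    "model_a": lambda p: "42",
    "model_b": lambda p: "42",
    "model_c": lambda p: "41",
}
result = cross_model_check("What is 6 * 7?", models)
```

With a quorum of two, the converging answer is provisionally accepted, while a three-way split would return no consensus and route the case to a human, which is exactly where discernment is most needed.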

 

III. The Principle of Graduated Trust

Deploy AI where reliability is high and failure is recoverable. Refine the boundary continuously.

Not every domain is equally ready for AI, and within each domain, not every case is equally appropriate. AI accuracy tends to follow something akin to a Pareto distribution: eighty percent of routine cases, in most fields, reach sufficient reliability relatively quickly. The remaining twenty percent (the high-variance, novel, high-consequence cases) take much longer, or may never be fully handleable by a model trained on what has already happened.

It is the long tail of unique cases. The 20% that AI cannot handle reliably is not randomly distributed; it is systematically the high-variance, high-consequence, high-novelty cases. So the principle isn’t just “deploy where reliable” but “deploy where reliable and the failure mode is recoverable.”

The refinement-over-time aspect is what makes it Pareto-like rather than just a competence threshold. As accuracy improves in a domain, the boundary shifts: more cases become handleable and the human reserve shrinks, but never to zero, because there will always be a distributional tail the model hasn’t seen. All the more the importance of the first principle.
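The deployment rule above can be sketched as a small decision function. This is a hedged illustration, not a calibrated policy: the threshold value, the routing labels, and the idea of a single scalar reliability score are all assumptions standing in for a real evaluation process.

```python
# Hypothetical sketch of graduated trust: the threshold, the labels, and
# the scalar "reliability" score are illustrative assumptions.
def route(case_reliability: float, failure_recoverable: bool,
          threshold: float = 0.95) -> str:
    """Deploy AI only where reliability is high AND failure is
    recoverable; the long tail stays with a human."""
    if case_reliability >= threshold:
        # Reliable enough to automate, but an irreversible failure mode
        # still demands a human sign-off.
        return "automate" if failure_recoverable else "automate_with_human_signoff"
    return "human"
```

As accuracy in a domain improves, the effective threshold is cleared by more cases and the human reserve shrinks, but the final branch never disappears, since the distributional tail is never exhausted.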

 

IV. The Principle of Preserved Friction

This evolved from a personal credo. That not all friction is inefficiency; some forms of difficulty are developmental.

There is a kind of undocumented learning that happens when one grapples directly with material over time: reading deeply, writing slowly, struggling with first principles, sitting with raw facts before synthesis arrives. The work shapes the thinker even as the thinker shapes the work.

AI removes enormous amounts of cognitive friction, often beneficially. Tedious labour can now be accelerated or eliminated altogether. But some forms of friction are not merely obstacles to output. They are part of how judgement, taste, intuition, memory, and understanding are formed and developed.

A world optimized entirely for cognitive convenience may inadvertently weaken the very faculties required to use AI well. The challenge therefore is not rejecting AI assistance, but preserving sufficient direct engagement with reality, thought, and creation that human cognition continues to deepen rather than merely accelerate.

Safeguard raw time with things.


And a fifth principle, which the other four presuppose, as suggested by AI.

V. The Principle of Transparent Training

The above principles operate at the level of use. What they don’t address is the training layer: who decides what the root system looks like before any organisation fine-tunes it. The audit principle could in principle extend there: foundation models whose training choices are themselves subject to cross-model and cross-institutional scrutiny, not just their outputs. A kind of constitutional audit of the model before deployment, not just of its responses after. That is the fifth principle: transparency and auditability of training, not just of inference.

The root system matters as much as the branches. Auditability of what a model was trained on and for is the precondition of meaningful oversight at every layer above it.

This one is less for individual users and more for the industry itself: transparency about how models are trained, what values are embedded in them. It is beyond most of us to implement. But it is perhaps the mirror the industry needs to hold up to itself.


To conclude,

The above is not a complete framework. It is a beginning, a first attempt at foundational principles that uphold the entire usage structure.

The principles of any discipline take time to settle, through use, through failure, through accumulated judgement about what actually went wrong and what corrected for it. We are at the start of that process with AI.

What I find hopeful is that the structural problems AI poses are not entirely new. They are problems we have encountered before, in other disciplines, in other material. The deeper question is what becomes scarce once intelligence-like capability becomes abundant. Industrial society made physical capital scarce. The internet made attention scarce. It seems that the AI era may make judgement scarce.

And if that is true, then principles may matter more, not less, in the age of AI.
