Posted by salman on: 6/23/2019, 8:37:03 PM

tl;dr: A description of the level playing field created by the personal server paradigm.  

Saving Capitalism from the Capitalists.
               L Zingales & R Rajan

Ultimately, any for-profit entity would like to become as close to a monopoly as possible – that’s how they can charge the most for their product and make the most profit. And web services companies have all the right ingredients to become quasi-monopolies in their domain: highly scalable services, zero marginal costs, a dispersed customer base (of users and/or advertisers) who have little bargaining power, ad-supported zero-dollar-cost services, high switching costs with network effects… All of these ingredients can make web services great businesses. Ironically, just as capitalism and internet economics reinforce companies’ monopolistic tendencies, such monopolies inevitably stifle innovation and, over time, blunt the greatest advantage of market-based capitalist economies – the dynamism and innovation brought on by strong competition.

In contrast, the personal server paradigm can level the playing field, and force technology companies and service providers to continuously compete to deliver the best value proposition to end-users. In the previous post, I hinted at how that would work at the application level: Because you can easily switch apps without losing your data, you are not locked in to any particular interface or any particular company creating any particular app. Developers and companies can continuously compete to provide better interfaces to the same basic application functionality. To make an analogy to the web services model, this would be like being able to use the Snapchat app to message your friends on Facebook while retaining access to your data across both platforms.

As importantly, the personal server paradigm can also create competition for back-end infrastructure. Because you have full app-portability and data freedom, you can easily change where you host your personal server. You can of course host it on a computer sitting at home. But more likely, most people would host their personal servers in the cloud, using service providers like Amazon, Google or Microsoft. The difference here is that because you can easily switch your personal server provider, they would not enjoy monopolistic control over you. So they would need to do everything they can to compete for your business and offer you better services and / or lower prices. One could imagine a Dropbox or Amazon offering this service with different prices based on the amount of data you are storing. Alternatively, Google might offer to host it at a lower price if you give them the right to scan your data and serve advertisements to you. And most importantly, each of these would compete to convince you that they are the most secure guardians of your data. Your privacy and control would not be driven by government mandates but by the powerful forces of competition.

This scenario is not as fanciful as it may seem at first read. Today, Amazon, Google, Microsoft and others are already competing to provide cloud services to their corporate customers. And although they all try to entice their customers with special offers which lock them into their platforms, it is actually quite simple to switch.

For example, my personal server was originally hosted on Amazon – they had offered me a year for free. It took me only a few minutes to switch it over to try Google, and then a month or two later, I easily switched to Heroku. (In this case, I had kept the same database and file servers.) In each case, my domain was also switched to the new provider, so any links I may have had would be unaffected by the move. Of course, it took some rudimentary technical knowledge to make these switches. But then again, these services are not aimed at consumers today – they are targeting developers who have the technical knowledge required. Even so, it only took minutes for me to switch web servers and I didn’t have to use any fancy software – it was all done via a point-and-click web interface. This is all the result of the more level playing field these companies face in the corporate cloud services market.

What’s more, this competition has made setting up a web server not much more complicated than setting up an account on Google or facebook. It’s all done via a web interface, and it only takes a few minutes. If anything, it has become slightly harder in the past 2-3 years to set up a freezr personal server on various services, because these services are focusing more and more on larger corporate customers, rather than amateur developers who want to set up their own servers. This is also reflected in the price tiers offered by such providers. For example, Red Hat OpenShift’s pricing jumps from free trial usage to more than $50 per month – a jump to levels of usage more appropriate for corporate servers than for personal servers and budgets. Clearly, if the personal server paradigm becomes popular, and many people require hosted personal servers, these same providers can easily tweak their offerings to make them more consumer-friendly, and offer more attractive prices. Meanwhile, even though these companies are not targeting the (as of yet non-existent) personal server market, viable hosting solutions can be had at less than $7 per month with Heroku for example, or even potentially for free at basic usage levels. (Google Cloud recently made changes to their offering that should make it quasi-free at low usage levels.)

What is amazing about competitive private markets is that they help to level the playing field – something which is markedly lacking in the technology world today. Companies like facebook and Amazon and Google have amazing technologies and amazing engineers developing their offerings. Wouldn’t it be nice if they competed to gain our loyalty without locking up our data?


Posted by salman on: 6/10/2019, 8:48:48 PM

tl;dr A theoretical framework for dis-aggregating the web services stack and separating front end apps from back-end servers (databases, files, permissioning).  (Part of a series of posts.)


developers, Developers, Developers, DEVELOPERS, DE-VE-LO-PERS !
                    S Ballmer

When I took programming back up a decade or so ago, I fell in love with JavaScript. The learning curve for JavaScript is such a joyful ride of wonder. We can just open a console in our browser and start writing code. We can create complex web pages on our local drive using a simple text editor. Step by step, we are empowered to create ever more sophisticated interactions. 

But then we get stuck when we want to store a piece of data in a database or save a file in a file-system, or if we want more than one person to use our new web page. Suddenly, we have to learn to set up web servers and database servers and file servers, configure all of these so that they work together, and administer them so they don’t break down.

It struck me that for almost all apps that I use or imagined I wanted, the bulk of the unique value proposition lay in the front-end interface. Besides that, the basic work of writing to and reading from databases and file systems and administering the services all seemed common across all apps. I don’t mean to say that all the unique back-end processing done by web-services companies is of zero incremental value to me. I am just suggesting that many apps might work quite well with only a restricted set of core generic back-end functions.

Let’s define the “front end” portion of an app as a package of html, css and javascript files that define the interactions with the user. Then, let us assume that these apps could call on a set of common commands to access file and database servers. I will use the freezr namespace to define those commands as follows:

  • freezr.db.write (data, options, callback): Write data to a database. Returns an object-id.
  • freezr.db.getById (data_object_id, options, callback): Retrieve data based on its id.
  • freezr.db.query (options, callback): Query the database.
  • freezr.db.upload (file, options, callback): Upload a file to the server.
  • freezr.utils.filePathFromId (fileId, options): Create a path to be able to retrieve the file.
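To make this concrete, here is a sketch of how a minimal note-taking front end might use these commands. The callback signatures (err, result) are assumptions based on the command descriptions, and the freezr object is stubbed with a tiny in-memory mock so the sketch is self-contained; on a real personal server, the platform would provide the real implementation:

```javascript
// Minimal in-memory mock of the freezr namespace, for illustration only.
const freezr = {
  _store: new Map(),
  _nextId: 1,
  db: {
    write(data, options, callback) {
      // Write data to a database; returns an object-id via the callback.
      const _id = String(freezr._nextId++);
      freezr._store.set(_id, Object.assign({}, data, { _id }));
      callback(null, { _id });
    },
    getById(dataObjectId, options, callback) {
      // Retrieve data based on its id.
      callback(null, freezr._store.get(dataObjectId) || null);
    },
    query(options, callback) {
      // Query the database: naive exact-match filter on options.q.
      const matches = [...freezr._store.values()].filter((record) =>
        Object.entries(options.q || {}).every(([k, v]) => record[k] === v)
      );
      callback(null, matches);
    },
  },
};

// The app saves a note, then reads it back by id:
freezr.db.write({ title: 'groceries', body: 'milk, eggs' }, {}, (err, saved) => {
  freezr.db.getById(saved._id, {}, (err2, note) => {
    console.log(note.title); // 'groceries'
  });
});
```

In a real deployment these calls would be asynchronous requests to the personal server; the mock keeps them synchronous for readability.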

Think of the many applications we use to store our personal content - be it a note taking app, a blogging or tweeting tool, a simple spreadsheet, a photo storage and sharing app, or any collaboration or messaging tool. I would posit that each of them could provide a great user-experience with just these few commands, and almost no other back-end functionality. Assuming there is a server that can handle the back end of these commands – that is, reading from and writing to databases and file systems - the apps themselves could all be reduced to a simple zip file containing html, css and JavaScript files running on the front end*.

I propose that personal servers have the capability to “install” such apps. In other words, you should be able to upload a zip file of html/css/js files and have your server serve those pages to you, so you can “use” the app. In such a model, the app becomes fully portable and autonomous. I can change the app if I want. I can move it to another server and a completely different back end environment. I can delete it. I can share it. And most importantly, I can do all this without needing the permission of the app developer or anyone else.

The portability of apps from one server to another, based on a common set of back-end commands, also changes the dynamics of app distribution. A developer can be confident that their app can be installed by anyone with a server that accepts the defined APIs. In the same way that developers know that their iOS app can be installed on all the millions of iPhones out there, they will also know that if they build their app using the standard APIs, it can be installed on all servers that accept such an API.

This common interface also empowers a much larger set of developers – be they “newbies” or front-end experts. It removes the intimidating barrier of setting up and administering servers, and gives developers the ability to create new apps and iterate on other apps. Today, motivated newbie developers can pick up JavaScript techniques by looking at the JavaScript code of web pages they like. Such a common system would allow a developer not only to learn from other web pages, but to replicate them in her own apps, and thus recreate similar apps with improved features, or with a slightly different set of design principles or functions aimed at a particular use case.

I would also suggest that what you lose in back-end sophistication by creating a common interface to the server, you can gain in making apps easier to write, in reducing barriers to app creation and distribution, in creating app portability, and in fostering a more dynamic environment of app iteration. This inevitably leads to greater creativity among makers of apps, which could kickstart a virtuous cycle of consumer adoption and new app development.

What’s more, besides giving us a greater variety of apps, this platform would free our data from the shackles of the web services model…


* Some exceptions and caveats to the no-back-end thesis:

  • Clearly, a permissioning system is necessary so that you can allow the people you choose to access your data, like a message or a photo. This is part of the core functionality of the back end. (See next post.)
  • As mentioned, back-end services can no doubt be of value – when facebook or Twitter use algorithms to show you relevant posts, or when Google Photos highlights the best photos in a series. It would be nice to have localized ML running on our data if we let it do so. This shall be dealt with in a future post on plug-in services. In the long term, it is about defining such standard services accessible to all apps, rather than yearning for proprietary backends inextricably tied to the front-end interface.
  • The trend towards frameworks such as React and Angular is not fully compatible with this vision. Of course, using them as front-end libraries is easier to envisage. Integrating them as back end services is technically feasible but would run counter to the philosophy of the service. (See post on Extensions.)

Posted by salman on: 6/10/2019, 8:46:42 PM

tl;dr A theoretical framework for dis-aggregating the web services stack and separating front end apps from back-end servers (databases, files, permissioning).  (Part of a series of posts.)

There are only two ways to create value: One is to bundle; the other is to unbundle. 
               Paraphrasing J Barksdale

The current web services paradigm relies on a vertically integrated and proprietary set of interfaces and technologies (even if it leverages open source components). For example, when you access facebook, facebook’s servers send you some files with largely proprietary protocols that define the interactions of the web page and communicate your data back to facebook. Naturally, facebook doesn’t want third parties to take over its core functionality, nor to freely communicate with its servers (except via well-defined third-party interfaces which can be turned off or changed at its own discretion). In return, companies like facebook assume responsibility for the security and scalability of their backend infrastructure - they store your data on their own servers and use their own proprietary software to analyse the data as they please, both to improve their services to you, and also to monetise your interactions with advertisements.

The personal server paradigm I am suggesting would dis-integrate (or unbundle) this vertically integrated stack. Instead of accessing, say, a note taking app on Evernote’s servers, Evernote would create an app which you could download and install on your own personal server, much like you could download and install an app on your own computer or on your phone. However, the app would store your notes, not on Evernote’s servers, but on your personal server.

Such an unbundling of web services has a number of advantages, the most fundamental of which is the degrees of freedom it creates by separating the user interface (or app) from the storage of data. For example, under such a schema, you are free to delete your data or turn off your server so no one can access it. More interestingly, you could also delete your note taking app without deleting your data (i.e. your notes); or you could install a new note taking app, created by another developer with a better interface, and grant the new app permission to access your old notes. Or you could even install an independent app that has access to your notes and analyses your note-taking habits. (I will discuss these data freedoms in more detail in a later post.)

Another advantage of disaggregation is that it removes barriers to entry, and allows more actors to compete more easily to provide better apps. Today, all app and web site developers have to grapple with data and web-site security, which they have to provide to their users. But it is hard to be both a security expert and a great app developer. And at the least, it takes quite some resources to manage security effectively. It is no wonder that all but the largest companies seem to experience data breaches. In a disaggregated stack, a developer who focuses on a front-end app need not worry about managing users and their security, nor about safeguarding users’ data. In this model, security is dealt with by another layer in the chain, so it is easier and less expensive for diverse app developers to introduce new apps (especially since incumbent apps don’t have a lock on your data). App developers can specialize in their area of expertise, unencumbered by adjacent horizontal layers of the value chain. And they can compete to provide the best service to us!

Of course, anytime you slice through a vertically integrated stack to create autonomous horizontal units, you give up (at least initially) on some of the advantages that integration brings. Experienced developers will probably shiver at the thought of not controlling the back-end functionalities of an app. However, precedent shows that creating simplicity, defining common interfaces between stack elements, and removing barriers to app-creation can also unleash much creativity and generate value propositions where none existed before.


Posted by salman on: 6/10/2019, 8:38:04 PM

tl;dr Blockchains are centralised in some fundamental way – the web is fundamentally decentralised.  (Part of a series of posts.)

For years my heart inquired of me 
      Where Jamshid’s sacred cup [≈ Holy Grail] might be,
And what was in its own possession
      It asked from strangers, constantly;
                Hafez, as translated by D Davis

There is a recurring story in Persian literature: how we search for some “ideal” in distant places - be it a Holy Grail or a divinity - only to realize that it lies within ourselves, and that we possess it already. In Attar’s Conference of the Birds, the birds of the world set out to find the legendary Simorgh to guide them. They pass through the seven valleys of Quest, Love, Knowledge, Detachment, Unity, Wonderment and Poverty / Annihilation, only to learn that the 30 birds that make it to the end are themselves the Simorgh - a play on words, as simorgh also means “30 birds”. What they had been seeking elsewhere, they possessed within themselves all along.

Sometimes, when I read about some of the efforts people are making in the blockchain world to seek a decentralized ideal, I am reminded of the Conference of the Birds, as I believe that the web, as it was originally conceived, is indeed already decentralized in principle – and in many ways, it is more decentralized than blockchains.

Of course, blockchains are an amazing innovation and one expects that they can play a significant role in a decentralized future. But they are not a decentralized remedy to all centralized systems. In fact, blockchains are only decentralized in certain ways, and they are hyper-centralised in others. Vitalik Buterin himself states that “Blockchains are politically decentralized (no one controls them) and architecturally decentralized (no infrastructural central point of failure) but they are logically centralized (there is one commonly agreed state and the system behaves like a single computer).” The key here is that by using blockchains, we are effectively sharing the same computer – so in that way, being part of a blockchain is being part of a hyper-centralized giant computer.

To use a simple example, let us imagine we are 5 people wanting to keep our precious, hand-written, leather-bound diaries safe. In a centralized world, we would all put our diary in one central safe, and give someone a key. The centralized solution places trust in one central key holder. The way we imagine decentralization should work is that we would each have a safe with each of our diaries in it. But that’s not how things actually work in the world of blockchains. With blockchains, we would each get a copy of each other’s diaries and we would each keep all of the other diaries in each of our safes. This makes blockchains very decentralized in some ways – there are five copies of the diaries spread out in the safes. But this would not be a good way to keep our personal diaries - we would be storing a copy of our private diaries with 4 other people, who could read it.

Admittedly, in the digital world, the diaries would be encrypted and unreadable by others. But given that previous blocks are always accessible – ie they are data-retentive - each of us would have all the time in the world to decrypt the other 4 people’s diaries. (One of the brilliant innovations of bitcoin was to put time limits on “mining”, thus limiting the amount of time each player has to solve a decryption puzzle and create the next block. But the time limit was designed for a specific purpose. It was not designed to keep previous blocks private – only to make them subsequently irrelevant. This works well if you are keeping transactions and public materials, but it doesn’t solve the diary problem above.)

Blockchain enthusiasts would also argue that none of the major blockchain initiatives are suggesting that personal private data be kept on blockchains in the way I have characterized above, and many are trying to solve the issues above. And that would also be fair. But the example above serves to make an important distinction. The “trust” and “decentralization” inherent in some aspects of blockchain technology cannot be applied blindly outside the context in which the blockchain itself uses them. I have heard too many blockchain enthusiasts describe blockchains by saying that they are like storing your google photos or facebook posts on a decentralized network. This is akin to saying you will store your diary in everybody else’s safe.

I am making this distinction only to try to resolve the above marketing obfuscation or simplification. The way blockchains work, if we put a piece of our data on them, (1) our data will remain there forever and (2) our data loses its autonomy, because it is part of the large single computer which we cannot control, and from which it can never be extracted. So blockchains violate critical data-freedom principles I had outlined previously. Data on blockchains is neither free, nor mobile, nor autonomous.

That doesn’t mean that blockchains cannot be used to create a market for data storage solutions for example, or to keep timestamped hashes of posts for verification, or that the technology won’t be adapted to address the diary problem in some new way. Each of these innovations may help to unlock a distributed storage market, but they would not be using those parts of the blockchain technology that have made it so special – the fact that it resolved “trust” and “decentralized control” by replicating copies of your data multiple times and keeping it forever under other people’s control.

So if we assume that data is not to be kept directly on a traditional blockchain, then we can agree that we would still need a place to store our private data – a place which would allow it to retain its freedom. And I suggest that, at the least, one viable solution is for our data to be safeguarded by the same web-based technologies that we already have, that we understand, and that have matured over the past years, using the same open source layers and open protocols that have enabled such amazing services to come into being over the past decades.

There may certainly be a role for blockchains within a newly decentralized ecosystem, but if we want to store our data and bestow upon it the freedom it deserves, we need not necessarily travel the seven valleys of new technology development, asking strangers for the Holy Grail of decentralization… because what we are seeking we already possess in our own web technology stacks. 



Posted by salman on: 6/9/2019, 2:20:10 PM

tl;dr Defining data freedom and the .json manifest that goes with it. (Part of a series of posts.)

Data is born free but everywhere it is in chains. 
            Paraphrasing JJ Rousseau

The prevalent web-services paradigm allows application creators to lock our data into their proprietary platforms. In contrast, personal servers can free this data by disaggregating apps from their underlying data. 

Data can indeed be free - not free as in “beer”, nor free as in “I’ll give you my data for free if you let me visit your web site”, but free as in free speech, free as in mobile, free as in autonomous and free as in accessible:

  • free speech: You are free to do what you like with your data (the overarching principle.)
  • mobile: Your data is fully portable (as are all apps), so you are free to easily export a copy in electronic format and to use it on another server or device. And you can choose to delete any and all parts of it as you please. In practice this means you should be able to get a copy of your data from your servers by clicking a couple of easily accessible buttons.
  • autonomous: Your data is neither dependent on the application that created it nor on the server environment in which it resides. So, there is no vendor lock-in at any level, nor any lock-in to any underlying operating system or other software infrastructure. You are free to allow other apps to use the same data and thus expand functionality by treating the data as an autonomous entity.
  • accessible: This does not mean that everyone is free to access your data, but that you are free to choose who can access your data and under what conditions and using which apps. In practice, this means that your server needs to help manage and control authorised access, and to get the requisite permissions from you to do so.

I suggest we think of data freedom in terms of these principles, and ensure that personal servers adhere to them.

Of course, autonomy does not imply that our data should be isolated in an inaccessibly private database. It is consistent with these principles to allow apps and their data to be accessed by other people and by other apps.

An app which adheres to these principles needs to have a common convention for telling the server the kinds of permissions it is seeking from users. And the personal server software needs to manage those permissions. So each app package (ie the zip file of html/css/js files discussed above) needs a configuration file (like a manifest) which lays out its data scheme and permission requests. When you install an app, your personal server asks you if you would like to grant these permissions (much like smart-phone apps do), and you can choose to do so, if you like.

Of course, apps wouldn’t need permission to be able to read and write data related to the app itself for any user of that app. In other words, if you install a game app on your personal server, it should be able to record your score to your personal database without asking for your permission. But if you want to see other people’s scores for the same app, they need to grant permissions for you to access their scores. Or if you install a leaderboard app that aggregates all the scores from different apps to show you how you are doing in all games, you would need to grant permission to the leaderboard app to access your score in all the other game apps on your server. If you want to publicize your score in a public leaderboard, you should also be able to make it public. And if your game app allows you to upload a video of you playing the game, you should be able to grant access to friends you have validated, so they can watch the video (or to make it public.)

A preliminary specification for the configuration file can be found here. Specifically, the “permissions” key can be used by apps to define what permissions the application is seeking. 

Here is an example of what the permissions requests might look like: 

"permissions": {
  "top_scores": {
    "description": "Player Top Scores", // description appears to users when they are asked to grant permissions
    "collection": "scores", // the database table you are giving access to
    "return_fields": ["score", "_creator", "_date_created"], // the fields that can be shared
    "type": "db_query", // the type of permission – in this case, the permission allows a collection to be queried under specific circumstances
    "sort_fields": {"score": -1}, "max_count": 1, // specific to the “db_query” type permissions
    "requestee_app": null, // apps can also ask permission to access other apps here – defaults to the same app
    "sharable_groups": ["public"] // defines who can access this
  },
  "selfie_share": {
    "description": "Share video of you playing game",
    "type": "object_delegate", // the type of permission – in this case, giving access to the video file
    "sharable_groups": ["logged_in"] // sharing with everyone logged in to your server
  }
}

Ultimately the goal is to give a maximum amount of functionality to the app, by defining a set of common interaction patterns with the back end. The configuration file is where each app can outline the back-end interactions it seeks. The file can also be used to provide optional meta-data about the app and the data structure it uses, and to define the css/js files associated with each html page. (More details here.)

Specifically, the Permission schema in the configuration file serves to safeguard the freedoms stated earlier – in particular, to ensure mobility and autonomy. It provides the basis for the mechanisms allowing an app developed by a new developer to replace an older app developed by someone else.

This sets the level playing field.


Posted by salman on: 6/1/2019, 4:25:19 PM

tl;dr This post is an introduction to the personal server paradigm, and the first of multiple posts related to it.

- 1980's - A (personal) computer on every desk
- 2000's - A (smart) phone in every hand.
- 2020's - A (smart personal) server for every soul?

Imagine a world where each of us has a personal server. Our server would host the apps we use; it would store all our personal data; and it would communicate with other servers in any way we ask it to.

The unfortunate (and somewhat accidental) arrangement in today’s prevalent web-services architecture is that the “apps” we use are hosted on servers which are controlled by other companies (such as Twitter or Facebook among countless others.) In a different paradigm of personal servers, the “app” I would use to tweet (for example) would be installed on my own server rather than on Twitter’s, and my “tweets” would sit primarily in my own personal database. 

It should be self-evident that such a personal server paradigm is more consistent with the original concept of a distributed World Wide Web. It also provides a better human-computer interaction model than the web services model prevalent today. By being masters of our data, the apps that use it, and the environment it lives in, we have the possibility of setting our data free – free to interact with other apps we choose and control, free to stay offline unbothered by data-mining services we don’t like, free to be transported to other environments and servers, and free from the shackles of third party servers we do not control.

For this new paradigm to be adopted, all such servers would need to be built on a common platform - a platform which is not dominated by any one company. Such a personal server should adopt the models used by the likes of Linux and node.js, rather than that of “Windows Media Server” for example, though it does need to be consumer friendly.

Amazingly enough, many of the underlying elements for such a platform are already in place and are commonly used: the open source server stack built on node.js, the prevalence and sophistication of front-end JavaScript, cloud-based services which make it almost consumer-friendly to create web servers, file servers and NoSQL databases with a common base of functionalities, and even consumer-oriented concepts such as ‘apps’ and ‘permissions’ which have been popularised by smart phones.

I posit that only two additional conventions would need to be adopted to coalesce these various components into a common platform for personal servers:

  1. A front end API – a set of simple commands that would allow a front end javascript program to access back end services.
  2. A common set of schemas for server URL paths and (JSON) keywords, so apps can declare their various characteristics such as their data models and the permissions they seek from users.
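As an illustrative sketch of the second convention, an app could declare its identity, data model and requested permissions in a JSON manifest. The "permissions" keys below echo the permission example discussed elsewhere in this series; the other field names ("app_name", "version", "collections") are assumptions for illustration rather than a finalized schema:

```json
{
  "app_name": "simple_notes",
  "version": "0.1",
  "collections": {
    "notes": { "fields": ["title", "body", "_date_created"] }
  },
  "permissions": {
    "share_notes": {
      "description": "Share individual notes with people you choose",
      "type": "object_delegate",
      "sharable_groups": ["logged_in"]
    }
  }
}
```

Because every server reads the same keys, any server that accepts the convention can install the app and present its permission requests to the user in a consistent way.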

In a series of posts, I will provide a high-level specification for the above elements, and also discuss the structural advantages that they might engender. Most importantly, this model gives primary control of data to each consumer while also freeing the data to create much more value for consumers. The model fully disaggregates front end development from back end storage and processing, creating better incentives for service providers of all sizes, as well as individual developers, to focus on specific parts of the value chain and offer innovative new technology solutions and business model choices.

Some four decades ago, we were tickled by the idea of having a personal computer on every desk. Two decades later, we realized that each of us would end up with a smartphone in our hands. Is it too preposterously ambitious to imagine a personal server at the service of each of us?

Next sections:

Posted by salman

tl;dr: We need to be careful of the faults of decentralised systems, yet reassured by the strength of the principles underlying them. (Part of a series of posts.)

Take up the Monopolist’s burden —
And reap his old reward:
The blame of those ye better,
The hate of those ye guard
Paraphrasing R Kipling

It can be instructive to compare Tim Berners-Lee to Mark Zuckerberg. For example, why aren’t there any media articles holding Tim responsible for all the web sites that carry misinformation and viruses? He did invent the web that propagates all these horrors, didn’t he? Similarly, why don’t we hold the ARPANET or the US military responsible for all the email scams we receive in our inboxes? And yet we hold Mark and his company responsible for all the terrible things taking place on Facebook. Isn’t he just providing a communication utility and platform, much like email and the web? Isn’t he just reaping the old reward of providing a great way for us to connect to each other using our real identities? Do we not remember the days before Facebook, when it was impossible to verify identities on social networks, when we could not easily find or connect to our old friends? Why should it matter that it is one company that has attracted so many users and presumably created value for them, rather than a decentralised system like the World Wide Web, built on a public communication protocol?

There are many ways to think about these questions, but I pose them here first to make the point that decentralised systems, too, can be plagued by a variety of problems – identity theft, ransomware and child pornography web sites are among the dark sides of the internet.

And in many cases, the problems plaguing open networks seem much harder to resolve than those plaguing centralised ones. If we think there is an issue with Facebook or Twitter, we know there is a company behind them that controls all the software running their web sites. We know it is technically possible for Twitter or Facebook to ban a “bad” actor from their sites if need be. So we can shout, write nasty articles, sue them, and ask Mark to “fix it, already”. But if we are outraged by a nasty web page or get caught by a phishing email, there is no one to shout at – no one person or company we can point to to solve the problem or to ban a web site. And the more decentralised and rigid a system is, the harder its problems will be to solve. This is something that any proponent of a decentralised system must be continuously wary of.

We should all be careful what we ask for.

Yet problems on decentralised networks can also get solved, even if there is no one person we can ask to solve them. Take spam. A few years ago, it was quite common to get emails from Nigerian princes, for example – a problem not totally dissimilar to the misinformation plaguing companies like Twitter and Facebook today. And in this particular case of email spam, the fact that these emails are rare today seems to indicate that the participants in the decentralised email protocol succeeded in solving the problem. Yet, as a very knowledgeable friend pointed out to me, the resolution of email spam cannot really be used as an argument in support of decentralised power structures in general – this particular problem was solved because a very few large companies dominate email services. So, as my friend noted, it was not the dispersion of the decentralised email protocol that drove the resolution, but the power of the large oligopolies dominating the service. Although this may be true, I would argue that the concentration of players in the market is not necessarily the critical indicator here. Rather, it is the underlying market structure they operate in, and the system of governance surrounding it. Even if it was a handful of trillion-dollar behemoths that solved the email spam problem, they were on a level playing field, competing to offer better services to their users and to solve such problems for them. Imagine an alternate world where Google had invented email and controlled 100% of all email traffic from the get-go. Could we then have expected Google to resolve the spam problem in a new or innovative way? Isn’t it more likely that they would be entrenched in the way they had always done things, constrained by the business models and methods that had allowed them to dominate 100% of email traffic, and thus blind to new ways of solving the spam problem?

Indeed, monopolies do stifle innovation.

But stifling innovation is not the only problem with monopolies – there are also the values, the ethics and the dynamics reflected in the underlying market structure and its system of governance. It is as much a philosophical question as an economic one.

As our lives are increasingly led online, our interactions and our data-trail are becoming part of our Being. Our Self is reflected in, and to some extent even defined by, its existence on the internet. So it becomes all the more important to think of the systems of governance we are creating through the lens of political theory.

The questions facing us now are not dissimilar to the existential dilemmas we faced in the mid-20th century, when we grappled with the egalitarian promise of centrally planned communist economies versus the seemingly unjust and certainly unruly and messy market economies of Western democracies. It was not just that centrally planned economies stifled innovation and were inefficient. It was also about the structure of the system we were striving for – the values it incorporated, the rights it bestowed on citizens and the freedoms it upheld.

We can also draw analogies to the early 20th century, when Europe was still a dominant Colonial power. At the time, a pro-Colonialist might have argued that things would be much “worse” under a “native” ruler, and pointed to the many good things Western Civilisation had taken up the burden of bringing to the colonies. Arguably, by a measure like GDP, that assertion may well have been correct. For the sake of argument, let us assume it was – that colonialism indeed led to higher GDP. We can even assume that by some measures, the Colonial leadership provided more effective management, and a superior legal system, for its colonised subjects. I would digress to note that an officer of the highest honesty and moral fibre within one world-view can be seen to commit heinous crimes from other perspectives. But even if we ignore this dissonance, even if we assume that the autocracy of Colonialism engendered an orderly system which created greater wealth for the colonised and a far better legal system, where incorruptible courts could punish and ban “bad” actors, we would still be wrong. We would still be overlooking the attributes of paramount importance to the colonised: their freedom and their autonomy.

So too, with data. 



Posted by salman on: 3/14/2019, 4:10:12 AM

tl;dr: It would be too easy to flood a personal server with too much functionality.


This is a placeholder for a post - part of a series:


Posted by salman on: 3/14/2019, 4:10:12 AM

tl;dr: A discussion of safety issues...

Szell: Is it safe?
Babe: Is what safe?
Szell: Is it safe?
Babe: Tell me what the "it" refers to.
Szell: Is it safe?
Babe: I don't know what you mean. I can't tell you something's safe or not, unless I know specifically what you're talking about.
Szell: Is it safe?
Babe: Look, I told you I can't... [Szell stabs the probe into the nerve] AAH! AAAH! Aah!
Szell: Is it safe?
Babe: No. It's not safe, it's... very dangerous, be careful.
Marathon Man (1976) (Slightly re-ordered)

This is a placeholder for a post - part of a series: 


Posted by salman on: 8/26/2018, 2:15:00 PM

When Cary Welch was a young man, he spent much time in the Islamic and Asian wings of museums. As he recounted many years later, this was not necessarily because he was more attracted to that type of art, but because fewer people visited those sections, and so he could immerse himself in the beauty of the art, unencumbered by the hordes who hurriedly bustled in and out of the well-trodden sections of the museum. He could sit there at peace, contemplating these masterpieces, sometimes soaking in the intricacies of a minor detail, and at times, pondering the spirit of the composition as a whole. 

Those of us who took Cary’s class on early Safavid painting in the early 1990s remember most not the slide shows of photographs he had painstakingly taken of so many works of art in so many places, nor the names of any one painter or painting. What left the strongest mark was being able to stroll after class into a room that housed part of his private collection, and follow his cue to spend time gazing at these original masterpieces – learning to pause, learning to contemplate and to appreciate, learning to see.

By that time, Cary had amassed an amazing collection of Persian and Indian art, parts of which he donated to Harvard (where he was the curator emeritus of Islamic and Later Indian Art). But when he was buying these pieces, there was little scholarship on them. His purchasing guide was his eye for beauty. Of course, he studied these works and, among other things, identified the various painters by their styles – few of those artists signed their works – thus laying the foundations for this field of study. But he would understand much more from these paintings just by looking at them. At one point, he deduced the nature of a political turmoil in Safavid history by peering into the soul of a painter whose style had shifted – something historians would only ascertain later. He also collaborated with more traditional academic scholars to pioneer the field of Islamic and Indian art. Notably, with Princeton’s Martin Dickson, he authored the “Houghton Shahnameh,” a magnificent study of the folios of masterpieces decorating a 16th-century manuscript of the 10th-century Persian epic.

Cary Welch passed away 10 years ago, at the age of 80, while travelling in Northern Japan to discover the beauty of its picturesque landscapes. 

So today, as we are bombarded by news both fake and real, where nary a moment passes without an app notification vying for our attention and breaking our concentration, as we hustle towards the next big thing as quickly as possible, it would not be time misspent to take a moment and remember Stuart Cary Welch as a young man, sitting alone in the Met, losing himself for a few hours in the intricacies of a 16th-century Persian painter’s brushstrokes. We might want to pay tribute to a person who took the path less travelled by, not for the destination to which it might lead, nor the difference it might make, but for the serenity of the journey itself. We might all wonder whether sometimes, rather than move fast and break things, we might want to stroll slowly and appreciate them.

Folio from the Shah Tahmasp Shahnameh, attributed to Aqa Mirak, circa 1525-35, previously owned by Stuart Cary Welch. (In 2011, 3 years after Cary’s passing, the painting was sold for $12M.)

A version of this post originally appeared in a reunion edition of the Harvard Art Journal last May.
